Hive Documentation
Complete reference for Hive — the native Rust desktop AI platform.
Hive is a native Rust desktop AI platform built on GPUI — no Electron, no web wrappers. It unifies a development environment, a personal assistant framework, and a security-first architecture into a single application. Instead of one chatbot, Hive runs a multi-agent swarm that can plan, build, test, and orchestrate workflows while learning your preferences over time — all while ensuring no secret or PII ever leaves your machine without approval.
What makes Hive different: it learns from every interaction (locally, privately), it remembers context across conversations via LanceDB vector memory, it detects its own knowledge gaps and autonomously researches and acquires new skills, and it federates across instances for distributed swarm execution.
The Three Pillars
Development Excellence
- Multi-agent swarm (Queen + teams)
- 15 AI providers with dynamic model fetching
- LanceDB vector memory (chunks + durable memories)
- Embedding providers (OpenAI + Ollama)
- Background code indexing with change detection
- Git worktree isolation per team
- Full Git Ops (commits, PRs, branches, gitflow, LFS)
- Context engine (TF-IDF scoring + RAG + vector search)
- Cost tracking & budget enforcement
- Code review & testing automation
- Skills Marketplace (15 built-in + user-created skills)
- Universal Skills — cross-model TOML skill sharing
- Autonomous skill acquisition
- Automation workflows (cron, event, webhook triggers)
- Docker sandbox with real CLI integration
- MCP client + server with UI automation
- Remote control (WebSocket relay, QR pairing, web UI)
- Terminal CLI client (chat, sync, config, models)
- P2P federation across instances
Assistant Excellence
- Email triage & AI-powered drafting
- Calendar integration & daily briefings
- Reminders (time, recurring cron, event-triggered)
- Approval workflows with audit trails
- Document generation (7 formats)
- Smart home control
- Voice assistant (wake word + intent)
Safety Excellence
- PII detection (11+ types)
- Secrets scanning with risk levels
- Vulnerability assessment
- SecurityGateway command filtering
- Memory flush — extract insights before context compaction
- Encrypted storage (AES-256-GCM)
- Provider trust-based access control
- Local-first — no telemetry
Multi-Agent System
Hive does not use a single AI agent. It uses a hierarchical swarm modeled on a beehive:
                +-------------+
                |    QUEEN    |  Meta-coordinator
                | (Planning)  |  Goal decomposition
                +------+------+  Budget enforcement
                       |         Cross-team synthesis
          +------------+------------+
          |            |            |
    +-----v----+  +----v-----+  +----v-----+
    |  TEAM 1  |  |  TEAM 2  |  |  TEAM 3  |
    | HiveMind |  |Coordinator|  |SingleShot|
    +----+-----+  +----+-----+  +----------+
         |             |
   +-----+-----+    +--+---+
   |     |     |    |      |
  Arch Code  Rev   Inv   Impl
Queen
Decomposes high-level goals into team objectives with dependency ordering, dispatches teams with the appropriate orchestration mode, enforces budget and time limits, shares cross-team insights, synthesizes results, and records learnings to collective memory.
HiveMind Teams
Use specialized agents — Architect, Coder, Reviewer, Tester, Security — that reach consensus through structured debate.
Coordinator Teams
Decompose work into dependency-ordered tasks (investigate, implement, verify) with persona-specific prompts.
Every team gets its own git worktree (swarm/{run_id}/{team_id}) for conflict-free parallel execution, merging back on completion.
AI Providers
15 providers with dynamic model fetching, automatic complexity-based routing, and fallback:
Cloud
- Anthropic (Claude Opus 4.6/4.5/4.1/4, Sonnet 4.5/4)
- OpenAI (GPT-4 Turbo, GPT-4o, GPT-4o mini)
- Google (Gemini 3.1 Pro/Flash + Search Grounding & Code Execution)
- OpenRouter (100+ models)
- Groq (fast inference)
- xAI (Grok)
- Venice AI
- Mistral
- Doubao
- HuggingFace
- HiveGateway (cloud AI proxy)
Local
- Ollama
- LM Studio
- Generic OpenAI-compatible
- LiteLLM proxy
Features: dynamic model registry fetching from provider APIs, complexity classification, 14-entry fallback chain, per-model cost tracking, streaming support, budget enforcement.
Dynamic Model Fetching
All providers now fetch their available models dynamically from provider APIs — no hardcoded model lists. Google Gemini models are automatically injected with Search Grounding and Code Execution tools. Anthropic requests include beta agentic headers for extended capabilities.
Streaming
All AI responses stream token-by-token through the UI. Streaming is implemented end-to-end: SSE parsing at the provider layer, async channel transport, and incremental UI rendering. Shell output streams in real time through async mpsc channels. WebSocket-based P2P transport supports bidirectional streaming between federated instances.
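The channel-based transport described above can be sketched with a plain std channel — a simplified, synchronous stand-in for Hive's async mpsc pipeline (the function name and token source are illustrative, not Hive's API):

```rust
use std::sync::mpsc;
use std::thread;

// Producer/consumer token streaming over a channel. The producer thread
// stands in for the provider layer emitting parsed SSE tokens; the consumer
// loop stands in for incremental UI rendering.
fn stream_tokens(tokens: Vec<String>) -> String {
    let (tx, rx) = mpsc::channel::<String>();

    let producer = thread::spawn(move || {
        for t in tokens {
            tx.send(t).unwrap();
        }
        // tx is dropped here, closing the channel and signalling end-of-stream.
    });

    // The receiver iterates until the channel closes, appending each token
    // as it arrives — the same incremental shape the UI rendering uses.
    let mut rendered = String::new();
    for token in rx {
        rendered.push_str(&token);
    }
    producer.join().unwrap();
    rendered
}

fn main() {
    let tokens = vec!["Hello".to_string(), ", ".to_string(), "world".to_string(), "!".to_string()];
    println!("{}", stream_tokens(tokens)); // prints "Hello, world!"
}
```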
Vector Memory & RAG
Hive maintains durable, searchable memory across conversations using LanceDB — a columnar vector database that runs embedded (no external server).
User query
     |
     v
EmbeddingProvider ─── OpenAI text-embedding-3-small (1536d)
     |                or Ollama nomic-embed-text (768d)
     |                with automatic fallback
     v
┌─────────────────────────────────────────────┐
│ LanceDB MemoryStore │
│ │
│ chunks table memories table │
│ ┌──────────────┐ ┌──────────────────┐ │
│ │ source_file │ │ content │ │
│ │ content │ │ category │ │
│ │ embedding │ │ importance │ │
│ │ start_line │ │ embedding │ │
│ │ end_line │ │ conversation_id │ │
│ └──────────────┘ │ decay_exempt │ │
│ └──────────────────┘ │
└─────────────────────────────────────────────┘
          |                           |
          v                           v
  Code search results          Recalled memories
          |                           |
          v                           v
  # Retrieved Context          # Recalled Memories
  (injected as system msg)     (dedicated system msg)
How It Works
| Component | Purpose |
|---|---|
| EmbeddingProvider | Async trait with OpenAI, Ollama, and Mock implementations. Auto-detects available provider at startup. |
| MemoryStore | LanceDB wrapper managing two tables — chunks (indexed code) and memories (durable insights). Vector similarity search via nearest-neighbor queries. |
| HiveMemory | Unified API combining MemoryStore + embeddings. 50-line chunks with 10-line overlap for code indexing. Combined query returns both code chunks and recalled memories. |
| BackgroundIndexer | Walks project directories recursively, skips binary/hidden/vendor dirs, tracks file hashes for incremental re-indexing. Triggered on workspace open. |
| MemoryExtractor | Pre-compaction flush: builds an LLM prompt to extract key insights from a conversation before context window is compacted. Parses JSON response, filters by importance threshold. |
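The 50-line/10-line chunking scheme used for code indexing can be sketched as follows — a minimal reconstruction from the description above; the function name and return type are ours, not Hive's API:

```rust
// Splits a file's lines into fixed-size chunks with overlap, so that code
// spanning a chunk boundary still appears whole in at least one chunk.
// Returns half-open [start, end) line spans.
fn chunk_lines(lines: &[&str], chunk: usize, overlap: usize) -> Vec<(usize, usize)> {
    assert!(overlap < chunk, "overlap must be smaller than chunk size");
    let mut spans = Vec::new();
    let mut start = 0;
    while start < lines.len() {
        let end = (start + chunk).min(lines.len());
        spans.push((start, end));
        if end == lines.len() {
            break;
        }
        // The next chunk re-reads the last `overlap` lines of this one.
        start = end - overlap;
    }
    spans
}

fn main() {
    let lines: Vec<&str> = std::iter::repeat("line").take(120).collect();
    // With 50-line chunks and 10-line overlap, 120 lines yield three spans:
    // [0, 50), [40, 90), [80, 120)
    println!("{:?}", chunk_lines(&lines, 50, 10));
}
```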
Memory Categories
Memories are categorized for relevance scoring:
- UserPreference — Coding style, tooling choices, formatting preferences
- CodePattern — Recurring patterns and architectural decisions
- TaskProgress — What was done, what's pending
- Decision — Technical decisions and their rationale
- General — Catch-all for other durable insights
Injection Flow
On every message send: (1) Code chunks from the indexed workspace are injected into # Retrieved Context (curated by ContextEngine). (2) Recalled memories from previous conversations are injected as a separate # Recalled Memories system message with importance scores and categories. (3) Both are available to the AI alongside the conversation history.
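As a rough sketch of the injection step — the message format, struct, and function names below are illustrative, not Hive's actual output:

```rust
// Retrieved chunks and recalled memories become two separate system
// messages that precede the conversation history.
struct Memory {
    content: String,
    importance: f64,
    category: &'static str,
}

fn build_system_messages(chunks: &[String], memories: &[Memory]) -> Vec<String> {
    let mut msgs = Vec::new();
    if !chunks.is_empty() {
        // Curated code context, one block per chunk.
        msgs.push(format!("# Retrieved Context\n{}", chunks.join("\n---\n")));
    }
    if !memories.is_empty() {
        // Each memory carries its category and importance score.
        let body: Vec<String> = memories
            .iter()
            .map(|m| format!("- [{} | importance {:.2}] {}", m.category, m.importance, m.content))
            .collect();
        msgs.push(format!("# Recalled Memories\n{}", body.join("\n")));
    }
    msgs
}

fn main() {
    let chunks = vec!["fn handle_request() { /* ... */ }".to_string()];
    let memories = vec![Memory {
        content: "User prefers 4-space indentation".into(),
        importance: 0.8,
        category: "UserPreference",
    }];
    for m in build_system_messages(&chunks, &memories) {
        println!("{m}\n");
    }
}
```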
Autonomous Skill Acquisition
Hive doesn't just execute what it already knows — it recognizes what it doesn't know and teaches itself. This is the closed-loop system that lets Hive grow its own capabilities in real time:
User request
|
v
Competence Detection ─── "I know this" ───> Normal execution
|
"I don't know this"
|
v
Search Skills / Sources ─── Found sufficient skill? ───> Install & use
|
Not found (or insufficient)
|
v
Knowledge Acquisition ───> Fetch docs, parse, synthesize
|
v
Skill Authoring Pipeline ───> Generate, security-scan, test, install
|
v
New skill available for future requests
Competence Detection
The CompetenceDetector scores confidence on every incoming request using a weighted formula across four signals: skill match (30%), pattern match (20%), memory match (15%), and AI assessment (35%). When confidence drops below the learning threshold (default 0.4), the system identifies competence gaps and triggers the acquisition pipeline automatically.
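A minimal sketch of the weighted formula, using the documented weights and default threshold; the struct and function names are illustrative:

```rust
// The four signals scored on each incoming request, each in [0.0, 1.0].
struct Signals {
    skill_match: f64,   // weight 30%
    pattern_match: f64, // weight 20%
    memory_match: f64,  // weight 15%
    ai_assessment: f64, // weight 35%
}

// Default learning threshold from the docs.
const LEARNING_THRESHOLD: f64 = 0.4;

fn confidence(s: &Signals) -> f64 {
    0.30 * s.skill_match + 0.20 * s.pattern_match + 0.15 * s.memory_match + 0.35 * s.ai_assessment
}

// Below the threshold, the acquisition pipeline is triggered.
fn needs_acquisition(s: &Signals) -> bool {
    confidence(s) < LEARNING_THRESHOLD
}

fn main() {
    let weak = Signals { skill_match: 0.1, pattern_match: 0.0, memory_match: 0.2, ai_assessment: 0.3 };
    // 0.30*0.1 + 0.20*0.0 + 0.15*0.2 + 0.35*0.3 = 0.165 < 0.4 -> acquire
    println!("needs acquisition: {}", needs_acquisition(&weak));
}
```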
Knowledge Acquisition
The KnowledgeAcquisitionAgent autonomously identifies documentation URLs, fetches pages via HTTPS with domain allowlisting and private-IP blocking, parses HTML to clean text, caches locally with SHA-256 hashing and 7-day TTL, synthesizes knowledge via AI, and injects results into the ContextEngine. Security: HTTPS-only, 23+ allowlisted documentation domains, private IP rejection, content scanned for injection before storage.
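The fetch policy can be sketched as a URL check — a simplified illustration with example allowlist entries and a crude private-IP screen, not Hive's actual implementation:

```rust
// HTTPS-only, domain-allowlisted, private-IP-rejecting URL gate.
// The allowlist entries in main() are examples, not Hive's full list.
fn allowed_to_fetch(url: &str, allowlist: &[&str]) -> bool {
    // Reject anything that isn't HTTPS outright.
    let Some(rest) = url.strip_prefix("https://") else { return false };
    let host = rest.split('/').next().unwrap_or("");
    // Crude private-IP screen; a real implementation would parse the address.
    if host.starts_with("10.") || host.starts_with("192.168.") || host.starts_with("127.") {
        return false;
    }
    // Exact match or subdomain of an allowlisted documentation domain.
    allowlist.iter().any(|d| host == *d || host.ends_with(&format!(".{d}")))
}

fn main() {
    let allow = ["docs.rs", "doc.rust-lang.org"];
    println!("{}", allowed_to_fetch("https://docs.rs/tokio", &allow));    // true
    println!("{}", allowed_to_fetch("http://docs.rs/tokio", &allow));     // false: not HTTPS
    println!("{}", allowed_to_fetch("https://192.168.1.5/docs", &allow)); // false: private IP
}
```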
Skill Authoring Pipeline
The pipeline runs when a competence gap is detected: 1) Search existing skills first — each candidate is AI-scored for sufficiency (0-10); skills scoring >= 7 are installed directly and the pipeline stops. 2) Otherwise, research the domain via KnowledgeAcquisitionAgent. 3) AI generates a skill definition. 4) Security scan (the same 6-category injection scan applied to community skills). 5) Test validation. 6) Install with the /hive- trigger prefix, disabled by default.
File-Based User Skills
Skills are stored as TOML files with capability tags in ~/.hive/skills/. The SkillsRegistry provides CRUD operations (create, update, delete, toggle) with integrity hashing on every write. Skills are dispatched via /command alongside built-in skills.
[skill]
name = "my-review-skill"
description = "Reviews code for common bugs"
category = "CodeQuality"
enabled = true
[requirements]
capabilities = ["tool_use"]
[prompt]
template = "You are a code review assistant. Check for null pointer dereferences, off-by-one errors, and resource leaks."
[tools]
required = ["read_file"]
Personal Assistant
The assistant uses the same AI infrastructure as the development platform — same model routing, same security scanning, same learning loop.
| Capability | Details |
|---|---|
| Email | Gmail and Outlook inbox polling via real REST APIs. Email digest generation, AI-powered composition and reply drafting with shield-scanned outbound content. |
| Calendar | Google Calendar and Outlook event fetching, daily briefing generation, conflict detection and scheduling logic. |
| Reminders | Time-based, recurring (cron), and event-triggered. Snooze/dismiss. Project-scoped. Native OS notifications. SQLite persistence. |
| Approvals | Multi-level workflows (Low / Medium / High / Critical). Submit, approve, reject with severity tracking. |
| Documents | Generate CSV, DOCX, XLSX, HTML, Markdown, PDF, and PPTX from templates or AI. |
| Smart Home | Philips Hue control — lighting scenes, routines, individual light states. |
| Plugins | AssistantPlugin trait for community extensibility. |
| Scheduler | Background task scheduler with a 60-second tick driver on a native OS thread. Handles automated reminders, recurring tasks, and cron-based scheduling. |
Security & Privacy
Security is the foundation, not a feature bolted on. Every outgoing message is scanned. Every command is validated.
HiveShield — 4 Layers of Protection
| Layer | What It Does |
|---|---|
| PII Detection | 11+ types (email, phone, SSN, credit card, IP, name, address, DOB, passport, driver's license, bank account). Cloaking modes: Placeholder, Hash, Redact. |
| Secrets Scanning | API keys, tokens, passwords, private keys. Risk levels: Critical, High, Medium, Low. |
| Vulnerability Assessment | Prompt injection detection, jailbreak attempts, unsafe code patterns, threat scoring. |
| Access Control | Policy-based data classification. Provider trust levels: Local, Trusted, Standard, Untrusted. |
SecurityGateway
The SecurityGateway validates every command execution path against 14 dangerous command patterns, 3 SQL injection patterns, and risky constructs (eval, command substitution, backticks). It enforces HTTPS for all URLs, blocks private IP ranges, maintains a domain allowlist (GitHub, npm, crates.io, etc.), and prevents access to sensitive paths (.ssh, .aws, .gnupg, /etc/shadow, /etc/passwd).
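A toy version of pattern-based command filtering in the spirit of the gateway; the patterns shown are a small illustrative subset, not Hive's actual 14-pattern list:

```rust
// Substring-based screens for obviously destructive commands and risky
// shell constructs. Returns the first matching pattern, if any.
const DANGEROUS_COMMANDS: &[&str] = &["rm -rf /", "mkfs", "dd if=", "> /dev/sda"];
const RISKY_CONSTRUCTS: &[&str] = &["$(", "`", "eval "];

fn first_match(cmd: &str, patterns: &[&'static str]) -> Option<&'static str> {
    patterns.iter().copied().find(|p| cmd.contains(*p))
}

fn is_blocked(cmd: &str) -> Option<&'static str> {
    // Dangerous commands take precedence over merely risky constructs.
    first_match(cmd, DANGEROUS_COMMANDS).or_else(|| first_match(cmd, RISKY_CONSTRUCTS))
}

fn main() {
    println!("{:?}", is_blocked("cargo build --release")); // None: allowed
    println!("{:?}", is_blocked("echo `whoami`"));         // blocked: command substitution
}
```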
Guardian
AI output safety scanning — real-time content analysis across 6 categories (PII leakage, prompt injection, harmful content, bias detection, hallucination risk, and compliance violations) with configurable thresholds and blocking modes.
Local-First
All data in ~/.hive/ — config, conversations, learning data, collective memory, kanban boards, vector memory. Encrypted key storage (AES-256-GCM + Argon2id key derivation). No telemetry. No analytics. No cloud dependency. Cloud providers used only for AI inference when you choose cloud models — and even then, HiveShield scans every request.
Self-Improvement Engine
Hive gets smarter every time you use it. Entirely local. No data leaves your machine.
| System | Function |
|---|---|
| Outcome Tracker | Quality scores per model and task type. Edit distance and follow-up penalties. |
| Routing Learner | EMA analysis adjusts model tier selection. Wired into ModelRouter via TierAdjuster. |
| Preference Model | Bayesian confidence tracking. Learns tone, detail level, formatting from observation. |
| Prompt Evolver | Versioned prompts per persona. Quality-gated refinements with rollback support. |
| Pattern Library | Extracts code patterns from accepted responses (6 languages: Rust, Python, JS/TS, Go, Java/Kotlin, C/C++). |
| Self-Evaluator | Comprehensive report every 200 interactions. Trend analysis, misroute rate, cost-per-quality-point. |
All learning data stored locally in SQLite (~/.hive/learning.db). Every preference is transparent, reviewable, and deletable.
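The EMA analysis in the Routing Learner can be sketched as follows — the smoothing factor and names are assumptions for illustration; Hive's actual parameters aren't documented here:

```rust
// Exponential moving average of observed quality for one routing tier.
struct TierStats {
    ema_quality: f64,
    alpha: f64, // smoothing factor: higher = reacts faster to recent outcomes
}

impl TierStats {
    // Standard EMA update: new value blends the latest observation
    // with the running average.
    fn observe(&mut self, quality: f64) {
        self.ema_quality = self.alpha * quality + (1.0 - self.alpha) * self.ema_quality;
    }
}

fn main() {
    let mut stats = TierStats { ema_quality: 0.5, alpha: 0.2 };
    for q in [0.9, 0.8, 0.95] {
        stats.observe(q);
    }
    // The EMA drifts toward recent high-quality outcomes, which a
    // tier adjuster could use to prefer this model tier.
    println!("{:.3}", stats.ema_quality);
}
```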
Automation & Skills
| Feature | Details |
|---|---|
| Automation Workflows | Multi-step workflows with triggers (manual, cron schedule, event, webhook) and 6 action types. YAML-based definitions. Visual drag-and-drop workflow builder in the UI. |
| Skills Marketplace | Browse, install, remove, and toggle skills from 5 sources (ClawdHub, Anthropic, OpenAI, Google, Community). Create custom skills. Add remote skill sources. 15 built-in skills. Security scanning on install. |
| User Skills | Universal Skills — cross-model TOML skill definitions in ~/.hive/skills/. Capability tags adapt execution per model. Full CRUD with SHA-256 integrity. Dispatched via /command in chat. |
| Autonomous Skill Creation | When Hive encounters an unfamiliar domain, it searches existing skill sources first, then researches documentation and authors a new skill if nothing sufficient exists. |
| Personas | Named agent personalities with custom system prompts, prompt overrides per task type, and configurable model preferences. |
| Auto-Commit | Watches for staged changes and generates AI-powered commit messages. |
| Daily Standups | Automated agent activity summaries across all teams and workflows. |
| Voice Assistant | Wake-word detection, natural-language voice commands, intent recognition, TTS auto-speak for AI responses, and state-aware responses. |
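The table above mentions YAML-based workflow definitions with cron, event, and webhook triggers. As a purely hypothetical illustration of what such a definition could look like — this schema is assumed, not Hive's documented format:

```yaml
# Hypothetical workflow definition (schema illustrative, not Hive's actual format)
name: nightly-test-run
trigger:
  cron: "0 2 * * *"        # every night at 02:00
steps:
  - action: shell
    command: cargo test --workspace
  - action: notify
    message: "Nightly tests finished"
```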
Terminal & Execution
| Feature | Details |
|---|---|
| Shell Execution | Run commands with configurable timeout, async streaming output capture, working directory management, and exit code tracking. |
| Docker Sandbox | Full container lifecycle: create, start, stop, exec, pause, unpause, remove. Real Docker CLI integration with simulation fallback. |
| Browser Automation | Chrome DevTools Protocol over WebSocket: navigation, screenshots, JavaScript evaluation, DOM manipulation. |
| MCP Server | Full Model Context Protocol integration with enhanced tool use. Enables structured tool calling across all major providers. |
| UI Automation | Cross-platform screen interaction via enigo — mouse, keyboard, and screen control for automated workflows. |
| CLI Service | Built-in commands (/doctor, /clear, etc.) and system health checks. |
| Local AI Detection | Auto-discovers Ollama, LM Studio, and llama.cpp running on localhost. |
Remote Control
Hive can be controlled from any device on your network — phone, tablet, or another computer — via a built-in web server with real-time streaming.
| Feature | Details |
|---|---|
| HiveDaemon | Background service holding canonical session state (active panel, conversation, agent runs). Broadcasts events to all connected clients. |
| Web Server | Axum-based HTTP server with embedded static assets. REST API endpoints: /api/state, /api/chat, /api/panels, /api/agents. |
| WebSocket Streaming | Real-time bidirectional communication for chat streaming, agent status, and panel updates. |
| QR Pairing | Secure device pairing via QR code using X25519 elliptic-curve key exchange + AES-GCM encryption. No passwords needed. |
| Session Journal | JSONL-based persistence of daemon events for crash recovery and audit. |
| Event Types | SendMessage, SwitchPanel, StartAgentTask, CancelAgentTask, StreamChunk, StreamComplete, AgentStatus, StateSnapshot, PanelData. |
Terminal Client (CLI)
A full-featured terminal client for interacting with Hive without the GUI:
hive chat # Interactive chat TUI with streaming
hive chat --model claude-3 # Chat with a specific model
hive models # List available AI models
hive status # Show account tier, usage, sync status
hive sync push # Push data to cloud
hive sync pull # Pull data from cloud
hive config # View all config
hive config key value # Set a config value
hive login # Authenticate with Hive Cloud
hive remote                 # Show remote connection status
Built with Ratatui for a polished terminal UI with keyboard navigation, scrolling, and real-time streaming display.
Hive Cloud
Optional cloud services for users who want cross-device sync, cloud AI routing, and team features:
| Feature | Details |
|---|---|
| HiveGateway | Cloud AI proxy — routes requests through Hive's infrastructure for users without their own API keys. Configurable in Settings. |
| Cloud Sync | Push/pull config, conversations, and learning data across devices via encrypted blob storage. |
| Admin TUI | Terminal dashboard (hive-admin) with 6 tabs: Dashboard, Users, Gateway, Relay, Sync, Teams. Real-time refresh. |
| Relay | Event relay service for remote control across networks (not just LAN). |
Cloud features are entirely optional. Hive works fully offline with local models.
P2P Federation
Hive instances can discover and communicate with each other over the network, enabling distributed swarm execution and shared learning.
| Feature | Details |
|---|---|
| Peer Discovery | UDP broadcast for automatic LAN discovery, plus manual bootstrap peers. |
| WebSocket Transport | Bidirectional P2P connections with split-sink/stream architecture. |
| Typed Protocol | 12 built-in message kinds (Hello, Welcome, Heartbeat, TaskRequest, TaskResult, AgentRelay, ChannelSync, FleetLearn, StateSync, etc.) plus extensible custom types. |
| Channel Sync | Synchronize agent channel messages across federated instances. |
| Fleet Learning | Share learning outcomes across a distributed fleet of nodes. |
| Peer Registry | Persistent tracking of known peers with connection state management. |
Integrations
All integrations make real API calls — no stubs or simulated backends.
| Platform | Services |
|---|---|
| Google | Gmail (REST API), Calendar, Contacts, Drive, Docs, Sheets, Tasks |
| Microsoft | Outlook Email (Graph v1.0), Outlook Calendar |
| Messaging | Slack (Web API), Discord, Teams, Telegram, Matrix, WebChat, WhatsApp (Business Cloud API), Signal, Google Chat, iMessage (macOS) |
| Git Hosting | GitHub (REST API), GitLab (REST API), Bitbucket (REST API v2.0) |
| Cloud Platforms | AWS (EC2, S3, Lambda, CloudWatch), Azure (VMs, Blob, Functions, Monitor), GCP (Compute, Storage, Functions, Logging) |
| Cloud Services | Cloudflare, Vercel, Supabase |
| Databases | PostgreSQL, MySQL, SQLite — query execution, schema introspection, connection pooling |
| DevOps | Docker (full container lifecycle), Kubernetes (pods, deployments, services, logs) |
| Knowledge | Notion (pages, databases, blocks), Obsidian (vault management, frontmatter) |
| Project Management | Jira (issues, projects, transitions), Linear (issues, teams, cycles), Asana (tasks, projects, sections) |
| Smart Home | Philips Hue |
| Voice | ClawdTalk (voice-over-phone via Telnyx) |
| Protocol | MCP client + server, OAuth2 (PKCE), Webhooks, P2P federation |
Blockchain / Web3
| Chain | Features |
|---|---|
| EVM (Ethereum, Polygon, Arbitrum, BSC, Avalanche, Optimism, Base) | Wallet management, real JSON-RPC, per-chain RPC configuration, ERC-20 token deployment with cost estimation. |
| Solana | Wallet management, real JSON-RPC, SPL token deployment with rent cost estimation. |
| Security | Encrypted private key storage (AES-256-GCM), no keys ever sent to AI providers. |
Persistence & Data Storage
All state persists between sessions. Nothing is lost on restart.
| Data | Storage | Location |
|---|---|---|
| Conversations | SQLite + JSON files | ~/.hive/memory.db + conversations/ |
| Messages | SQLite | ~/.hive/memory.db |
| Conversation search | SQLite FTS5 | ~/.hive/memory.db (Porter stemming) |
| Cost records | SQLite | ~/.hive/memory.db |
| Application logs | SQLite | ~/.hive/memory.db |
| Collective memory | SQLite (WAL mode) | ~/.hive/memory.db |
| Learning data | SQLite | ~/.hive/learning.db |
| Kanban boards | JSON | ~/.hive/kanban.json |
| Config & API keys | JSON + encrypted vault | ~/.hive/config.json |
| Vector memory | LanceDB | ~/.hive/memory.lance |
| Session state | JSON | ~/.hive/session.json |
| Session journal | JSONL | ~/.hive/session_journal.jsonl |
| Knowledge cache | HTML/text files | ~/.hive/knowledge/ |
| Workflows | YAML definitions | ~/.hive/workflows/ |
| Skills | TOML with capability tags | ~/.hive/skills/{name}.toml |
On startup, Hive automatically backfills JSON-only conversations into SQLite and builds FTS5 search indexes. Path traversal protection on all file operations. SQLite databases use WAL mode with NORMAL synchronous and foreign key enforcement.
Architecture — 19-Crate Workspace
hive/crates/
├── hive_app Binary entry point — window, tray, build.rs (winres)
├── hive_ui Workspace shell, chat service, learning bridge, title/status bars
├── hive_ui_core Theme, actions, globals, sidebar, welcome screen
├── hive_ui_panels All panel implementations (23 panels)
├── hive_core Config, SecurityGateway, persistence (SQLite), Kanban, channels, scheduling
├── hive_ai 15 AI providers, capability-aware router, complexity classifier, context engine,
│ RAG, embeddings (OpenAI + Ollama), LanceDB memory, background indexer
├── hive_agents Queen, HiveMind, Coordinator, collective memory, MCP (19 tools),
│ Universal Skills (SkillLoader + SkillExecutor, TOML persistence),
│ competence detection, skill authoring
├── hive_shield PII detection, secrets scanning, vulnerability assessment, access control
├── hive_learn Outcome tracking, routing learner, preference model, prompt evolution
├── hive_assistant Email, calendar, reminders, approval workflows, daily briefings
├── hive_fs File operations, git integration, file watchers, search
├── hive_terminal Command execution, Docker sandbox, browser automation, local AI detection
├── hive_docs Document generation — CSV, DOCX, XLSX, HTML, Markdown, PDF, PPTX
├── hive_blockchain EVM + Solana wallets, RPC config, token deployment with real JSON-RPC
├── hive_integrations Google, Microsoft, GitHub, GitLab, Bitbucket, messaging, databases,
│ cloud platforms, DevOps, knowledge bases, project management, OAuth2
├── hive_network P2P federation, WebSocket transport, UDP discovery, peer registry, sync
├── hive_remote Background daemon, WebSocket relay, QR pairing, web UI, REST API
├── hive_admin Cloud admin TUI dashboard (6 tabs) [binary: hive-admin]
└── hive_cli Terminal AI client (chat, sync, config, models) [binary: hive]
Dependency Flow
hive_app
└── hive_ui
├── hive_ui_core
├── hive_ui_panels
├── hive_ai ──────── hive_core
├── hive_agents ──── hive_ai, hive_learn, hive_core
├── hive_shield
├── hive_learn ───── hive_core
├── hive_assistant ─ hive_core, hive_ai
├── hive_fs
├── hive_terminal
├── hive_docs
├── hive_blockchain
├── hive_integrations
├── hive_network
└── hive_remote ──── hive_core, hive_ai, hive_agents, hive_network
hive_admin (standalone binary)
hive_cli (standalone binary) ── hive_core
UI — 23 Panels
All panels are wired to live backend data. No mock data in the production path. 8 built-in themes with community voting and custom theme support.
| Panel | Description |
|---|---|
| Chat | Main AI conversation with streaming responses |
| History | Conversation history browser with full-text search |
| Files | Project file browser with create/delete/navigate |
| Specs | Specification management |
| Agents | Multi-agent swarm orchestration with task tree drill-down |
| Workflows | Visual workflow builder (drag-and-drop nodes) |
| Channels | Agent messaging channels (Telegram/Slack-style) |
| Kanban | Persistent task board with drag-and-drop |
| Monitor | Real-time system monitoring (CPU, RAM, disk, provider status) |
| Logs | Application logs viewer with level filtering |
| Costs | AI cost tracking with historical graphs and CSV export |
| Git Ops | Full git workflow: staging, commits, push, PRs, branches, gitflow, LFS |
| Skills | Universal skill marketplace: browse, install, remove, toggle, create — cross-model TOML skills |
| Routing | Model routing configuration |
| Models | Unified model discovery and selection across all providers |
| Learning | Self-improvement dashboard with metrics, preferences, insights |
| Shield | Interactive privacy controls and security scanning |
| Assistant | Personal assistant: email, calendar, reminders, approvals |
| Review | Code review management |
| Network | Peer collaboration and P2P connectivity |
| Token Launch | Token deployment wizard with chain selection |
| Settings | Application configuration with persist-on-save, cloud account fields |
| Help | Documentation and guides |
Installation
Option 1: Download Pre-Built Binary (Recommended)
Grab the latest release for your platform from GitHub Releases.
| Platform | Download | Requirements |
|---|---|---|
| Windows (x64) | hive-windows-x64.zip | Windows 10/11, GPU with DirectX 12 |
| macOS (Apple Silicon) | hive-macos-arm64.tar.gz | macOS 12+, Metal-capable GPU |
| Linux (x64) | hive-linux-x64.tar.gz | Vulkan-capable GPU + drivers |
Windows: Extract the zip, run hive.exe. No installer needed.
macOS: Extract, then chmod +x hive && ./hive (or move to /usr/local/bin/).
Linux: Extract, then chmod +x hive && ./hive. Vulkan drivers required.
# Ubuntu/Debian
sudo apt install mesa-vulkan-drivers vulkan-tools
# Fedora
sudo dnf install mesa-vulkan-drivers vulkan-tools
# Arch
sudo pacman -S vulkan-icd-loader vulkan-tools
Option 2: Build from Source
Requires Rust toolchain, protoc (Protocol Buffers compiler), and platform-specific build dependencies (see README for full details).
# Install protoc (required for gRPC)
# macOS: brew install protobuf
# Ubuntu: sudo apt install protobuf-compiler
# Windows: choco install protoc
git clone https://github.com/PatSul/Hive.git
cd Hive/hive
cargo build --release
cargo run --release
Run Tests
cd hive
cargo test --workspace
Configuration
On first launch, Hive creates ~/.hive/config.json. Add your API keys to enable cloud providers:
{
"anthropic_api_key": "sk-ant-...",
"openai_api_key": "sk-...",
"google_api_key": "AIza...",
"ollama_url": "http://localhost:11434",
"lmstudio_url": "http://localhost:1234"
}
All keys are stored locally and never transmitted except to their respective providers. HiveShield scans every outbound request before it leaves your machine. Configure provider preferences, model routing rules, budget limits, and security policies through the Settings panel in the UI.