Hive Documentation
Complete reference for Hive — the native Rust desktop AI platform.
Hive is a native Rust desktop AI platform built on GPUI — no Electron, no web wrappers. It unifies a development environment, a personal assistant framework, and a security-first architecture into a single application. Instead of one chatbot, Hive runs a multi-agent swarm that can plan, build, test, and orchestrate workflows while learning your preferences over time, and it ensures that no secret or PII ever leaves your machine without approval.
What makes Hive different: it learns from every interaction (locally, privately), it detects its own knowledge gaps and autonomously researches and acquires new skills, and it federates across instances for distributed swarm execution.
The Three Pillars
Development Excellence
- Multi-agent swarm (Queen + teams)
- 11 AI providers with auto-routing
- Git worktree isolation per team
- Full Git Ops (commits, PRs, branches, gitflow, LFS)
- Context engine (TF-IDF scoring + RAG)
- Cost tracking & budget enforcement
- Code review & testing automation
- Skills Marketplace (34+ skills from 5 sources)
- Autonomous skill acquisition
- Automation workflows (cron, event, webhook triggers)
- Docker sandbox with real CLI integration
- MCP client + server
- P2P federation across instances
Assistant Excellence
- Email triage & AI-powered drafting
- Calendar integration & daily briefings
- Reminders (time, recurring cron, event-triggered)
- Approval workflows with audit trails
- Document generation (7 formats)
- Smart home control
- Voice assistant (wake word + intent)
Safety Excellence
- PII detection (11+ types)
- Secrets scanning with risk levels
- Vulnerability assessment
- SecurityGateway command filtering
- Encrypted storage (AES-256-GCM)
- Provider trust-based access control
- Local-first — no telemetry
Multi-Agent System
Hive does not use a single AI agent. It uses a hierarchical swarm modeled on a beehive:
```
                   +-------------+
                   |    QUEEN    |   Meta-coordinator
                   |  (Planning) |   Goal decomposition
                   +-----+-------+   Budget enforcement
                         |           Cross-team synthesis
         +---------------+---------------+
         |               |               |
  +------v------+  +-----v-------+  +----v--------+
  |   TEAM 1    |  |   TEAM 2    |  |   TEAM 3    |
  |  HiveMind   |  | Coordinator |  | SingleShot  |
  +------+------+  +------+------+  +-------------+
         |                |
   +-----+-----+       +--+--+
   |     |     |       |     |
  Arch  Code  Rev     Inv  Impl
```
Queen
The Queen decomposes high-level goals into team objectives with dependency ordering, dispatches teams with the appropriate orchestration mode, enforces budget and time limits, shares cross-team insights, synthesizes results, and records learnings to collective memory.
HiveMind Teams
HiveMind teams use specialized agents — Architect, Coder, Reviewer, Tester, Security — that reach consensus through structured debate.
Coordinator Teams
Coordinator teams decompose work into dependency-ordered tasks (investigate, implement, verify) with persona-specific prompts.
Every team gets its own git worktree (swarm/{run_id}/{team_id}) for conflict-free parallel execution, merging back on completion.
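For illustration, a minimal sketch of per-team worktree setup using the git CLI. The branch naming, error handling, and function shape are assumptions for this example, not Hive's actual orchestration code.
```rust
use std::process::Command;

/// Create an isolated worktree for a team at swarm/{run_id}/{team_id}.
/// Illustrative only: branch naming and error handling are assumed here.
fn create_team_worktree(repo_root: &str, run_id: &str, team_id: &str) -> std::io::Result<()> {
    let worktree_path = format!("{repo_root}/swarm/{run_id}/{team_id}");
    let branch = format!("swarm/{run_id}/{team_id}");
    let status = Command::new("git")
        .current_dir(repo_root)
        .args(["worktree", "add", "-b", &branch, &worktree_path])
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "git worktree add failed",
        ));
    }
    Ok(())
}
```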
AI Providers
11 providers with automatic complexity-based routing and fallback:
Cloud
- Anthropic (Claude)
- OpenAI (GPT)
- Google (Gemini)
- OpenRouter (100+ models)
- Groq (fast inference)
- HuggingFace
Local
- Ollama
- LM Studio
- Generic OpenAI-compatible
- LiteLLM proxy
Features: complexity classification, 14-entry fallback chain, per-model cost tracking, streaming support, budget enforcement.
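As a rough sketch of what complexity-based routing over a fallback chain looks like — the tier names, the length heuristic, and the chain format are illustrative assumptions, not Hive's actual classifier or routing tables:
```rust
/// Illustrative sketch of complexity-based routing over an ordered fallback chain.
#[derive(Clone, Copy)]
enum Complexity { Simple, Moderate, Complex }

fn classify(prompt: &str) -> Complexity {
    // Hive's classifier is richer; prompt length is a stand-in heuristic here.
    match prompt.len() {
        0..=200 => Complexity::Simple,
        201..=2000 => Complexity::Moderate,
        _ => Complexity::Complex,
    }
}

/// Walk the ordered fallback chain and take the first model whose tier is at
/// least as capable as the request requires.
fn pick_model(prompt: &str, fallback_chain: &[(&str, Complexity)]) -> Option<String> {
    let needed = classify(prompt);
    fallback_chain
        .iter()
        .find(|(_, tier)| *tier as u8 >= needed as u8)
        .map(|(name, _)| name.to_string())
}
```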
Streaming
All AI responses stream token-by-token through the UI. Streaming is implemented end-to-end: SSE parsing at the provider layer, async channel transport, and incremental UI rendering. Shell output streams in real time through async mpsc channels. WebSocket-based P2P transport supports bidirectional streaming between federated instances.
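The shape of that pipeline can be sketched with a tokio mpsc channel; the types and the token source below are illustrative, not Hive's real provider or UI interfaces.
```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Provider side pushes tokens; UI side drains them as they arrive.
    let (tx, mut rx) = mpsc::channel::<String>(64);

    // In Hive this task would parse SSE chunks from the provider's HTTP response.
    tokio::spawn(async move {
        for token in ["Hel", "lo, ", "world", "!"] {
            if tx.send(token.to_string()).await.is_err() {
                break; // receiver dropped, stop streaming
            }
        }
    });

    // UI side: append each token to the rendered message incrementally.
    let mut message = String::new();
    while let Some(token) = rx.recv().await {
        message.push_str(&token);
    }
    assert_eq!(message, "Hello, world!");
}
```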
Autonomous Skill Acquisition
Hive doesn't just execute what it already knows — it recognizes what it doesn't know and teaches itself. This is the closed-loop system that lets Hive grow its own capabilities in real time:
```
User request
     |
     v
Competence Detection ─── "I know this" ──────────> Normal execution
     |
 "I don't know this"
     |
     v
Search Skills / Sources ─── Found sufficient skill? ───> Install & use
     |
 Not found (or insufficient)
     |
     v
Knowledge Acquisition ───> Fetch docs, parse, synthesize
     |
     v
Skill Authoring Pipeline ───> Generate, security-scan, test, install
     |
     v
New skill available for future requests
```
Competence Detection
The CompetenceDetector scores confidence on every incoming request using a weighted formula across four signals: skill match (30%), pattern match (20%), memory match (15%), and AI assessment (35%). When confidence drops below the learning threshold (default 0.4), the system identifies competence gaps and triggers the acquisition pipeline automatically.
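The weights and the 0.4 threshold come from the description above; the struct and function names below are illustrative.
```rust
/// Sketch of the weighted confidence score used for competence detection.
struct CompetenceSignals {
    skill_match: f32,   // each signal normalized to 0.0..=1.0
    pattern_match: f32,
    memory_match: f32,
    ai_assessment: f32,
}

fn confidence(s: &CompetenceSignals) -> f32 {
    0.30 * s.skill_match + 0.20 * s.pattern_match + 0.15 * s.memory_match + 0.35 * s.ai_assessment
}

/// Below the learning threshold (default 0.4), the acquisition pipeline is triggered.
fn needs_learning(s: &CompetenceSignals, threshold: f32) -> bool {
    confidence(s) < threshold
}
```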
Knowledge Acquisition
The KnowledgeAcquisitionAgent autonomously identifies documentation URLs, fetches pages via HTTPS with domain allowlisting and private-IP blocking, parses HTML to clean text, caches locally with SHA-256 hashing and 7-day TTL, synthesizes knowledge via AI, and injects results into the ContextEngine. Security: HTTPS-only, 23+ allowlisted documentation domains, private IP rejection, content scanned for injection before storage.
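A sketch of the fetch-side checks described above (HTTPS-only, allowlisted domains, private-IP rejection), assuming the `url` crate; the allowlist entries and helper name are illustrative.
```rust
use std::net::{IpAddr, ToSocketAddrs};
use url::Url;

/// Reject anything that is not HTTPS, not on the documentation allowlist,
/// or that resolves to a private / loopback / link-local address.
fn is_safe_to_fetch(url: &Url, allowlist: &[&str]) -> bool {
    if url.scheme() != "https" {
        return false;
    }
    let Some(host) = url.host_str() else { return false };
    if !allowlist
        .iter()
        .any(|d| host == *d || host.ends_with(&format!(".{d}")))
    {
        return false;
    }
    let Ok(addrs) = (host, 443).to_socket_addrs() else { return false };
    addrs.into_iter().all(|addr| match addr.ip() {
        IpAddr::V4(v4) => !(v4.is_private() || v4.is_loopback() || v4.is_link_local()),
        IpAddr::V6(v6) => !v6.is_loopback(),
    })
}
```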
Skill Authoring Pipeline
When a competence gap is detected, the pipeline runs in order:
1. Search existing skills first — each candidate is AI-scored for sufficiency (0-10); skills scoring >= 7 are installed directly.
2. Research the domain via the KnowledgeAcquisitionAgent.
3. Generate a skill definition with AI.
4. Security scan (the same 6-category injection scan applied to community skills).
5. Test validation.
6. Install with a /hive- trigger prefix, disabled by default.
Personal Assistant
The assistant uses the same AI infrastructure as the development platform — same model routing, same security scanning, same learning loop.
| Capability | Details |
|---|---|
| Email | Gmail and Outlook inbox polling via real REST APIs. Email digest generation, AI-powered composition and reply drafting with shield-scanned outbound content. |
| Calendar | Google Calendar and Outlook event fetching, daily briefing generation, conflict detection and scheduling logic. |
| Reminders | Time-based, recurring (cron), and event-triggered. Snooze/dismiss. Project-scoped. Native OS notifications. SQLite persistence. |
| Approvals | Multi-level workflows (Low / Medium / High / Critical). Submit, approve, reject with severity tracking. |
| Documents | Generate CSV, DOCX, XLSX, HTML, Markdown, PDF, and PPTX from templates or AI. |
| Smart Home | Philips Hue control — lighting scenes, routines, individual light states. |
| Plugins | AssistantPlugin trait for community extensibility. |
Security & Privacy
Security is the foundation, not a feature bolted on. Every outgoing message is scanned. Every command is validated.
HiveShield — 4 Layers of Protection
| Layer | What It Does |
|---|---|
| PII Detection | 11+ types (email, phone, SSN, credit card, IP, name, address, DOB, passport, driver's license, bank account). Cloaking modes: Placeholder, Hash, Redact. |
| Secrets Scanning | API keys, tokens, passwords, private keys. Risk levels: Critical, High, Medium, Low. |
| Vulnerability Assessment | Prompt injection detection, jailbreak attempts, unsafe code patterns, threat scoring. |
| Access Control | Policy-based data classification. Provider trust levels: Local, Trusted, Standard, Untrusted. |
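The cloaking modes in the PII row above can be pictured as a simple transform; the output formats and the std-library hash stand-in below are assumptions, not HiveShield's exact behavior.
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

enum CloakingMode { Placeholder, Hash, Redact }

/// Replace a detected PII value before the message leaves the machine.
fn cloak(pii: &str, kind: &str, mode: &CloakingMode) -> String {
    match mode {
        CloakingMode::Placeholder => format!("[{kind}]"),
        CloakingMode::Hash => {
            // Non-cryptographic stand-in hash, for illustration only.
            let mut h = DefaultHasher::new();
            pii.hash(&mut h);
            format!("[{kind}:{:016x}]", h.finish())
        }
        CloakingMode::Redact => "[REDACTED]".to_string(),
    }
}
```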
SecurityGateway
Hive routes command execution paths through SecurityGateway checks and blocks destructive filesystem ops, credential theft, privilege escalation, and common exfiltration patterns.
Local-First
All data in ~/.hive/ — config, conversations, learning data, collective memory, kanban boards. Encrypted key storage (AES-256-GCM + Argon2id key derivation). No telemetry. No analytics. No cloud dependency. Cloud providers used only for AI inference when you choose cloud models — and even then, HiveShield scans every request.
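A minimal sketch of that vault pattern (Argon2id key derivation feeding AES-256-GCM), assuming the `argon2` and `aes-gcm` crates; salt and nonce handling is simplified, and this is not Hive's actual vault code.
```rust
use aes_gcm::aead::{Aead, KeyInit};
use aes_gcm::{Aes256Gcm, Key, Nonce};
use argon2::Argon2;

/// Derive a 256-bit key from a passphrase, then seal the plaintext.
/// In real code, use a fresh random nonce per encryption and store the
/// salt and nonce alongside the ciphertext.
fn seal(passphrase: &str, salt: &[u8], nonce: &[u8; 12], plaintext: &[u8]) -> Result<Vec<u8>, String> {
    // Argon2id is the default algorithm for the `argon2` crate.
    let mut key = [0u8; 32];
    Argon2::default()
        .hash_password_into(passphrase.as_bytes(), salt, &mut key)
        .map_err(|e| e.to_string())?;

    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(&key));
    cipher
        .encrypt(Nonce::from_slice(nonce), plaintext)
        .map_err(|e| e.to_string())
}
```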
Self-Improvement Engine
Hive gets smarter every time you use it. Entirely local. No data leaves your machine.
| System | Function |
|---|---|
| Outcome Tracker | Quality scores per model and task type. Edit distance and follow-up penalties. |
| Routing Learner | EMA analysis adjusts model tier selection. Wired into ModelRouter via TierAdjuster. |
| Preference Model | Bayesian confidence tracking. Learns tone, detail level, formatting from observation. |
| Prompt Evolver | Versioned prompts per persona. Quality-gated refinements with rollback support. |
| Pattern Library | Extracts code patterns from accepted responses (6 languages: Rust, Python, JS/TS, Go, Java/Kotlin, C/C++). |
| Self-Evaluator | Comprehensive report every 200 interactions. Trend analysis, misroute rate, cost-per-quality-point. |
All learning data stored locally in SQLite (~/.hive/learning.db). Every preference is transparent, reviewable, and deletable.
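The Routing Learner's EMA adjustment above boils down to a small update rule; the smoothing factor and escalation policy here are illustrative assumptions.
```rust
/// Exponential moving average of observed quality for one model tier.
struct TierStats {
    ema_quality: f64,
}

impl TierStats {
    /// Fold a new outcome's quality score (0.0..=1.0) into the running average.
    fn observe(&mut self, quality: f64, alpha: f64) {
        self.ema_quality = alpha * quality + (1.0 - alpha) * self.ema_quality;
    }

    /// Example policy: prefer a stronger tier once quality drifts below a floor.
    fn should_escalate(&self, floor: f64) -> bool {
        self.ema_quality < floor
    }
}
```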
Automation & Skills
| Feature | Details |
|---|---|
| Automation Workflows | Multi-step workflows with triggers (manual, cron schedule, event, webhook) and 6 action types. YAML-based definitions. Visual drag-and-drop workflow builder in the UI. |
| Skills Marketplace | Browse, install, remove, and toggle skills from 5 sources (ClawdHub, Anthropic, OpenAI, Google, Community). Create custom skills. Add remote skill sources. 34+ built-in skills. Security scanning on install. |
| Autonomous Skill Creation | When Hive encounters an unfamiliar domain, it searches existing skill sources first, then researches documentation and authors a new skill if nothing sufficient exists. |
| Personas | Named agent personalities with custom system prompts, prompt overrides per task type, and configurable model preferences. |
| Auto-Commit | Watches for staged changes and generates AI-powered commit messages. |
| Daily Standups | Automated agent activity summaries across all teams and workflows. |
| Voice Assistant | Wake-word detection, natural-language voice commands, intent recognition, and state-aware responses. |
Terminal & Execution
| Feature | Details |
|---|---|
| Shell Execution | Run commands with configurable timeout, async streaming output capture, working directory management, and exit code tracking. |
| Docker Sandbox | Full container lifecycle: create, start, stop, exec, pause, unpause, remove. Real Docker CLI integration with simulation fallback. |
| Browser Automation | Chrome DevTools Protocol over WebSocket: navigation, screenshots, JavaScript evaluation, DOM manipulation. |
| CLI Service | Built-in commands (/doctor, /clear, etc.) and system health checks. |
| Local AI Detection | Auto-discovers Ollama, LM Studio, and llama.cpp running on localhost. |
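In the spirit of the Shell Execution row above, a sketch of timeout-bounded execution with line-by-line output streaming using tokio; the function shape and error handling are simplified assumptions.
```rust
use std::process::Stdio;
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::Command;
use tokio::time::{timeout, Duration};

/// Run a command, stream its stdout line by line, and kill it on timeout.
async fn run_streaming(cmd: &str, args: &[&str], secs: u64) -> std::io::Result<Option<i32>> {
    let mut child = Command::new(cmd).args(args).stdout(Stdio::piped()).spawn()?;

    // Forward output as it arrives (Hive relays such lines to the UI over mpsc).
    if let Some(stdout) = child.stdout.take() {
        let mut lines = BufReader::new(stdout).lines();
        tokio::spawn(async move {
            while let Ok(Some(line)) = lines.next_line().await {
                println!("{line}");
            }
        });
    }

    // Enforce the configured timeout; report the exit code if it finishes.
    match timeout(Duration::from_secs(secs), child.wait()).await {
        Ok(status) => Ok(status?.code()),
        Err(_) => {
            child.kill().await?;
            Ok(None)
        }
    }
}
```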
P2P Federation
Hive instances can discover and communicate with each other over the network, enabling distributed swarm execution and shared learning.
| Feature | Details |
|---|---|
| Peer Discovery | UDP broadcast for automatic LAN discovery, plus manual bootstrap peers. |
| WebSocket Transport | Bidirectional P2P connections with split-sink/stream architecture. |
| Typed Protocol | 12 built-in message kinds (Hello, Welcome, Heartbeat, TaskRequest, TaskResult, AgentRelay, ChannelSync, FleetLearn, StateSync, etc.) plus extensible custom types. |
| Channel Sync | Synchronize agent channel messages across federated instances. |
| Fleet Learning | Share learning outcomes across a distributed fleet of nodes. |
| Peer Registry | Persistent tracking of known peers with connection state management. |
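One way to picture the typed protocol is a tagged message envelope. Only kinds named in the table are shown, and the payload shapes are assumptions rather than Hive's actual wire format.
```rust
use serde::{Deserialize, Serialize};

/// Sketch: a tagged envelope that could be exchanged as text frames over the
/// WebSocket transport.
#[derive(Serialize, Deserialize)]
#[serde(tag = "kind", content = "payload")]
enum PeerMessage {
    Hello { node_id: String, version: String },
    Welcome { node_id: String },
    Heartbeat,
    TaskRequest { task_id: String, description: String },
    TaskResult { task_id: String, success: bool, summary: String },
    ChannelSync { channel: String, messages: Vec<String> },
    FleetLearn { outcome: String },
    StateSync { state: String },
    Custom { name: String, data: serde_json::Value },
}
```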
Integrations
All integrations make real API calls — no stubs or simulated backends.
| Platform | Services |
|---|---|
| Google | Gmail (REST API), Calendar, Contacts, Drive, Docs, Sheets, Tasks |
| Microsoft | Outlook Email (Graph v1.0), Outlook Calendar |
| Messaging | Slack (Web API), Discord, Teams, Telegram, Matrix, WebChat |
| Cloud | GitHub (REST API), Cloudflare, Vercel, Supabase |
| Smart Home | Philips Hue |
| Voice | ClawdTalk (voice-over-phone via Telnyx) |
| Protocol | MCP client + server, OAuth2 (PKCE), Webhooks, P2P federation |
Blockchain / Web3
| Chain | Features |
|---|---|
| EVM (Ethereum, Polygon, Arbitrum, BSC, Avalanche, Optimism, Base) | Wallet management, real JSON-RPC, per-chain RPC configuration, ERC-20 token deployment with cost estimation. |
| Solana | Wallet management, real JSON-RPC, SPL token deployment with rent cost estimation. |
| Security | Encrypted private key storage (AES-256-GCM), no keys ever sent to AI providers. |
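For a sense of what "real JSON-RPC" means in the EVM row, here is a sketch of a raw eth_chainId call with reqwest; the endpoint URL is whatever the per-chain RPC configuration supplies, and error handling is simplified.
```rust
use serde_json::{json, Value};

/// Issue a raw JSON-RPC request to a configured EVM endpoint.
async fn get_chain_id(rpc_url: &str) -> Result<Value, reqwest::Error> {
    let body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_chainId",
        "params": []
    });
    reqwest::Client::new()
        .post(rpc_url)
        .json(&body)
        .send()
        .await?
        .json::<Value>()
        .await
}
```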
Persistence & Data Storage
All state persists between sessions. Nothing is lost on restart.
| Data | Storage | Location |
|---|---|---|
| Conversations | SQLite + JSON files | ~/.hive/memory.db + conversations/ |
| Messages | SQLite | ~/.hive/memory.db |
| Conversation search | SQLite FTS5 | ~/.hive/memory.db (Porter stemming) |
| Cost records | SQLite | ~/.hive/memory.db |
| Application logs | SQLite | ~/.hive/memory.db |
| Collective memory | SQLite (WAL mode) | ~/.hive/memory.db |
| Learning data | SQLite | ~/.hive/learning.db |
| Kanban boards | JSON | ~/.hive/kanban.json |
| Config & API keys | JSON + encrypted vault | ~/.hive/config.json |
| Session state | JSON | ~/.hive/session.json |
| Knowledge cache | HTML/text files | ~/.hive/knowledge/ |
| Workflows | YAML definitions | ~/.hive/workflows/ |
| Installed skills | Skills Marketplace | ~/.hive/skills/ |
On startup, Hive automatically backfills JSON-only conversations into SQLite and builds FTS5 search indexes. Path traversal protection on all file operations. SQLite databases use WAL mode with NORMAL synchronous and foreign key enforcement.
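The pragma setup described above can be sketched with rusqlite; the path handling and function shape are illustrative, not hive_core's actual code.
```rust
use rusqlite::Connection;

/// Open a database with WAL journaling, NORMAL synchronous, and foreign keys on.
fn open_db(path: &str) -> rusqlite::Result<Connection> {
    let conn = Connection::open(path)?;
    conn.execute_batch(
        "PRAGMA journal_mode=WAL;
         PRAGMA synchronous=NORMAL;
         PRAGMA foreign_keys=ON;",
    )?;
    Ok(conn)
}
```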
Architecture — 16-Crate Workspace
```
hive/crates/
├── hive_app            Binary entry point — window, tray, build.rs
├── hive_ui             Workspace shell, chat service, learning bridge
├── hive_ui_core        Theme, actions, globals, sidebar, welcome screen
├── hive_ui_panels      All panel implementations (20+ panels)
├── hive_core           Config, SecurityGateway, persistence, Kanban, channels
├── hive_ai             11 AI providers, model router, complexity classifier, RAG
├── hive_agents         Queen, HiveMind, Coordinator, collective memory, MCP, skills
├── hive_shield         PII detection, secrets scanning, vulnerability assessment
├── hive_learn          Outcome tracking, routing learner, preference model
├── hive_assistant      Email, calendar, reminders, approval workflows
├── hive_fs             File operations, git integration, file watchers, search
├── hive_terminal       Command execution, Docker sandbox, browser automation
├── hive_docs           Document generation — CSV, DOCX, XLSX, HTML, MD, PDF, PPTX
├── hive_blockchain     EVM + Solana wallets, RPC config, token deployment
├── hive_integrations   Google, Microsoft, GitHub, messaging, OAuth2, smart home
└── hive_network        P2P federation, WebSocket transport, UDP discovery
```
Dependency Flow
```
hive_app
└── hive_ui
    ├── hive_ui_core
    ├── hive_ui_panels
    ├── hive_ai ──────── hive_core
    ├── hive_agents ──── hive_ai, hive_learn, hive_core
    ├── hive_shield
    ├── hive_learn ───── hive_core
    ├── hive_assistant ─ hive_core, hive_ai
    ├── hive_fs
    ├── hive_terminal
    ├── hive_docs
    ├── hive_blockchain
    ├── hive_integrations
    └── hive_network
```
UI — 20+ Panels
All panels are wired to live backend data. No mock data in the production path.
| Panel | Description |
|---|---|
| Chat | Main AI conversation with streaming responses |
| History | Conversation history browser |
| Files | Project file browser with create/delete/navigate |
| Specs | Specification management |
| Agents | Multi-agent swarm orchestration |
| Workflows | Visual workflow builder (drag-and-drop nodes) |
| Channels | Agent messaging channels (Telegram/Slack-style) |
| Kanban | Persistent task board with drag-and-drop |
| Monitor | Real-time system monitoring (CPU, RAM, disk, provider status) |
| Logs | Application logs viewer with level filtering |
| Costs | AI cost tracking and budget with CSV export |
| Git Ops | Full git workflow: staging, commits, push, PRs, branches, gitflow, LFS |
| Skills | Skill marketplace: browse, install, remove, toggle, create (5 sources) |
| Routing | Model routing configuration |
| Models | Model registry browser |
| Learning | Self-improvement dashboard with metrics, preferences, insights |
| Shield | Security scanning status |
| Assistant | Personal assistant: email, calendar, reminders |
| Token Launch | Token deployment wizard with chain selection |
| Settings | Application configuration with persist-on-save |
| Help | Documentation and guides |
Installation
Option 1: Download Pre-Built Binary (Recommended)
Grab the latest release for your platform from GitHub Releases.
| Platform | Download | Requirements |
|---|---|---|
| Windows (x64) | hive-windows-x64.zip | Windows 10/11, GPU with DirectX 12 |
| macOS (Apple Silicon) | hive-macos-arm64.tar.gz | macOS 12+, Metal-capable GPU |
| Linux (x64) | hive-linux-x64.tar.gz | Vulkan-capable GPU + drivers |
Windows: Extract the zip, run hive.exe. No installer needed.
macOS: Extract, then chmod +x hive && ./hive (or move to /usr/local/bin/).
Linux: Extract, then chmod +x hive && ./hive. Vulkan drivers required.
```sh
# Ubuntu/Debian
sudo apt install mesa-vulkan-drivers vulkan-tools
# Fedora
sudo dnf install mesa-vulkan-drivers vulkan-tools
# Arch
sudo pacman -S vulkan-icd-loader vulkan-tools
```
Option 2: Build from Source
Requires Rust toolchain and platform-specific build dependencies (see README for full details).
```sh
git clone https://github.com/PatSul/Hive.git
cd Hive/hive
cargo build --release
cargo run --release
```
Run Tests
```sh
cd hive
cargo test --workspace
```
Configuration
On first launch, Hive creates ~/.hive/config.json. Add your API keys to enable cloud providers:
```json
{
  "anthropic_api_key": "sk-ant-...",
  "openai_api_key": "sk-...",
  "google_api_key": "AIza...",
  "ollama_url": "http://localhost:11434",
  "lmstudio_url": "http://localhost:1234"
}
```
All keys are stored locally and never transmitted except to their respective providers. HiveShield scans every outbound request before it leaves your machine. Configure provider preferences, model routing rules, budget limits, and security policies through the Settings panel in the UI.