
Hive Documentation

Complete reference for Hive — the native Rust desktop AI platform.

Hive is a native Rust desktop AI platform built on GPUI — no Electron, no web wrappers. It unifies a development environment, a personal assistant framework, and a security-first architecture into a single application. Instead of one chatbot, Hive runs a multi-agent swarm that can plan, build, test, and orchestrate workflows while learning your preferences over time — all while ensuring no secret or PII ever leaves your machine without approval.

What makes Hive different: it learns from every interaction (locally, privately), it remembers context across conversations via LanceDB vector memory, it detects its own knowledge gaps and autonomously researches and acquires new skills, and it federates across instances for distributed swarm execution.


The Three Pillars

Development Excellence

  • Multi-agent swarm (Queen + teams)
  • 15 AI providers with dynamic model fetching
  • LanceDB vector memory (chunks + durable memories)
  • Embedding providers (OpenAI + Ollama)
  • Background code indexing with change detection
  • Git worktree isolation per team
  • Full Git Ops (commits, PRs, branches, gitflow, LFS)
  • Context engine (TF-IDF scoring + RAG + vector search)
  • Cost tracking & budget enforcement
  • Code review & testing automation
  • Skills Marketplace (15 built-in + user-created skills)
  • Universal Skills — cross-model TOML skill sharing
  • Autonomous skill acquisition
  • Automation workflows (cron, event, webhook triggers)
  • Docker sandbox with real CLI integration
  • MCP client + server with UI automation
  • Remote control (WebSocket relay, QR pairing, web UI)
  • Terminal CLI client (chat, sync, config, models)
  • P2P federation across instances

Assistant Excellence

  • Email triage & AI-powered drafting
  • Calendar integration & daily briefings
  • Reminders (time, recurring cron, event-triggered)
  • Approval workflows with audit trails
  • Document generation (7 formats)
  • Smart home control
  • Voice assistant (wake word + intent)

Safety Excellence

  • PII detection (11+ types)
  • Secrets scanning with risk levels
  • Vulnerability assessment
  • SecurityGateway command filtering
  • Memory flush — extract insights before context compaction
  • Encrypted storage (AES-256-GCM)
  • Provider trust-based access control
  • Local-first — no telemetry

Multi-Agent System

Hive does not use a single AI agent. It uses a hierarchical swarm modeled on a beehive:

                    +-------------+
                    |    QUEEN    |   Meta-coordinator
                    |  (Planning) |   Goal decomposition
                    +------+------+   Budget enforcement
                           |          Cross-team synthesis
              +------------+------------+
              |            |            |
        +-----v----+ +----v-----+ +----v-----+
        |  TEAM 1  | |  TEAM 2  | |  TEAM 3  |
        | HiveMind | |Coordinator| |SingleShot|
        +----+-----+ +----+-----+ +----------+
             |             |
       +-----+-----+  +---+---+
       |     |     |  |       |
      Arch  Code  Rev Inv    Impl

Queen

Decomposes high-level goals into team objectives with dependency ordering, dispatches teams with the appropriate orchestration mode, enforces budget and time limits, shares cross-team insights, synthesizes results, and records learnings to collective memory.

HiveMind Teams

Use specialized agents — Architect, Coder, Reviewer, Tester, Security — that reach consensus through structured debate.

Coordinator Teams

Decompose work into dependency-ordered tasks (investigate, implement, verify) with persona-specific prompts.

Every team gets its own git worktree (swarm/{run_id}/{team_id}) for conflict-free parallel execution, merging back on completion.


AI Providers

15 providers with dynamic model fetching, automatic complexity-based routing, and fallback:

Cloud

  • Anthropic (Claude Opus 4.6/4.5/4.1/4, Sonnet 4.5/4)
  • OpenAI (GPT-4 Turbo, GPT-4o, GPT-4o mini)
  • Google (Gemini 3.1 Pro/Flash + Search Grounding & Code Execution)
  • OpenRouter (100+ models)
  • Groq (fast inference)
  • xAI (Grok)
  • Venice AI
  • Mistral
  • Doubao
  • HuggingFace
  • HiveGateway (cloud AI proxy)

Local

  • Ollama
  • LM Studio
  • Generic OpenAI-compatible
  • LiteLLM proxy

Features: dynamic model registry fetching from provider APIs, complexity classification, 14-entry fallback chain, per-model cost tracking, streaming support, budget enforcement.
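
As an illustration of how such a fallback chain behaves, here is a minimal Rust sketch: try each provider in order until one succeeds. The function name and error handling are hypothetical, not Hive's actual API.

```rust
// Illustrative only: walk an ordered fallback chain, returning the first
// successful completion or the last provider's error.
fn complete_with_fallback(
    chain: &[&str],
    call: impl Fn(&str) -> Result<String, String>,
) -> Result<String, String> {
    let mut last_err = String::from("empty fallback chain");
    for &provider in chain {
        match call(provider) {
            Ok(reply) => return Ok(reply),
            // Remember the failure and fall through to the next provider.
            Err(e) => last_err = format!("{provider}: {e}"),
        }
    }
    Err(last_err)
}
```

In a 14-entry chain the same loop simply runs over a longer slice; per-model cost tracking and budget checks would wrap the `call` closure.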

Dynamic Model Fetching

All providers now fetch their available models dynamically from provider APIs — no hardcoded model lists. Google Gemini models are automatically injected with Search Grounding and Code Execution tools. Anthropic requests include beta agentic headers for extended capabilities.

Streaming

All AI responses stream token-by-token through the UI. Streaming is implemented end-to-end: SSE parsing at the provider layer, async channel transport, and incremental UI rendering. Shell output streams in real time through async mpsc channels. WebSocket-based P2P transport supports bidirectional streaming between federated instances.
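
A minimal sketch of the channel-based streaming pattern described above, using `std::sync::mpsc` with a producer thread. The real implementation uses async channels and SSE parsing; this sketch only illustrates the transport shape.

```rust
use std::sync::mpsc;
use std::thread;

// Producer thread sends token chunks over a channel; the consumer can
// render each chunk incrementally as it arrives.
fn stream_tokens(tokens: Vec<String>) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for t in tokens {
            // In the real system each chunk would come from SSE parsing.
            if tx.send(t).is_err() {
                break; // consumer hung up
            }
        }
    });
    rx
}
```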


Vector Memory & RAG

Hive maintains durable, searchable memory across conversations using LanceDB — a columnar vector database that runs embedded (no external server).

User query
    |
    v
EmbeddingProvider ─── OpenAI text-embedding-3-small (1536d)
    |                  or Ollama nomic-embed-text (768d)
    |                  with automatic fallback
    v
┌─────────────────────────────────────────────┐
│              LanceDB MemoryStore             │
│                                              │
│  chunks table          memories table        │
│  ┌──────────────┐      ┌──────────────────┐  │
│  │ source_file  │      │ content          │  │
│  │ content      │      │ category         │  │
│  │ embedding    │      │ importance       │  │
│  │ start_line   │      │ embedding        │  │
│  │ end_line     │      │ conversation_id  │  │
│  └──────────────┘      │ decay_exempt     │  │
│                        └──────────────────┘  │
└─────────────────────────────────────────────┘
    |                          |
    v                          v
Code search results       Recalled memories
    |                          |
    v                          v
# Retrieved Context       # Recalled Memories
(injected as system msg)  (dedicated system msg)

How It Works

  • EmbeddingProvider: Async trait with OpenAI, Ollama, and Mock implementations. Auto-detects the available provider at startup.
  • MemoryStore: LanceDB wrapper managing two tables — chunks (indexed code) and memories (durable insights). Vector similarity search via nearest-neighbor queries.
  • HiveMemory: Unified API combining MemoryStore + embeddings. 50-line chunks with 10-line overlap for code indexing. Combined query returns both code chunks and recalled memories.
  • BackgroundIndexer: Walks project directories recursively, skips binary/hidden/vendor dirs, tracks file hashes for incremental re-indexing. Triggered on workspace open.
  • MemoryExtractor: Pre-compaction flush — builds an LLM prompt to extract key insights from a conversation before the context window is compacted. Parses the JSON response, filters by importance threshold.
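
The 50-line chunks with 10-line overlap used for code indexing amount to a sliding window with a stride of 40 lines. A sketch (function name and signature are illustrative):

```rust
// Produce half-open [start, end) line spans of size `chunk`, each
// overlapping the previous span by `overlap` lines.
fn chunk_lines(lines: &[&str], chunk: usize, overlap: usize) -> Vec<(usize, usize)> {
    assert!(chunk > overlap);
    let stride = chunk - overlap;
    let mut spans = Vec::new();
    let mut start = 0;
    while start < lines.len() {
        let end = (start + chunk).min(lines.len());
        spans.push((start, end));
        if end == lines.len() {
            break; // final, possibly short, chunk reached
        }
        start += stride;
    }
    spans
}
```

The overlap ensures a function split across a chunk boundary still appears whole in at least one chunk's embedding.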

Memory Categories

Memories are categorized for relevance scoring:

  • UserPreference — Coding style, tooling choices, formatting preferences
  • CodePattern — Recurring patterns and architectural decisions
  • TaskProgress — What was done, what's pending
  • Decision — Technical decisions and their rationale
  • General — Catch-all for other durable insights

Injection Flow

On every message send: (1) Code chunks from the indexed workspace are injected into # Retrieved Context (curated by ContextEngine). (2) Recalled memories from previous conversations are injected as a separate # Recalled Memories system message with importance scores and categories. (3) Both are available to the AI alongside the conversation history.
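
A hedged sketch of how those two system messages might be assembled. Only the `# Retrieved Context` and `# Recalled Memories` headings come from this document; the function name, tuple layouts, and formatting are assumptions.

```rust
// Build up to two system messages: one for retrieved code chunks, one
// for recalled memories with importance and category annotations.
fn build_context_messages(
    chunks: &[(String, String)],        // (source_file, content) -- assumed shape
    memories: &[(f32, String, String)], // (importance, category, content)
) -> Vec<String> {
    let mut msgs = Vec::new();
    if !chunks.is_empty() {
        let mut s = String::from("# Retrieved Context\n");
        for (file, content) in chunks {
            s.push_str(&format!("\n## {file}\n{content}\n"));
        }
        msgs.push(s);
    }
    if !memories.is_empty() {
        let mut s = String::from("# Recalled Memories\n");
        for (importance, category, content) in memories {
            s.push_str(&format!("\n- [{category}, importance {importance:.2}] {content}"));
        }
        msgs.push(s);
    }
    msgs
}
```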


Autonomous Skill Acquisition

Hive doesn't just execute what it already knows — it recognizes what it doesn't know and teaches itself. This is the closed-loop system that lets Hive grow its own capabilities in real time:

User request
    |
    v
Competence Detection ─── "I know this" ───> Normal execution
    |
    "I don't know this"
    |
    v
Search Skills / Sources ─── Found sufficient skill? ───> Install & use
    |
    Not found (or insufficient)
    |
    v
Knowledge Acquisition ───> Fetch docs, parse, synthesize
    |
    v
Skill Authoring Pipeline ───> Generate, security-scan, test, install
    |
    v
New skill available for future requests

Competence Detection

The CompetenceDetector scores confidence on every incoming request using a weighted formula across four signals: skill match (30%), pattern match (20%), memory match (15%), and AI assessment (35%). When confidence drops below the learning threshold (default 0.4), the system identifies competence gaps and triggers the acquisition pipeline automatically.
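
The weighted formula as stated reduces to a few lines of Rust. Signal values are assumed to lie in [0, 1]; the function names are illustrative, not the CompetenceDetector's actual API.

```rust
// Documented weights: skill 30%, pattern 20%, memory 15%, AI assessment 35%.
fn competence_score(skill: f64, pattern: f64, memory: f64, ai: f64) -> f64 {
    0.30 * skill + 0.20 * pattern + 0.15 * memory + 0.35 * ai
}

// Below the learning threshold (default 0.4), acquisition is triggered.
fn needs_learning(score: f64, threshold: f64) -> bool {
    score < threshold
}
```

Note that the AI assessment carries the largest weight, so a confident self-assessment can keep a request on the normal execution path even when no matching skill exists.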

Knowledge Acquisition

The KnowledgeAcquisitionAgent autonomously identifies documentation URLs, fetches pages via HTTPS with domain allowlisting and private-IP blocking, parses HTML to clean text, caches locally with SHA-256 hashing and 7-day TTL, synthesizes knowledge via AI, and injects results into the ContextEngine. Security: HTTPS-only, 23+ allowlisted documentation domains, private IP rejection, content scanned for injection before storage.

Skill Authoring Pipeline

When a competence gap is detected, the pipeline runs in order:

  1. Search existing skills: each candidate is AI-scored for sufficiency (0-10), and skills scoring >= 7 are installed directly.
  2. Research the domain via the KnowledgeAcquisitionAgent.
  3. Generate a skill definition with AI.
  4. Security-scan the generated skill (the same 6-category injection scan applied to community skills).
  5. Validate with tests.
  6. Install with the /hive- trigger prefix, disabled by default.

File-Based User Skills

Skills are stored as TOML files with capability tags in ~/.hive/skills/. The SkillsRegistry provides CRUD operations (create, update, delete, toggle) with integrity hashing on every write. Skills are dispatched via /command alongside built-in skills.

[skill]
name = "my-review-skill"
description = "Reviews code for common bugs"
category = "CodeQuality"
enabled = true

[requirements]
capabilities = ["tool_use"]

[prompt]
template = "You are a code review assistant. Check for null pointer dereferences, off-by-one errors, and resource leaks."

[tools]
required = ["read_file"]

Personal Assistant

The assistant uses the same AI infrastructure as the development platform — same model routing, same security scanning, same learning loop.

  • Email: Gmail and Outlook inbox polling via real REST APIs. Email digest generation, AI-powered composition and reply drafting with shield-scanned outbound content.
  • Calendar: Google Calendar and Outlook event fetching, daily briefing generation, conflict detection and scheduling logic.
  • Reminders: Time-based, recurring (cron), and event-triggered. Snooze/dismiss. Project-scoped. Native OS notifications. SQLite persistence.
  • Approvals: Multi-level workflows (Low / Medium / High / Critical). Submit, approve, reject with severity tracking.
  • Documents: Generate CSV, DOCX, XLSX, HTML, Markdown, PDF, and PPTX from templates or AI.
  • Smart Home: Philips Hue control — lighting scenes, routines, individual light states.
  • Plugins: AssistantPlugin trait for community extensibility.
  • Scheduler: Background task scheduler with a 60-second tick driver. Automated reminders, recurring tasks, and cron-based scheduling on a native OS thread.

Security & Privacy

Security is the foundation, not a feature bolted on. Every outgoing message is scanned. Every command is validated.

HiveShield — 4 Layers of Protection

  • PII Detection: 11+ types (email, phone, SSN, credit card, IP, name, address, DOB, passport, driver's license, bank account). Cloaking modes: Placeholder, Hash, Redact.
  • Secrets Scanning: API keys, tokens, passwords, private keys. Risk levels: Critical, High, Medium, Low.
  • Vulnerability Assessment: Prompt injection detection, jailbreak attempts, unsafe code patterns, threat scoring.
  • Access Control: Policy-based data classification. Provider trust levels: Local, Trusted, Standard, Untrusted.
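
The three cloaking modes can be illustrated with a small sketch. The hash here uses std's DefaultHasher purely for demonstration; the document does not specify the real hashing scheme, and the enum and function shapes are assumptions.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

enum CloakMode {
    Placeholder, // replace the value with its type label
    Hash,        // replace with a stable fingerprint
    Redact,      // blot out every character
}

fn cloak(value: &str, kind: &str, mode: CloakMode) -> String {
    match mode {
        CloakMode::Placeholder => format!("[{}]", kind.to_uppercase()),
        CloakMode::Hash => {
            let mut h = DefaultHasher::new();
            value.hash(&mut h);
            format!("{}:{:016x}", kind, h.finish())
        }
        CloakMode::Redact => "█".repeat(value.chars().count()),
    }
}
```

Placeholder keeps the message readable for the model, Hash keeps distinct values distinguishable without revealing them, and Redact removes even the length-independent structure.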

SecurityGateway

The SecurityGateway validates every command execution path against 14 dangerous command patterns, 3 SQL injection patterns, and risky constructs (eval, command substitution, backticks). It enforces HTTPS for all URLs, blocks private IP ranges, maintains a domain allowlist (GitHub, npm, crates.io, etc.), and prevents access to sensitive paths (.ssh, .aws, .gnupg, /etc/shadow, /etc/passwd).
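
A simplified sketch of pattern-based command filtering in the spirit of the SecurityGateway. This pattern list is a small illustrative subset, not Hive's actual 14-pattern rule set.

```rust
// Return a rejection reason if the command matches a dangerous pattern
// or a risky construct; None means the command may proceed.
fn is_blocked(cmd: &str) -> Option<&'static str> {
    const DANGEROUS: &[&str] = &["rm -rf /", "mkfs", "/etc/shadow", ".ssh/"];
    const RISKY: &[&str] = &["eval ", "$(", "`"]; // command substitution, backticks
    for p in DANGEROUS {
        if cmd.contains(p) {
            return Some("dangerous pattern");
        }
    }
    for p in RISKY {
        if cmd.contains(p) {
            return Some("risky construct");
        }
    }
    None
}
```

A real gateway would also normalize the command (quoting, path resolution) before matching, so that trivially obfuscated variants do not slip through a substring check.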

Guardian

AI output safety scanning — real-time content analysis across 6 categories (PII leakage, prompt injection, harmful content, bias detection, hallucination risk, and compliance violations) with configurable thresholds and blocking modes.

Local-First

All data in ~/.hive/ — config, conversations, learning data, collective memory, kanban boards, vector memory. Encrypted key storage (AES-256-GCM + Argon2id key derivation). No telemetry. No analytics. No cloud dependency. Cloud providers used only for AI inference when you choose cloud models — and even then, HiveShield scans every request.


Self-Improvement Engine

Hive gets smarter every time you use it. Entirely local. No data leaves your machine.

  • Outcome Tracker: Quality scores per model and task type. Edit distance and follow-up penalties.
  • Routing Learner: EMA analysis adjusts model tier selection. Wired into ModelRouter via TierAdjuster.
  • Preference Model: Bayesian confidence tracking. Learns tone, detail level, and formatting from observation.
  • Prompt Evolver: Versioned prompts per persona. Quality-gated refinements with rollback support.
  • Pattern Library: Extracts code patterns from accepted responses (6 languages: Rust, Python, JS/TS, Go, Java/Kotlin, C/C++).
  • Self-Evaluator: Comprehensive report every 200 interactions. Trend analysis, misroute rate, cost-per-quality-point.
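
The EMA behind the Routing Learner's quality tracking reduces to a one-line update; the alpha value and update rule here are assumptions for illustration.

```rust
// Exponential moving average: new observations are weighted by `alpha`,
// history by (1 - alpha). Higher alpha adapts faster but is noisier.
fn ema_update(current: f64, observation: f64, alpha: f64) -> f64 {
    alpha * observation + (1.0 - alpha) * current
}
```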

All learning data stored locally in SQLite (~/.hive/learning.db). Every preference is transparent, reviewable, and deletable.


Automation & Skills

  • Automation Workflows: Multi-step workflows with triggers (manual, cron schedule, event, webhook) and 6 action types. YAML-based definitions. Visual drag-and-drop workflow builder in the UI.
  • Skills Marketplace: Browse, install, remove, and toggle skills from 5 sources (ClawdHub, Anthropic, OpenAI, Google, Community). Create custom skills. Add remote skill sources. 15 built-in skills. Security scanning on install.
  • User Skills: Universal Skills — cross-model TOML skill definitions in ~/.hive/skills/. Capability tags adapt execution per model. Full CRUD with SHA-256 integrity. Dispatched via /command in chat.
  • Autonomous Skill Creation: When Hive encounters an unfamiliar domain, it searches existing skill sources first, then researches documentation and authors a new skill if nothing sufficient exists.
  • Personas: Named agent personalities with custom system prompts, prompt overrides per task type, and configurable model preferences.
  • Auto-Commit: Watches for staged changes and generates AI-powered commit messages.
  • Daily Standups: Automated agent activity summaries across all teams and workflows.
  • Voice Assistant: Wake-word detection, natural-language voice commands, intent recognition, TTS auto-speak for AI responses, and state-aware responses.

Terminal & Execution

  • Shell Execution: Run commands with configurable timeout, async streaming output capture, working directory management, and exit code tracking.
  • Docker Sandbox: Full container lifecycle — create, start, stop, exec, pause, unpause, remove. Real Docker CLI integration with simulation fallback.
  • Browser Automation: Chrome DevTools Protocol over WebSocket — navigation, screenshots, JavaScript evaluation, DOM manipulation.
  • MCP Server: Full Model Context Protocol integration with enhanced tool use. Enables structured tool calling across all major providers.
  • UI Automation: Cross-platform screen interaction via enigo — mouse, keyboard, and screen control for automated workflows.
  • CLI Service: Built-in commands (/doctor, /clear, etc.) and system health checks.
  • Local AI Detection: Auto-discovers Ollama, LM Studio, and llama.cpp running on localhost.

Remote Control

Hive can be controlled from any device on your network — phone, tablet, or another computer — via a built-in web server with real-time streaming.

  • HiveDaemon: Background service holding canonical session state (active panel, conversation, agent runs). Broadcasts events to all connected clients.
  • Web Server: Axum-based HTTP server with embedded static assets. REST API endpoints: /api/state, /api/chat, /api/panels, /api/agents.
  • WebSocket Streaming: Real-time bidirectional communication for chat streaming, agent status, and panel updates.
  • QR Pairing: Secure device pairing via QR code using X25519 elliptic-curve key exchange + AES-GCM encryption. No passwords needed.
  • Session Journal: JSONL-based persistence of daemon events for crash recovery and audit.
  • Event Types: SendMessage, SwitchPanel, StartAgentTask, CancelAgentTask, StreamChunk, StreamComplete, AgentStatus, StateSnapshot, PanelData.

Terminal Client (CLI)

A full-featured terminal client for interacting with Hive without the GUI:

hive chat                    # Interactive chat TUI with streaming
hive chat --model claude-3   # Chat with a specific model
hive models                  # List available AI models
hive status                  # Show account tier, usage, sync status
hive sync push               # Push data to cloud
hive sync pull               # Pull data from cloud
hive config                  # View all config
hive config key value        # Set a config value
hive login                   # Authenticate with Hive Cloud
hive remote                  # Show remote connection status

Built with Ratatui for a polished terminal UI with keyboard navigation, scrolling, and real-time streaming display.


Hive Cloud

Optional cloud services for users who want cross-device sync, cloud AI routing, and team features:

  • HiveGateway: Cloud AI proxy — routes requests through Hive's infrastructure for users without their own API keys. Configurable in Settings.
  • Cloud Sync: Push/pull config, conversations, and learning data across devices via encrypted blob storage.
  • Admin TUI: Terminal dashboard (hive-admin) with 6 tabs — Dashboard, Users, Gateway, Relay, Sync, Teams. Real-time refresh.
  • Relay: Event relay service for remote control across networks (not just LAN).

Cloud features are entirely optional. Hive works fully offline with local models.


P2P Federation

Hive instances can discover and communicate with each other over the network, enabling distributed swarm execution and shared learning.

  • Peer Discovery: UDP broadcast for automatic LAN discovery, plus manual bootstrap peers.
  • WebSocket Transport: Bidirectional P2P connections with split-sink/stream architecture.
  • Typed Protocol: 12 built-in message kinds (Hello, Welcome, Heartbeat, TaskRequest, TaskResult, AgentRelay, ChannelSync, FleetLearn, StateSync, etc.) plus extensible custom types.
  • Channel Sync: Synchronize agent channel messages across federated instances.
  • Fleet Learning: Share learning outcomes across a distributed fleet of nodes.
  • Peer Registry: Persistent tracking of known peers with connection state management.

Integrations

All integrations make real API calls — no stubs or simulated backends.

  • Google: Gmail (REST API), Calendar, Contacts, Drive, Docs, Sheets, Tasks
  • Microsoft: Outlook Email (Graph v1.0), Outlook Calendar
  • Messaging: Slack (Web API), Discord, Teams, Telegram, Matrix, WebChat, WhatsApp (Business Cloud API), Signal, Google Chat, iMessage (macOS)
  • Git Hosting: GitHub (REST API), GitLab (REST API), Bitbucket (REST API v2.0)
  • Cloud Platforms: AWS (EC2, S3, Lambda, CloudWatch), Azure (VMs, Blob, Functions, Monitor), GCP (Compute, Storage, Functions, Logging)
  • Cloud Services: Cloudflare, Vercel, Supabase
  • Databases: PostgreSQL, MySQL, SQLite — query execution, schema introspection, connection pooling
  • DevOps: Docker (full container lifecycle), Kubernetes (pods, deployments, services, logs)
  • Knowledge: Notion (pages, databases, blocks), Obsidian (vault management, frontmatter)
  • Project Management: Jira (issues, projects, transitions), Linear (issues, teams, cycles), Asana (tasks, projects, sections)
  • Smart Home: Philips Hue
  • Voice: ClawdTalk (voice-over-phone via Telnyx)
  • Protocol: MCP client + server, OAuth2 (PKCE), Webhooks, P2P federation

Blockchain / Web3

  • EVM (Ethereum, Polygon, Arbitrum, BSC, Avalanche, Optimism, Base): Wallet management, real JSON-RPC, per-chain RPC configuration, ERC-20 token deployment with cost estimation.
  • Solana: Wallet management, real JSON-RPC, SPL token deployment with rent cost estimation.
  • Security: Encrypted private key storage (AES-256-GCM), no keys ever sent to AI providers.

Persistence & Data Storage

All state persists between sessions. Nothing is lost on restart.

  • Conversations: SQLite + JSON files (~/.hive/memory.db + conversations/)
  • Messages: SQLite (~/.hive/memory.db)
  • Conversation search: SQLite FTS5 with Porter stemming (~/.hive/memory.db)
  • Cost records: SQLite (~/.hive/memory.db)
  • Application logs: SQLite (~/.hive/memory.db)
  • Collective memory: SQLite, WAL mode (~/.hive/memory.db)
  • Learning data: SQLite (~/.hive/learning.db)
  • Kanban boards: JSON (~/.hive/kanban.json)
  • Config & API keys: JSON + encrypted vault (~/.hive/config.json)
  • Vector memory: LanceDB (~/.hive/memory.lance)
  • Session state: JSON (~/.hive/session.json)
  • Session journal: JSONL (~/.hive/session_journal.jsonl)
  • Knowledge cache: HTML/text files (~/.hive/knowledge/)
  • Workflows: YAML definitions (~/.hive/workflows/)
  • Skills: TOML with capability tags (~/.hive/skills/{name}.toml)

On startup, Hive automatically backfills JSON-only conversations into SQLite and builds FTS5 search indexes. Path traversal protection on all file operations. SQLite databases use WAL mode with NORMAL synchronous and foreign key enforcement.


Architecture — 19-Crate Workspace

hive/crates/
├── hive_app           Binary entry point — window, tray, build.rs (winres)
├── hive_ui            Workspace shell, chat service, learning bridge, title/status bars
├── hive_ui_core       Theme, actions, globals, sidebar, welcome screen
├── hive_ui_panels     All panel implementations (23 panels)
├── hive_core          Config, SecurityGateway, persistence (SQLite), Kanban, channels, scheduling
├── hive_ai            15 AI providers, capability-aware router, complexity classifier, context engine,
│                      RAG, embeddings (OpenAI + Ollama), LanceDB memory, background indexer
├── hive_agents        Queen, HiveMind, Coordinator, collective memory, MCP (19 tools),
│                      Universal Skills (SkillLoader + SkillExecutor, TOML persistence),
│                      competence detection, skill authoring
├── hive_shield        PII detection, secrets scanning, vulnerability assessment, access control
├── hive_learn         Outcome tracking, routing learner, preference model, prompt evolution
├── hive_assistant     Email, calendar, reminders, approval workflows, daily briefings
├── hive_fs            File operations, git integration, file watchers, search
├── hive_terminal      Command execution, Docker sandbox, browser automation, local AI detection
├── hive_docs          Document generation — CSV, DOCX, XLSX, HTML, Markdown, PDF, PPTX
├── hive_blockchain    EVM + Solana wallets, RPC config, token deployment with real JSON-RPC
├── hive_integrations  Google, Microsoft, GitHub, GitLab, Bitbucket, messaging, databases,
│                      cloud platforms, DevOps, knowledge bases, project management, OAuth2
├── hive_network       P2P federation, WebSocket transport, UDP discovery, peer registry, sync
├── hive_remote        Background daemon, WebSocket relay, QR pairing, web UI, REST API
├── hive_admin         Cloud admin TUI dashboard (6 tabs) [binary: hive-admin]
└── hive_cli           Terminal AI client (chat, sync, config, models) [binary: hive]

Dependency Flow

hive_app
  └── hive_ui
        ├── hive_ui_core
        ├── hive_ui_panels
        ├── hive_ai ──────── hive_core
        ├── hive_agents ──── hive_ai, hive_learn, hive_core
        ├── hive_shield
        ├── hive_learn ───── hive_core
        ├── hive_assistant ─ hive_core, hive_ai
        ├── hive_fs
        ├── hive_terminal
        ├── hive_docs
        ├── hive_blockchain
        ├── hive_integrations
        ├── hive_network
        └── hive_remote ──── hive_core, hive_ai, hive_agents, hive_network

hive_admin (standalone binary)
hive_cli   (standalone binary) ── hive_core

UI — 23 Panels

All panels are wired to live backend data. No mock data in the production path. 8 built-in themes with community voting and custom theme support.

  • Chat: Main AI conversation with streaming responses
  • History: Conversation history browser with full-text search
  • Files: Project file browser with create/delete/navigate
  • Specs: Specification management
  • Agents: Multi-agent swarm orchestration with task tree drill-down
  • Workflows: Visual workflow builder (drag-and-drop nodes)
  • Channels: Agent messaging channels (Telegram/Slack-style)
  • Kanban: Persistent task board with drag-and-drop
  • Monitor: Real-time system monitoring (CPU, RAM, disk, provider status)
  • Logs: Application logs viewer with level filtering
  • Costs: AI cost tracking with historical graphs and CSV export
  • Git Ops: Full git workflow — staging, commits, push, PRs, branches, gitflow, LFS
  • Skills: Universal skill marketplace — browse, install, remove, toggle, create cross-model TOML skills
  • Routing: Model routing configuration
  • Models: Unified model discovery and selection across all providers
  • Learning: Self-improvement dashboard with metrics, preferences, insights
  • Shield: Interactive privacy controls and security scanning
  • Assistant: Personal assistant — email, calendar, reminders, approvals
  • Review: Code review management
  • Network: Peer collaboration and P2P connectivity
  • Token Launch: Token deployment wizard with chain selection
  • Settings: Application configuration with persist-on-save, cloud account fields
  • Help: Documentation and guides

Installation

Option 1: Download Pre-Built Binary (Recommended)

Grab the latest release for your platform from GitHub Releases.

  • Windows (x64): hive-windows-x64.zip (requires Windows 10/11 and a DirectX 12 GPU)
  • macOS (Apple Silicon): hive-macos-arm64.tar.gz (requires macOS 12+ and a Metal-capable GPU)
  • Linux (x64): hive-linux-x64.tar.gz (requires a Vulkan-capable GPU + drivers)

Windows: Extract the zip, run hive.exe. No installer needed.

macOS: Extract, then chmod +x hive && ./hive (or move to /usr/local/bin/).

Linux: Extract, then chmod +x hive && ./hive. Vulkan drivers required.

# Ubuntu/Debian
sudo apt install mesa-vulkan-drivers vulkan-tools

# Fedora
sudo dnf install mesa-vulkan-drivers vulkan-tools

# Arch
sudo pacman -S vulkan-icd-loader vulkan-tools

Option 2: Build from Source

Requires Rust toolchain, protoc (Protocol Buffers compiler), and platform-specific build dependencies (see README for full details).

# Install protoc (required for gRPC)
# macOS: brew install protobuf
# Ubuntu: sudo apt install protobuf-compiler
# Windows: choco install protoc

git clone https://github.com/PatSul/Hive.git
cd Hive/hive
cargo build --release
cargo run --release

Run Tests

cd hive
cargo test --workspace

Configuration

On first launch, Hive creates ~/.hive/config.json. Add your API keys to enable cloud providers:

{
  "anthropic_api_key": "sk-ant-...",
  "openai_api_key": "sk-...",
  "google_api_key": "AIza...",
  "ollama_url": "http://localhost:11434",
  "lmstudio_url": "http://localhost:1234"
}

All keys are stored locally and never transmitted except to their respective providers. HiveShield scans every outbound request before it leaves your machine. Configure provider preferences, model routing rules, budget limits, and security policies through the Settings panel in the UI.

Ready to get started?

Download Hive for free, or view the source on GitHub.
