@hung319/opencode-hive
v1.10.10
OpenCode plugin for Agent Hive - from vibe coding to hive coding
From Vibe Coding to Hive Coding — The OpenCode plugin that brings structure to AI-assisted development with Smart Context Engine and Multi-Agent Orchestration.
Why Hive?
Stop losing context. Stop repeating decisions. Start shipping with confidence.
Vibe: "Just make it work"
Hive: Plan → Review → Approve → Execute → Ship

What's New in v1.10
🚀 Smart Context Engine
- Pattern Learning — Learns from task execution, predicts next actions
- Auto-Summary — Extracts key changes from diffs automatically
- Context Insights — Suggests next steps based on learned patterns
🤖 Multi-Agent Orchestration
- Delegation Hints — Complexity scoring, agent recommendation
- Batch Dispatch — Start multiple tasks in parallel with hive_worktree_batch
- Auto Agent Selection — Recommends best agent based on task type
⚡ Performance
- Incremental Loading — LRU cache for skills and context
- Background Sync — Non-blocking periodic sync
- Lazy MCP Init — Fast plugin startup
Environment Variables
Required for Optional Features
| Variable | Description | Required |
|----------|-------------|----------|
| EXA_API_KEY | API key for Exa AI web search | Only if using websearch MCP |
| SEARXNG_URL | URL for self-hosted SearXNG instance | Only if using searxng MCP |
| CXXFLAGS | C++ compiler flags for native modules (set to "-std=c++20") | Only on Node.js v24+ |
Setup Examples
# For web search (Exa AI) - get key at https://exa.ai
export EXA_API_KEY="your-exa-api-key"
# For privacy meta-search (self-hosted)
export SEARXNG_URL="https://your-searxng-instance.com"
# For Node.js v24+ native modules (ast-grep, agent-booster, memory)
export CXXFLAGS="-std=c++20"

Add these to your ~/.bashrc or ~/.zshrc for persistence.
Quick Setup
Step 1: Check system
bunx @hung319/opencode-hive doctor

Step 2: Install plugin
npm install @hung319/opencode-hive

Step 3: Quick install extras
npx @hung319/opencode-hive doctor --install
npx @hung319/opencode-hive doctor --install /path/to/project

Features Overview
MCP Servers (6 Search Options)
| MCP | Best For | API Key |
|-----|----------|---------|
| websearch | Current web info | Exa AI (free tier) |
| context7 | Library docs | Context7 (free tier) |
| grep_app | GitHub code patterns | None |
| ddg_search | DuckDuckGo search | None (free) |
| searxng | Privacy meta-search | Self-hostable |
| pare_search | Structured ripgrep/fd search | None |
Tools
| Category | Tools |
|----------|-------|
| Memory | hive_memory_*, hive_vector_* |
| Planning | hive_plan_*, hive_task_* |
| Execution | hive_worktree_*, hive_worktree_batch |
| Code | ast_grep_*, agent-booster, LSP tools |
Utilities
| Feature | Description |
|---------|-------------|
| PatternLearner | Learn from tasks, predict next actions |
| AutoSummary | Extract key changes from diffs |
| DelegationHints | Complexity + agent recommendation |
The Workflow
- Create Feature — hive_feature_create("dark-mode")
- Write Plan — AI generates structured plan
- Review — You review in VS Code, add comments
- Approve — hive_plan_approve()
- Execute — Tasks run in isolated git worktrees
- Ship — Clean commits, full audit trail
Planning-mode delegation
During planning, "don't execute" means "don't implement" (no code edits, no worktrees). Read-only exploration is explicitly allowed and encouraged, both via local tools and by delegating to Scout.
Canonical Delegation Threshold
- Delegate to Scout when you cannot name the file path upfront, expect to inspect 2+ files, or the question is open-ended ("how/where does X work?").
- Local read/grep/glob is acceptable only for a single known file and a bounded question.
Tools
Feature Management
| Tool | Description |
|------|-------------|
| hive_feature_create | Create a new feature |
| hive_feature_complete | Mark feature as complete |
Planning
| Tool | Description |
|------|-------------|
| hive_plan_write | Write plan.md |
| hive_plan_read | Read plan and comments |
| hive_plan_approve | Approve plan for execution |
Tasks
| Tool | Description |
|------|-------------|
| hive_tasks_sync | Generate tasks from plan |
| hive_task_create | Create manual task |
| hive_task_update | Update task status/summary |
Worktree
| Tool | Description |
|------|-------------|
| hive_worktree_start | Start normal work on task (creates worktree) |
| hive_worktree_create | Resume blocked task in existing worktree |
| hive_worktree_commit | Complete task (applies changes) |
| hive_worktree_discard | Abort task (discard changes) |
Troubleshooting
Repeated blocked-resume errors / loop
If you see repeated retries around continueFrom: "blocked", use this protocol:
- Call hive_status() first.
- If status is pending or in_progress, start normally with hive_worktree_start({ feature, task }).
- Only use blocked resume when status is exactly blocked: hive_worktree_create({ task, continueFrom: "blocked", decision }).
Do not retry the same blocked-resume call on non-blocked statuses; re-check hive_status() and use hive_worktree_start for normal starts.
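The protocol above can be sketched as a small dispatch guard. This is an illustration only: the status values and tool names are taken from the text, but the function itself is not part of Hive's API.

```typescript
// Illustrative guard for the blocked-resume protocol; hive_status / hive_worktree_*
// are stand-ins for the real tool calls, and the status values are assumed.
type TaskStatus = "pending" | "in_progress" | "blocked" | "done";

function chooseStartTool(status: TaskStatus): "hive_worktree_start" | "hive_worktree_create" | "none" {
  if (status === "pending" || status === "in_progress") return "hive_worktree_start"; // normal start
  if (status === "blocked") return "hive_worktree_create"; // blocked resume, only here
  return "none"; // anything else: re-check hive_status(); do not retry blocked resume
}
```

The key point is that the blocked-resume branch is unreachable for any non-blocked status, which is exactly what breaks the retry loop.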
Using with DCP plugin
When using Dynamic Context Pruning (DCP), use a Hive-safe config in ~/.config/opencode/dcp.jsonc:
- manualMode.enabled: true
- manualMode.automaticStrategies: false
- turnProtection.enabled: true with turnProtection.turns: 12
- tools.settings.nudgeEnabled: false
- protect key tools in tools.settings.protectedTools (at least: hive_status, hive_worktree_start, hive_worktree_create, hive_worktree_commit, hive_worktree_discard, question)
- disable aggressive auto strategies: strategies.deduplication.enabled: false, strategies.supersedeWrites.enabled: false, strategies.purgeErrors.enabled: false
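Assembled into a single file, the settings above would look roughly like this. The exact nesting is an assumption inferred from the listed key paths, so verify it against DCP's own schema before relying on it:

```jsonc
// ~/.config/opencode/dcp.jsonc — Hive-safe sketch (nesting assumed from the key paths above)
{
  "manualMode": { "enabled": true, "automaticStrategies": false },
  "turnProtection": { "enabled": true, "turns": 12 },
  "tools": {
    "settings": {
      "nudgeEnabled": false,
      "protectedTools": [
        "hive_status",
        "hive_worktree_start",
        "hive_worktree_create",
        "hive_worktree_commit",
        "hive_worktree_discard",
        "question"
      ]
    }
  },
  "strategies": {
    "deduplication": { "enabled": false },
    "supersedeWrites": { "enabled": false },
    "purgeErrors": { "enabled": false }
  }
}
```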
For local plugin testing, keep OpenCode plugin entry as "@hung319/opencode-hive" (not "@hung319/opencode-hive@latest").
Prompt Budgeting & Observability
Hive automatically bounds worker prompt sizes to prevent context overflow and tool output truncation.
Budgeting Defaults
| Limit | Default | Description |
|-------|---------|-------------|
| maxTasks | 10 | Number of previous tasks included |
| maxSummaryChars | 2,000 | Max chars per task summary |
| maxContextChars | 20,000 | Max chars per context file |
| maxTotalContextChars | 60,000 | Total context budget |
When limits are exceeded, content is truncated with ...[truncated] markers, and file path hints are provided so workers can read the full content.
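The truncation behavior can be sketched as follows. The function name, marker placement, and hint format are assumptions for illustration, not Hive's actual implementation:

```typescript
// Minimal sketch of budget truncation: cut at the limit, append the marker,
// and attach a path hint so the worker can read the full content later.
function truncateWithMarker(text: string, limit: number, pathHint?: string): string {
  if (text.length <= limit) return text; // under budget: unchanged
  const truncated = text.slice(0, limit) + "...[truncated]";
  return pathHint ? `${truncated} (full content: ${pathHint})` : truncated;
}
```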
Observability
hive_worktree_start and blocked-resume hive_worktree_create output include metadata fields:
- promptMeta: character counts for plan, context, previousTasks, spec, workerPrompt
- payloadMeta: JSON payload size, whether prompt is inlined or referenced by file
- budgetApplied: budget limits, tasks included/dropped, path hints for dropped content
- warnings: array of threshold exceedances with severity levels (info/warning/critical)
Prompt Files
Large prompts are written to .hive/features/<feature>/tasks/<task>/worker-prompt.md and passed by file reference (workerPromptPath) rather than inlined in tool output. This prevents truncation of large prompts.
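The inline-versus-file decision can be sketched like this. The threshold value is an assumption (mirroring maxTotalContextChars); Hive's real cutoff may differ:

```typescript
// Illustrative inline-vs-file decision for worker prompts.
// INLINE_LIMIT is an assumed threshold, not Hive's real cutoff.
const INLINE_LIMIT = 60_000;

function promptPayload(prompt: string, workerPromptPath: string): { inline?: string; workerPromptPath?: string } {
  // Small prompts are inlined; large ones are passed by file reference
  // so the tool output itself is never truncated.
  return prompt.length <= INLINE_LIMIT ? { inline: prompt } : { workerPromptPath };
}
```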
Plan Format
# Feature Name
## Overview
What we're building and why.
## Tasks
### 1. Task Name
Description of what to do.
### 2. Another Task
Description.

Configuration
Hive uses a config file at ~/.config/opencode/agent_hive.json. You can customize agent models, variants, disable skills, and disable MCP servers.
Disable Skills or MCPs
{
"$schema": "https://raw.githubusercontent.com/hung319/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
"disableSkills": ["brainstorming", "writing-plans"],
"disableMcps": ["websearch", "pare_search"]
}

Available Skills
| ID | Description |
|----|-------------|
| brainstorming | Use before any creative work. Explores user intent, requirements, and design through collaborative dialogue before implementation. |
| writing-plans | Use when you have a spec or requirements for a multi-step task. Creates detailed implementation plans with bite-sized tasks. |
| executing-plans | Use when you have a written implementation plan. Executes tasks in batches with review checkpoints. |
| dispatching-parallel-agents | Use when facing 2+ independent tasks. Dispatches multiple agents to work concurrently on unrelated problems. |
| test-driven-development | Use when implementing any feature or bugfix. Enforces write-test-first, red-green-refactor cycle. |
| systematic-debugging | Use when encountering any bug or test failure. Requires root cause investigation before proposing fixes. |
| code-reviewer | Use when reviewing implementation changes against an approved plan or task to catch missing requirements, YAGNI, dead code, and risky patterns. |
| verification-before-completion | Use before claiming work is complete. Requires running verification commands and confirming output before success claims. |
Available MCPs
| ID | Description | Requirements |
|----|-------------|--------------|
| websearch | Web search via Exa AI. Real-time web searches and content scraping. | Set EXA_API_KEY env var |
| context7 | Library documentation lookup via Context7. Query up-to-date docs for any programming library. | None |
| grep_app | GitHub code search via grep.app. Find real-world code examples from public repositories. | None |
| pare_search | Structured ripgrep/fd search with 65-95% token reduction. | None (runs via npx) |
| ddg_search | DuckDuckGo search - free, no API key required. | None |
| searxng | Privacy meta-search via SearXNG. Requires self-hosted instance. | Set SEARXNG_URL env var |
Per-Agent Skills
Each agent can have specific skills enabled. If configured, only those skills appear in hive_skill():
{
"agents": {
"hive": {
"skills": ["brainstorming", "writing-plans", "executing-plans"]
},
"forager-worker": {
"skills": ["test-driven-development", "verification-before-completion"]
}
}
}

How skills filtering works:
| Config | Result |
|--------|--------|
| skills omitted | All skills enabled (minus global disableSkills) |
| skills: [] | All skills enabled (minus global disableSkills) |
| skills: ["tdd", "debug"] | Only those skills enabled |
Note: Wildcards like ["*"] are not supported - use explicit skill names or omit the field entirely for all skills.
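The filtering rules in the table above can be sketched as a small resolver. Function and parameter names are assumptions; the handling of global disableSkills when skills is explicitly listed is also assumed here to apply in every case:

```typescript
// Sketch of the skills filtering table: omitted or empty `skills` means all
// skills; otherwise only the listed ones. Global disableSkills is then applied.
function resolveSkills(allSkills: string[], agentSkills: string[] | undefined, disableSkills: string[]): string[] {
  const enabled = agentSkills && agentSkills.length > 0 ? agentSkills : allSkills;
  return enabled.filter((id) => !disableSkills.includes(id));
}
```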
Auto-load Skills
Use autoLoadSkills to automatically inject skills into an agent's system prompt at session start.
{
"$schema": "https://raw.githubusercontent.com/hung319/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
"agents": {
"hive": {
"autoLoadSkills": ["parallel-exploration"]
},
"forager-worker": {
"autoLoadSkills": ["test-driven-development", "verification-before-completion"]
}
}
}

Supported skill sources:
autoLoadSkills accepts both Hive builtin skill IDs and file-based skill IDs. Resolution order:
- Hive builtin — Skills bundled with opencode-hive (always win if ID matches)
- Project OpenCode — <project>/.opencode/skills/<id>/SKILL.md
- Global OpenCode — ~/.config/opencode/skills/<id>/SKILL.md
- Project Claude — <project>/.claude/skills/<id>/SKILL.md
- Global Claude — ~/.claude/skills/<id>/SKILL.md
Skill IDs must be safe directory names (no /, \, .., or .). Missing or invalid skills emit a warning and are skipped—startup continues without failure.
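One plausible reading of the ID rule can be expressed as a validator. This is an interpretation for illustration (rejecting any dot also rules out "." and ".."), not Hive's actual check:

```typescript
// Illustrative skill-ID validation: reject empty IDs and any ID containing
// a path separator or a dot (which also covers "." and "..").
function isSafeSkillId(id: string): boolean {
  return id.length > 0 && !/[/\\.]/.test(id);
}
```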
How skills and autoLoadSkills interact:
- skills controls what appears in hive_skill() — the agent can manually load these on demand
- autoLoadSkills injects skills unconditionally at session start — no manual loading needed
- These are independent: a skill can be auto-loaded but not appear in hive_skill(), or vice versa
- User autoLoadSkills are merged with defaults (use global disableSkills to remove defaults)
Default auto-load skills by agent:
| Agent | autoLoadSkills default |
|-------|------------------------|
| hive | parallel-exploration |
| forager-worker | test-driven-development, verification-before-completion |
| scout-researcher | (none) |
| architect-planner | parallel-exploration |
| swarm-orchestrator | (none) |
Per-Agent Model Variants
You can set a variant for each Hive agent to control model reasoning/effort level. Variants are keys that map to model-specific option overrides defined in your opencode.json.
{
"$schema": "https://raw.githubusercontent.com/hung319/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
"agents": {
"hive": {
"model": "anthropic/claude-sonnet-4-20250514",
"variant": "high"
},
"forager-worker": {
"model": "anthropic/claude-sonnet-4-20250514",
"variant": "medium"
},
"scout-researcher": {
"variant": "low"
}
}
}

The variant value must match a key in your OpenCode config at provider.<provider>.models.<model>.variants. For example, with Anthropic models you might configure thinking budgets:
// opencode.json
{
"provider": {
"anthropic": {
"models": {
"claude-sonnet-4-20250514": {
"variants": {
"low": { "thinking": { "budget_tokens": 5000 } },
"medium": { "thinking": { "budget_tokens": 10000 } },
"high": { "thinking": { "budget_tokens": 25000 } }
}
}
}
}
}
}

Precedence: If a prompt already has an explicit variant set, the per-agent config acts as a default and will not override it. Invalid or missing variant keys are treated as a no-op (the model runs with default settings).
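The precedence rule can be sketched as a resolver. Names are assumptions for illustration; only the ordering (explicit prompt variant wins, per-agent config is a fallback, unknown keys are a no-op) comes from the text:

```typescript
// Sketch of variant precedence: prompt-level variant beats the per-agent
// default, and any candidate not defined in the model's variants is a no-op.
function resolveVariant(
  promptVariant: string | undefined,
  agentVariant: string | undefined,
  knownVariants: string[],
): string | undefined {
  const candidate = promptVariant ?? agentVariant;
  // Unknown or missing keys fall through to undefined (model default settings).
  return candidate !== undefined && knownVariants.includes(candidate) ? candidate : undefined;
}
```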
Custom Derived Subagents
Define plugin-only custom subagents with customAgents. Freshly initialized agent_hive.json files already include starter template entries under customAgents. These seeded *-example-template entries are placeholders only: rename or delete them before real use. They are intentionally worded so that planners and orchestrators are unlikely to select them as configured. Each custom agent must declare:
- baseAgent: one of forager-worker or hygienic-reviewer
- description: delegation guidance injected into primary planner/orchestrator prompts
Published example (validated by src/e2e/custom-agent-docs-example.test.ts):
{
"agents": {
"forager-worker": {
"variant": "medium"
},
"hygienic-reviewer": {
"model": "github-copilot/gpt-5.2-codex"
}
},
"customAgents": {
"forager-ui": {
"baseAgent": "forager-worker",
"description": "Use for UI-heavy implementation tasks.",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.2,
"variant": "high"
},
"reviewer-security": {
"baseAgent": "hygienic-reviewer",
"description": "Use for security-focused review passes."
}
}
}

Inheritance rules when a custom agent field is omitted:
| Field | Inheritance behavior |
|-------|----------------------|
| model | Inherits resolved base agent model (including user overrides in agents) |
| temperature | Inherits resolved base agent temperature |
| variant | Inherits resolved base agent variant |
| autoLoadSkills | Merges with base agent auto-load defaults/overrides, de-duplicates, and applies global disableSkills |
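The autoLoadSkills row of the table above can be sketched as a merge helper. The merge order (base first, then custom) is an assumption; the de-duplication and global disableSkills steps come from the table:

```typescript
// Sketch of autoLoadSkills inheritance: base defaults plus custom entries,
// de-duplicated, then global disableSkills applied.
function mergeAutoLoadSkills(base: string[], custom: string[], disableSkills: string[]): string[] {
  return [...new Set([...base, ...custom])].filter((id) => !disableSkills.includes(id));
}
```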
ID guardrails:
- customAgents keys cannot reuse built-in Hive agent IDs
- Plugin-reserved aliases are blocked (hive, architect, swarm, scout, forager, hygienic, receiver)
- Operational IDs are blocked (build, plan, code)
Custom Models
Override models for specific agents:
{
"agents": {
"hive": {
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.5
}
}
}

Agent Booster Tools
Ultra-fast code editing powered by Rust+WASM. 52x faster than Morph LLM, FREE (no API key required).
Tools
| Tool | Description |
|------|-------------|
| hive_code_edit | Ultra-fast code editing with automatic fallback |
| hive_lazy_edit | Edit with // ... existing code ... markers |
| hive_booster_status | Check agent-booster availability |
Usage
// Edit with old/new content
hive_code_edit({
path: "src/index.ts",
oldContent: "const old = 'value';",
newContent: "const new = 'updated';"
})

Lazy Edit Example
// Use markers for partial code
hive_lazy_edit({
path: "src/component.tsx",
snippet: `// ... existing code ...
export const newFeature = () => { ... };
// ... existing code ...`
})

Configuration
{
"agentBooster": {
"enabled": false,
"serverUrl": "http://localhost:3001",
"serverPort": 3001
}
}

Vector Memory Tools
Semantic memory search powered by HNSW indexing. Find memories by meaning, not just keywords.
Tools
| Tool | Description |
|------|-------------|
| hive_vector_search | Semantic search across memories |
| hive_vector_add | Add memory with vector indexing |
| hive_vector_status | Check vector memory status |
Memory Types
- decision: Architectural decisions, design choices
- learning: Insights, discoveries, patterns found
- preference: User preferences, coding style
- blocker: Known blockers, workarounds
- context: Important context about the project
- pattern: Code patterns, recurring solutions
Usage
// Add a memory
hive_vector_add({
content: "Use async/await instead of .then() chains",
type: "learning",
scope: "async-patterns",
tags: ["javascript", "best-practice"]
})
// Search memories
hive_vector_search({
query: "async patterns JavaScript",
type: "learning",
limit: 10
})

Configuration
{
"vectorMemory": {
"enabled": false,
"indexPath": "~/.config/opencode/hive/vector-index",
"dimensions": 384
}
}

Hive Doctor
System health check with actionable fixes. Run this when setting up or troubleshooting.
Tools
| Tool | Description |
|------|-------------|
| hive_doctor | Full health check with install commands |
| hive_doctor_quick | Quick status summary |
Usage
// Full health check with actionable output
hive_doctor()
// Quick status
hive_doctor_quick()

Standalone (before installing):
bunx @hung319/opencode-hive doctor # Check system
bunx @hung319/opencode-hive doctor --fix # Auto-fix issues

What it checks
Agent Tools (optional)
- @sparkleideas/agent-booster - 52x faster code editing
- @sparkleideas/memory - Vector memory for semantic search
CLI Tools (optional)
- dora - Code navigation (SCIP-based)
- auto-cr - Automated code review (SWC)
- scip-typescript - TypeScript indexer
- btca - BTC/A blockchain agent
- ddg_search - DuckDuckGo search (free)
MCPs - Auto-installed with plugin
- websearch, context7, grep_app
- ddg_search, searxng
C++20 Tip - For @ast-grep/napi native modules
Example Output
╔═══════════════════════════════════════════════════════════╗
║ 🐝 Hive Doctor v1.6.6 - System Check ║
╚═══════════════════════════════════════════════════════════╝
Status: ✅ READY
🚀 Agent Tools (2/2) ✅
🔧 CLI Tools (5/5) ✅
📦 MCPs: Auto-installed with plugin
⚡ C++20 for native modules:
✓ Active in session
🚀 Quick Install
All tools ready!

Auto-fix Mode
bunx @hung319/opencode-hive doctor --fix

This will:
- Set CXXFLAGS for current session
- Add to ~/.bashrc for future sessions
- Install available CLI tools via npx
C++20 for Native Modules
Node.js v24+ requires C++20 for native modules like @ast-grep/napi.
Auto-fix:
bunx @hung319/opencode-hive doctor --fix

Manual:
echo 'export CXXFLAGS="-std=c++20"' >> ~/.bashrc
source ~/.bashrc
CXXFLAGS="-std=c++20" npm install @ast-grep/napi

Example output when setup is needed:
╔═══════════════════════════════════════════════════════════╗
║ 🐝 Hive Doctor v1.6.3 - System Check ║
╚═══════════════════════════════════════════════════════════╝
Status: ⚠️ NEEDS SETUP
🚀 Agent Tools (0/2)
○ @sparkleideas/agent-booster not installed
○ @sparkleideas/memory not installed
🔧 CLI Tools (1/5)
✅ dora (via npx)
○ auto-cr not available
...
📦 MCPs: Auto-installed with plugin
💡 Tip: C++20 for native modules not detected. To fix the @ast-grep/napi build, run:
echo 'export CXXFLAGS="-std=c++20"' >> ~/.bashrc
🚀 Quick Install
npx -y auto-cr-cmd && npm install @sparkleideas/agent-booster
### Setup Workflow
**For AI Agents (LLM):**
- Run: hive_doctor
- Parse: actionItems[] for priority: "high"
- Install: Run quickInstall.commands
- Config: Apply config recommendations
- Verify: Run hive_doctor again to confirm
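The agent steps above can be sketched as a report parser. The DoctorReport shape here is hypothetical, inferred only from the field names mentioned in the steps (actionItems[], priority, quickInstall.commands); the real schema may differ:

```typescript
// Hypothetical shapes for hive_doctor output, inferred from the agent workflow above.
interface ActionItem {
  priority: "high" | "low";
  description: string;
}
interface DoctorReport {
  actionItems: ActionItem[];
  quickInstall: { commands: string[] };
}

function highPriorityItems(report: DoctorReport): string[] {
  // Step 2 of the agent workflow: pick out the must-fix items.
  return report.actionItems.filter((item) => item.priority === "high").map((item) => item.description);
}
```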
**For Humans:**
1. **Open OpenCode** and ask "Run hive_doctor"
2. **Look at the summary** - it tells you what's missing
3. **Install what you need** - commands are ready to copy
4. **Optional: Configure** - enable snip, vector memory for extra features
Quick Install All:
npm install @notprolands/ast-grep-mcp @paretools/search @sparkleideas/memory
npx -y @butttons/dora auto-cr-cmd
License
MIT with Commons Clause — Free for personal and non-commercial use. See LICENSE for details.
Stop vibing. Start hiving. 🐝
