opencode-hive
v1.4.8
OpenCode plugin for Agent Hive - from vibe coding to hive coding
From Vibe Coding to Hive Coding — The OpenCode plugin that brings structure to AI-assisted development.
Why Hive?
Stop losing context. Stop repeating decisions. Start shipping with confidence.
Vibe: "Just make it work"
Hive: Plan → Review → Approve → Execute → Ship
Installation
npm install opencode-hive
Optional: Enable MCP Research Tools
- Create `.opencode/mcp-servers.json` using the template:
  - From this repo: `packages/opencode-hive/templates/mcp-servers.json`
  - Or from the installed npm package: `node_modules/opencode-hive/templates/mcp-servers.json`
- Set `EXA_API_KEY` to enable `websearch_exa` (optional).
- Restart OpenCode.
This enables tools like `grep_app_searchGitHub`, `context7_query-docs`, `websearch_web_search_exa`, and the official ast-grep MCP tools: `ast_grep_dump_syntax_tree`, `ast_grep_test_match_code_rule`, `ast_grep_find_code`, and `ast_grep_find_code_by_rule`.
The builtin `ast_grep` MCP now runs the official server through a plain local command array: it uses the bundled uv and prepends the bundled ast-grep binary directory to PATH.
The Workflow
1. Create Feature — `hive_feature_create("dark-mode")`
2. Write Plan — AI generates a structured plan
3. Review — Optional `vscode-hive` companion for overview/plan review and comments
4. Approve — `hive_plan_approve()`
5. Execute — Tasks run in isolated git worktrees
6. Ship — Clean commits, full audit trail
Planning-mode delegation
During planning, "don't execute" means "don't implement" (no code edits, no worktrees). Read-only exploration is explicitly allowed and encouraged, both via local tools and by delegating to Scout.
When delegation is warranted, synthesize the task before handing it off: name the file paths or search target, state the expected result, and say what done looks like. Workers do not inherit planner context.
For execution work, treat worker output as evidence to inspect, not proof to trust blindly. OpenCode is the supported execution harness in 1.4.0; if you use vscode-hive, treat it as a review/sidebar companion. Read changed files yourself and run the shared verification commands on the main branch before claiming the batch is complete.
`hive_network_query` is an optional lookup, not a default step. There is no startup lookup: first orient on the live request and live repo state. Planning, orchestration, and review roles get network access first, and live-file verification is still required even when network results look relevant.
Local skill and model use cases
- Local skill experiments: keep a skill in `<project>/.opencode/skills/<id>/SKILL.md` or `<project>/.claude/skills/<id>/SKILL.md` and add it to `skills` or `autoLoadSkills` for that repo only.
- Local model tuning: set per-agent models or variants in `<project>/.hive/agent-hive.json` when you want a repository-specific routing setup without changing your global OpenCode defaults.
Canonical Delegation Threshold
- Delegate to Scout when you cannot name the file path upfront, expect to inspect 2+ files, or the question is open-ended ("how/where does X work?").
- Local `read`/`grep`/`glob` is acceptable only for a single known file and a bounded question.
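The threshold above can be sketched as a small predicate. This is a hypothetical helper for illustration, not part of the plugin's API; the input field names are assumptions.

```typescript
// Hypothetical sketch of the delegation threshold; not a real opencode-hive API.
// Inputs describe what the planner knows upfront.
interface DelegationQuestion {
  knownFilePath: string | null;   // exact path, if the planner can name one
  expectedFilesToInspect: number; // how many files the answer likely touches
  openEnded: boolean;             // "how/where does X work?" style question
}

function shouldDelegateToScout(q: DelegationQuestion): boolean {
  if (q.openEnded) return true;                   // open-ended => Scout
  if (q.knownFilePath === null) return true;      // cannot name the path => Scout
  if (q.expectedFilesToInspect >= 2) return true; // 2+ files => Scout
  return false; // single known file, bounded question => local read/grep/glob
}
```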
Tools
Feature Management
| Tool | Description |
|------|-------------|
| hive_feature_create | Create a new feature |
| hive_feature_complete | Mark feature as complete |
Planning
| Tool | Description |
|------|-------------|
| hive_plan_write | Write plan.md |
| hive_plan_read | Read plan and comments |
| hive_plan_approve | Approve plan for execution |
Tasks
| Tool | Description |
|------|-------------|
| hive_tasks_sync | Generate tasks from plan, or rewrite pending plan tasks with refreshPending: true after a plan amendment |
| hive_task_create | Create a manual task with explicit dependsOn and optional structured metadata |
| hive_task_update | Update task status/summary |
Worktree
| Tool | Description |
|------|-------------|
| hive_worktree_start | Start normal work on task (creates worktree) |
| hive_worktree_create | Resume blocked task in existing worktree |
| hive_worktree_commit | Complete task (applies changes) |
| hive_worktree_discard | Abort task (discard changes) |
Troubleshooting
Repeated blocked-resume errors / loop
If you see repeated retries around continueFrom: "blocked", use this protocol:
- Call `hive_status()` first.
- If status is `pending` or `in_progress`, start normally with `hive_worktree_start({ feature, task })`.
- Only use blocked resume when status is exactly `blocked`: `hive_worktree_create({ task, continueFrom: "blocked", decision })`.
Do not retry the same blocked-resume call on non-blocked statuses; re-check hive_status() and use hive_worktree_start for normal starts.
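The protocol above reduces to a pure decision on the task status. The following sketch is illustrative (the real flow calls `hive_status()` and the chosen worktree tool through the agent runtime); the status union is an assumption based on the statuses named in this section.

```typescript
// Sketch of the resume protocol: pick the worktree tool from the task status.
type TaskStatus = "pending" | "in_progress" | "blocked" | "done";

function chooseWorktreeCall(
  status: TaskStatus
): "hive_worktree_start" | "hive_worktree_create" | "none" {
  if (status === "pending" || status === "in_progress") {
    return "hive_worktree_start"; // normal start
  }
  if (status === "blocked") {
    return "hive_worktree_create"; // resume with continueFrom: "blocked"
  }
  return "none"; // anything else: re-check hive_status(), do not retry blocked resume
}
```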
Using with DCP plugin
When using Dynamic Context Pruning (DCP), use a Hive-safe config in ~/.config/opencode/dcp.jsonc:
- `manualMode.enabled: true`
- `manualMode.automaticStrategies: false`
- `turnProtection.enabled: true` with `turnProtection.turns: 12`
- `tools.settings.nudgeEnabled: false`
- protect key tools in `tools.settings.protectedTools` (at least: `hive_status`, `hive_worktree_start`, `hive_worktree_create`, `hive_worktree_commit`, `hive_worktree_discard`, `question`)
- disable aggressive auto strategies: `strategies.deduplication.enabled: false`, `strategies.supersedeWrites.enabled: false`, `strategies.purgeErrors.enabled: false`
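Put together, a Hive-safe `~/.config/opencode/dcp.jsonc` might look like the following sketch. The key paths are taken from the bullets above; verify the exact nesting against your DCP version's schema before using it.

```jsonc
{
  "manualMode": { "enabled": true, "automaticStrategies": false },
  "turnProtection": { "enabled": true, "turns": 12 },
  "tools": {
    "settings": {
      "nudgeEnabled": false,
      "protectedTools": [
        "hive_status", "hive_worktree_start", "hive_worktree_create",
        "hive_worktree_commit", "hive_worktree_discard", "question"
      ]
    }
  },
  "strategies": {
    "deduplication": { "enabled": false },
    "supersedeWrites": { "enabled": false },
    "purgeErrors": { "enabled": false }
  }
}
```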
For normal usage, set the OpenCode plugin entry to "opencode-hive@latest". Keep "opencode-hive" only for local contributor testing with a symlinked checkout.
OpenCode alignment: honest hook contract and bounded recovery
Agent Hive aligns to the OpenCode surfaces that exist today. It does not depend on a first-class upstream Hive orchestration API, a native checkpoint API, or a hidden todo-sync surface.
At the current plugin/runtime layer, Hive relies on these supported hooks:
- `event`
- `config`
- `chat.message`
- `experimental.chat.messages.transform`
- `tool.execute.before`
That contract matters because some older wording implied deeper integration than OpenCode currently exposes. The recovery path in this branch is hook-timed and file-backed, not storage-level magic.
Todo ownership is intentionally modest:
- OpenCode todos remain session-scoped and replace-all.
- Hive does not add a new upstream todo API.
- This plugin does not expose a derived projected-todo field or stale refresh hints as part of its runtime contract.
- Worker and subagent sessions still follow normal OpenCode limits; they should not be described as independently syncing Hive task state.
- `.hive` remains the durable source of truth for plan/task/worktree state.
Compaction recovery uses durable Hive artifacts instead of transcript dumps:
- Global session state is written to `.hive/sessions.json`.
- Feature-local mirrors are written to `.hive/features/<feature>/sessions.json`.
- Session classification distinguishes `primary`, `subagent`, `task-worker`, and `unknown`.
- Primary and subagent recovery can replay the stored user directive once after compaction, with `directiveRecoveryState` enforcing the one-replay-then-escalate contract.
- Task-worker recovery does not replay the whole user directive. Workers re-read `worker-prompt.md` and continue the current task from the existing worktree state.
- Recovery uses durable session metadata plus `worker-prompt.md` instead of an extra checkpoint artifact, idle child-session replay, or parent-scoped task rehydration.
In practice, the durable task-level recovery surface is the semantic .hive artifact set for that task, with worker-prompt.md plus bound feature/task/session metadata as the re-entry surface. Hive does not persist raw transcript dumps as its recovery contract.
Task-worker recovery is intentionally strict:
- keep the same role
- do not delegate
- do not re-read the full codebase
- re-read `worker-prompt.md`
- finish only the current task assignment
- do not merge
- do not start the next task
- do not use orchestration tools unless the worker prompt explicitly says so
- continue from the last known point
This split is deliberate: primary and subagent sessions replay the stored user directive once after compaction, while task workers recover from their own bounded task contract instead of relying on parent-session replay.
Manual tasks follow the same DAG model as plan-backed tasks:
- `hive_task_create()` stores manual tasks with explicit `dependsOn` metadata instead of inferring sequential order.
- Manual tasks are append-only.
- Intermediate insertion requires plan amendment.
- Dependencies on unfinished work require plan amendment.
- Structured manual-task fields such as `goal`, `description`, `acceptanceCriteria`, `files`, and `references` are turned into worker-facing `spec.md` content.
- If review feedback changes downstream sequencing or scope, update `plan.md` and run `hive_tasks_sync({ refreshPending: true })` so pending plan tasks pick up the new `dependsOn` graph and regenerated specs.
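The structured-fields-to-spec step can be sketched as a simple renderer. The field names match the list above; the interface shape and the exact markdown layout are assumptions, not the plugin's actual output format.

```typescript
// Hypothetical sketch: turn structured manual-task fields into worker-facing
// spec.md content. Layout is illustrative only.
interface ManualTaskFields {
  goal: string;
  description: string;
  acceptanceCriteria: string[];
  files: string[];
  references: string[];
}

function renderSpecMd(f: ManualTaskFields): string {
  return [
    `## Goal\n${f.goal}`,
    `## Description\n${f.description}`,
    `## Acceptance Criteria\n${f.acceptanceCriteria.map((c) => `- ${c}`).join("\n")}`,
    `## Files\n${f.files.map((p) => `- ${p}`).join("\n")}`,
    `## References\n${f.references.map((r) => `- ${r}`).join("\n")}`,
  ].join("\n\n");
}
```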
This recovery path applies to the built-in forager-worker, the runtime-only hive-helper (a bounded operational assistant for hard tasks), and custom agents derived from forager-worker. hive-helper handles merge recovery, state clarification, interrupted-state wrap-up, and safe manual-follow-up assistance while staying inside the current approved DAG boundary. It may summarize observable state (including `helperStatus`) and create safe append-only manual tasks, but it may never update plan-backed task state and must escalate DAG-changing requests for plan amendment. For the issue-72 style situation ("task 3 is locally testable but task 4 has not started"): ask Helper to clarify the locally testable state and wrap-up surface first; ask it to create a safe manual follow-up only when the work can append after the current DAG; and if you think you need 3b/3c inserted before later plan-backed work, amend `plan.md` instead. hive-helper is intentionally OpenCode runtime-only in v1, not a custom base agent, and it does not appear in `.github/agents/` or `packages/vscode-hive/src/generators/`.
hive-helper is also not a network consumer. It benefits indirectly from better upstream planning/orchestration/review decisions, but it does not call `hive_network_query` itself.
Prompt Budgeting & Observability
Hive automatically bounds worker prompt sizes to prevent context overflow and tool output truncation.
Budgeting Defaults
| Limit | Default | Description |
|-------|---------|-------------|
| maxTasks | 10 | Number of previous tasks included |
| maxSummaryChars | 2,000 | Max chars per task summary |
| maxContextChars | 20,000 | Max chars per context file |
| maxTotalContextChars | 60,000 | Total context budget |
When limits are exceeded, content is truncated with ...[truncated] markers and file path hints are provided so workers can read the full content.
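The truncation behavior described above can be sketched as a single function. The function name and path-hint wording are illustrative assumptions; only the `...[truncated]` marker is taken from the text.

```typescript
// Sketch of the budgeting behavior: content over the limit is cut, tagged
// with a "...[truncated]" marker, and given a path hint so workers can
// read the full content from disk. Illustrative only.
function budgetContent(content: string, maxChars: number, sourcePath: string): string {
  if (content.length <= maxChars) return content;
  return content.slice(0, maxChars) + `...[truncated] (read full content: ${sourcePath})`;
}
```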
Observability
hive_worktree_start and blocked-resume hive_worktree_create output include metadata fields:
- `promptMeta`: character counts for plan, context, previousTasks, spec, and workerPrompt
- `payloadMeta`: JSON payload size, and whether the prompt is inlined or referenced by file
- `budgetApplied`: budget limits, tasks included/dropped, path hints for dropped content
- `warnings`: array of threshold exceedances with severity levels (info/warning/critical)
Prompt Files
Large prompts are written to .hive/features/<feature>/tasks/<task>/worker-prompt.md and passed by file reference (workerPromptPath) rather than inlined in tool output. This prevents truncation of large prompts.
That same worker-prompt.md path is also reused during compaction recovery so task workers can re-anchor to the exact task assignment after a compacted session resumes.
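The inline-versus-file decision can be sketched as follows. The threshold parameter and return shape are hypothetical; the real plugin reports the outcome via `payloadMeta` and `workerPromptPath`.

```typescript
// Sketch: small prompts are inlined in tool output, large prompts are
// written to worker-prompt.md and passed by file reference instead.
interface PromptPayload {
  inline?: string;          // small prompt, embedded directly
  workerPromptPath?: string; // large prompt, referenced by file path
}

function packWorkerPrompt(prompt: string, inlineLimit: number, promptPath: string): PromptPayload {
  return prompt.length <= inlineLimit
    ? { inline: prompt }
    : { workerPromptPath: promptPath }; // avoids truncating large prompts
}
```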
Plan Format
```markdown
# Feature Name

## Overview
What we're building and why.

## Tasks

### 1. Task Name
Description of what to do.

### 2. Another Task
Description.
```

Configuration
Hive reads config from these locations, in order:
- `<project>/.hive/agent-hive.json` (preferred)
- `<project>/.opencode/agent_hive.json` (legacy fallback, used only when the new file is missing)
- `~/.config/opencode/agent_hive.json` (global fallback)
If `.hive/agent-hive.json` exists but is invalid JSON or an invalid shape, Hive warns, skips the legacy project file, and falls back to the global config and defaults.
You can customize agent models, variants, disable skills, and disable MCP servers.
Project-local config example
Create .hive/agent-hive.json:
```json
{
  "$schema": "https://raw.githubusercontent.com/tctinh/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
  "agentMode": "unified",
  "disableSkills": []
}
```

Disable Skills or MCPs
```json
{
  "$schema": "https://raw.githubusercontent.com/tctinh/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
  "disableSkills": ["brainstorming", "writing-plans"],
  "disableMcps": ["websearch", "ast_grep"]
}
```

Available Skills
| ID | Description |
|----|-------------|
| brainstorming | Use before any creative work. Explores user intent, requirements, and design through collaborative dialogue before implementation. |
| writing-plans | Use when you have a spec or requirements for a multi-step task. Creates detailed implementation plans with bite-sized tasks. |
| executing-plans | Use when you have a written implementation plan. Executes tasks in batches with review checkpoints. |
| dispatching-parallel-agents | Use when facing 2+ independent tasks. Dispatches multiple agents to work concurrently on unrelated problems. |
| test-driven-development | Use when implementing any feature or bugfix. Enforces write-test-first, red-green-refactor cycle. |
| systematic-debugging | Use when encountering any bug or test failure. Requires root cause investigation before proposing fixes. |
| code-reviewer | Use when reviewing implementation changes against an approved plan or task to catch missing requirements, YAGNI, dead code, and risky patterns. |
| verification-before-completion | Use before claiming work is complete. Requires running verification commands and confirming output before success claims. |
Available MCPs
| ID | Description | Requirements |
|----|-------------|--------------|
| websearch | Web search via Exa AI. Real-time web searches and content scraping. | Set EXA_API_KEY env var |
| context7 | Library documentation lookup via Context7. Query up-to-date docs for any programming library. | None |
| grep_app | GitHub code search via grep.app. Find real-world code examples from public repositories. | None |
| ast_grep | AST-aware code search and replace via ast-grep. Pattern matching across 25+ languages. | None (runs via npx) |
Per-Agent Skills
Each agent can have specific skills enabled. If configured, only those skills appear in hive_skill():
```json
{
  "agents": {
    "hive-master": {
      "skills": ["brainstorming", "writing-plans", "executing-plans"]
    },
    "forager-worker": {
      "skills": ["test-driven-development", "verification-before-completion"]
    }
  }
}
```

How skills filtering works:
| Config | Result |
|--------|--------|
| skills omitted | All skills enabled (minus global disableSkills) |
| skills: [] | All skills enabled (minus global disableSkills) |
| skills: ["tdd", "debug"] | Only those skills enabled |
Note: Wildcards like ["*"] are not supported - use explicit skill names or omit the field entirely for all skills.
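The filtering table above can be sketched as a small function. The helper is hypothetical; the semantics (omitted or empty list means all skills minus global `disableSkills`, explicit list means only those) follow the table.

```typescript
// Sketch of the skills filtering rules from the table above.
function enabledSkills(
  allSkills: string[],
  agentSkills: string[] | undefined,
  disableSkills: string[]
): string[] {
  if (!agentSkills || agentSkills.length === 0) {
    // omitted or []: everything minus the global disable list
    return allSkills.filter((s) => !disableSkills.includes(s));
  }
  return agentSkills; // explicit list: only those skills; no wildcards
}
```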
Auto-load Skills
Use autoLoadSkills to automatically inject skills into an agent's system prompt at session start.
```json
{
  "$schema": "https://raw.githubusercontent.com/tctinh/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
  "agents": {
    "hive-master": {
      "autoLoadSkills": ["parallel-exploration"]
    },
    "forager-worker": {
      "autoLoadSkills": ["test-driven-development", "verification-before-completion"]
    }
  }
}
```

Supported skill sources:
autoLoadSkills accepts both Hive builtin skill IDs and file-based skill IDs. Resolution order:
1. Hive builtin — skills bundled with opencode-hive (always win if the ID matches)
2. Project OpenCode — `<project>/.opencode/skills/<id>/SKILL.md`
3. Global OpenCode — `~/.config/opencode/skills/<id>/SKILL.md`
4. Project Claude — `<project>/.claude/skills/<id>/SKILL.md`
5. Global Claude — `~/.claude/skills/<id>/SKILL.md`
Skill IDs must be safe directory names (no /, \, .., or .). Missing or invalid skills emit a warning and are skipped—startup continues without failure.
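The ID safety rule can be sketched as a validator. This is a hypothetical helper; it interprets the forbidden `..` and `.` as path segments, which is an assumption about the exact rule.

```typescript
// Sketch of the skill-ID safety check: IDs must be plain directory names
// with no path separators or dot segments. Interpretation is illustrative.
function isSafeSkillId(id: string): boolean {
  if (id.length === 0) return false;
  if (id.includes("/") || id.includes("\\")) return false; // no separators
  if (id.includes("..") || id === ".") return false;        // no dot segments
  return true;
}
```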
How skills and autoLoadSkills interact:
- `skills` controls what appears in `hive_skill()` — the agent can manually load these on demand
- `autoLoadSkills` injects skills unconditionally at session start — no manual loading needed
- These are independent: a skill can be auto-loaded but not appear in `hive_skill()`, or vice versa
- User `autoLoadSkills` are merged with defaults (use global `disableSkills` to remove defaults)
Default auto-load skills by agent:
| Agent | autoLoadSkills default |
|-------|------------------------|
| hive-master | parallel-exploration |
| forager-worker | test-driven-development, verification-before-completion |
| hive-helper | (none) |
| scout-researcher | (none) |
| architect-planner | parallel-exploration |
| swarm-orchestrator | (none) |
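The merge rule above (user `autoLoadSkills` merged with per-agent defaults, de-duplicated, with global `disableSkills` removing entries) can be sketched as:

```typescript
// Hypothetical sketch of the autoLoadSkills merge; not the plugin's code.
function mergedAutoLoadSkills(
  defaults: string[],
  userSkills: string[],
  disableSkills: string[]
): string[] {
  const merged = Array.from(new Set([...defaults, ...userSkills])); // de-duplicate
  return merged.filter((s) => !disableSkills.includes(s));          // apply global disables
}
```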
Per-Agent Model Variants
You can set a variant for each Hive agent to control model reasoning/effort level. Variants are keys that map to model-specific option overrides defined in your opencode.json.
```json
{
  "$schema": "https://raw.githubusercontent.com/tctinh/agent-hive/main/packages/opencode-hive/schema/agent_hive.schema.json",
  "agents": {
    "hive-master": {
      "model": "anthropic/claude-sonnet-4-20250514",
      "variant": "high"
    },
    "forager-worker": {
      "model": "anthropic/claude-sonnet-4-20250514",
      "variant": "medium"
    },
    "scout-researcher": {
      "variant": "low"
    }
  }
}
```

The variant value must match a key in your OpenCode config at `provider.<provider>.models.<model>.variants`. For example, with Anthropic models you might configure thinking budgets:
```jsonc
// opencode.json
{
  "provider": {
    "anthropic": {
      "models": {
        "claude-sonnet-4-20250514": {
          "variants": {
            "low": { "thinking": { "budget_tokens": 5000 } },
            "medium": { "thinking": { "budget_tokens": 10000 } },
            "high": { "thinking": { "budget_tokens": 25000 } }
          }
        }
      }
    }
  }
}
```

Precedence: If a prompt already has an explicit variant set, the per-agent config acts as a default and will not override it. Invalid or missing variant keys are treated as a no-op (the model runs with default settings).
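That precedence rule can be sketched as a small resolver. The function and its parameters are illustrative assumptions, not part of the plugin or OpenCode APIs.

```typescript
// Sketch of variant precedence: explicit prompt variant wins, the per-agent
// config is only a default, and unknown variant keys are a no-op.
function resolveVariant(
  promptVariant: string | undefined,
  agentVariant: string | undefined,
  knownVariants: string[]
): string | undefined {
  const chosen = promptVariant ?? agentVariant; // prompt wins over agent config
  return chosen !== undefined && knownVariants.includes(chosen)
    ? chosen
    : undefined; // undefined => model runs with default settings
}
```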
Custom Derived Subagents
Define plugin-only custom subagents with customAgents. Freshly initialized agent_hive.json files already include starter template entries under customAgents; those seeded *-example-template entries are placeholders only, should be renamed or deleted before real use, and are intentionally worded so planners/orchestrators are unlikely to select them as configured. Each custom agent must declare:
- `baseAgent`: one of `forager-worker` or `hygienic-reviewer`
- `description`: delegation guidance injected into primary planner/orchestrator prompts
hive-helper is not a custom base agent. In v1 it stays runtime-only for isolated merge recovery and does not appear in `.github/agents/` or `packages/vscode-hive/src/generators/`.
It is also not a network consumer; planning, orchestration, and review roles get network access first.
Published example (validated by src/e2e/custom-agent-docs-example.test.ts):
```json
{
  "agents": {
    "forager-worker": {
      "variant": "medium"
    },
    "hygienic-reviewer": {
      "model": "github-copilot/gpt-5.2-codex"
    }
  },
  "customAgents": {
    "forager-ui": {
      "baseAgent": "forager-worker",
      "description": "Use for UI-heavy implementation tasks.",
      "model": "anthropic/claude-sonnet-4-20250514",
      "temperature": 0.2,
      "variant": "high"
    },
    "reviewer-security": {
      "baseAgent": "hygienic-reviewer",
      "description": "Use for security-focused review passes."
    }
  }
}
```

Inheritance rules when a custom agent field is omitted:
| Field | Inheritance behavior |
|-------|----------------------|
| model | Inherits resolved base agent model (including user overrides in agents) |
| temperature | Inherits resolved base agent temperature |
| variant | Inherits resolved base agent variant |
| autoLoadSkills | Merges with base agent auto-load defaults/overrides, de-duplicates, and applies global disableSkills |
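The inheritance table above can be sketched as a resolver over a simplified agent shape. The interface and function are hypothetical and only mirror the fallback/merge rules in the table.

```typescript
// Sketch of custom-agent field inheritance: omitted fields fall back to the
// resolved base agent; autoLoadSkills merge, de-duplicate, and respect
// global disableSkills. Shapes are illustrative, not the plugin's types.
interface AgentConfig {
  model?: string;
  temperature?: number;
  variant?: string;
  autoLoadSkills?: string[];
}

function resolveCustomAgent(
  base: AgentConfig,
  custom: AgentConfig,
  disableSkills: string[]
): AgentConfig {
  return {
    model: custom.model ?? base.model,
    temperature: custom.temperature ?? base.temperature,
    variant: custom.variant ?? base.variant,
    autoLoadSkills: Array.from(
      new Set([...(base.autoLoadSkills ?? []), ...(custom.autoLoadSkills ?? [])])
    ).filter((s) => !disableSkills.includes(s)),
  };
}
```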
ID guardrails:
- `customAgents` keys cannot reuse built-in Hive agent IDs
- plugin-reserved aliases are blocked (`hive`, `architect`, `swarm`, `scout`, `forager`, `hygienic`, `receiver`)
- operational IDs are blocked (`build`, `plan`, `code`)
Compaction classification follows the base agent:
- `forager-worker` derivatives are treated as `task-worker`
- `hygienic-reviewer` derivatives are treated as `subagent`
This ensures custom workers recover with the same execution constraints as their base role.
Custom Models
Override models for specific agents:
```json
{
  "agents": {
    "hive-master": {
      "model": "anthropic/claude-sonnet-4-20250514",
      "temperature": 0.5
    }
  }
}
```

Pair with VS Code
For the full OpenCode-first workflow, install vscode-hive as an optional review/sidebar companion for inline comments and approvals.
License
MIT with Commons Clause — Free for personal and non-commercial use. See LICENSE for details.
Stop vibing. Start hiving. 🐝
