
@hiai-gg/hiai-opencode v0.1.9

Unified OpenCode plugin — canonical 12-agent model with bundled skills, MCP integrations, LSP, and permissions in one install.

Downloads: 1,431

hiai-opencode


hiai-opencode is an OpenCode plugin that turns vanilla OpenCode into an opinionated multi-agent cockpit.

What you get on top of plain OpenCode:

  • 12-agent canonical model with peer-aware prompts — Bob orchestrates, Coder/Sub implement, Strategist plans, Critic gates, Researcher discovers via Context7/Firecrawl/grep_app/RAG/MemPalace, Designer drives Stitch UI generation, Brainstormer owns copy/SEO, Vision extracts PDFs/images, Manager keeps memory, Quality Guardian reviews, Guard sandboxes bash.
  • Mode → agent routing for task() delegation — quick/bounded/unspecified-low → Sub, deep/cross-module → Coder, ultrabrain → Strategist, visual-engineering/artistry → Designer, writing → Brainstormer, git-ops → Manager. No more "everything routes to coder".
  • MCP wiring out of the box — Stitch, Firecrawl, Context7, grep_app, websearch, RAG, MemPalace, Sequential-Thinking, Playwright. Each agent's prompt knows which MCP servers it owns.
  • LSP defaults for TypeScript, Svelte, ESLint, Bash, Pyright. Coder must run lsp_diagnostics after every edit.
  • Permission discipline — read-only agents cannot delegate; write-capable agents have explicit file-scope limits.

This repository is intended to be usable by someone who clones it from GitHub without any internal context. External MCP servers, skills, model providers, and auxiliary OpenCode plugins remain their own upstream projects; this plugin only provides OpenCode wiring, defaults, prompts, launchers, and documentation around them.

Why This Exists

I wanted the "one install, give me the whole cockpit" setup. More agents, more MCP servers, more skills, better defaults, fewer tiny config islands. 🚀

The problem: all those great tools do not magically become friends just because you installed them. Some are npm packages, some are Python tools, some need API keys, some need a local service, and some only wake up after the first run. Meanwhile your main agent can waste half the context window just reading everything you bolted on. Not chill. 😅

So hiai-opencode is my attempt to wire the best pieces I use into one OpenCode-friendly shape: agents, prompts, skills, MCP launchers, LSP defaults, and a clean config surface. It does not claim ownership of the upstream tools. It just tries to make them cooperate.

After the first install, a few MCP services may still need local dependencies or keys. You have two options: follow AGENTS.md by hand, or paste a bootstrap prompt like this into OpenCode:

Read AGENTS.md and finish hiai-opencode setup for this workspace.

Keep OpenCode plugins separate from MCP servers. Do not add MCP server packages to the OpenCode plugin list.

Check which MCP services can run on this machine, update hiai-opencode.json, install only missing user-level or project-local dependencies, and report missing API keys without printing secret values.

Then run `hiai-opencode doctor`, `hiai-opencode mcp-status`, and `opencode debug config`.

For the full operator playbook, see AGENTS.md. 🤖

Agents

| Agent | Role | When to use |
|-------|------|-------------|
| Bob | Orchestrator, router, distributor | Entry point; complex tasks needing multi-agent coordination |
| Coder | Deep implementation, focused execution | Complex features, refactors, deep work |
| Sub | Bounded cheap executor | Small targeted changes, quick fixes |
| Strategist | Planning, architecture, pre-check | Scope definition, architectural decisions |
| Guard | Final acceptor, workflow enforcer | Closure validation, output acceptance |
| Critic | Review gate, high-accuracy verification | Plan review, code review, regression catch |
| Researcher | Local + external search | Codebase exploration, documentation discovery |
| Designer | UI/visual, creative direction | Visual problems, UX decisions, branding |
| Brainstormer | Ideation, content, copy | Landing pages, CTA, feature copy, onboarding |
| Vision | Image/PDF/layout analysis | Visual inspection, multimodal interpretation |
| Manager | Memory, bootstrap, ledger | Durable state, session continuity, project init |

Modes (Task Routing)

Mode determines prompt append, variant, and reasoning effort. The executor agent is selected via mode → agent mapping.

| Mode | Agent | Prompt variant | When to use |
|------|-------|----------------|-------------|
| quick | sub | Fast bounded | Small targeted changes |
| writing | brainstormer | Docs/prose | Content, i18n, copy |
| deep | coder | Deep reasoning | Complex implementation |
| ultrabrain | strategist | Plan-only | Architecture, hard logic |
| visual-engineering | designer | UI/visual | Visual problems |
| artistry | designer | Creative | Brand, SEO, creative |
| git | platform-manager | Git ops | Version control operations |
| bounded | sub | Mid-tier bounded | Moderate effort changes |
| cross-module | coder | Deep substantial | Multi-component changes |
| unspecified-low | sub | Bounded | Unclassified small tasks |
| unspecified-high | coder | Deep | Unclassified substantial tasks |
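The mapping above can be pictured as a plain lookup table. This is an illustrative sketch, not the plugin's actual routing code; the fallback to coder for unknown modes is an assumption for the example.

```typescript
// Hypothetical sketch of the mode → agent mapping described above.
// Names mirror the table; the real routing lives in the plugin internals.
const MODE_TO_AGENT: Record<string, string> = {
  quick: "sub",
  bounded: "sub",
  "unspecified-low": "sub",
  deep: "coder",
  "cross-module": "coder",
  "unspecified-high": "coder",
  ultrabrain: "strategist",
  "visual-engineering": "designer",
  artistry: "designer",
  writing: "brainstormer",
  git: "platform-manager",
};

// Resolve the executor agent for a task() call.
// Falling back to "coder" for unknown modes is an assumption, not plugin behavior.
function agentForMode(mode: string): string {
  return MODE_TO_AGENT[mode] ?? "coder";
}
```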

Integrations

MCP integrations and which agents use them:

| Service | Key env var | Agent(s) | What it's for |
|---------|------------|----------|---------------|
| Stitch | STITCH_AI_API_KEY | Designer | UI generation, design systems, screen variants |
| Firecrawl | FIRECRAWL_API_KEY | Researcher | Web scraping, crawl, extract, search |
| Context7 | CONTEXT7_API_KEY | Researcher, Coder | Library API documentation |
| grep_app | — | Researcher | GitHub OSS code pattern search |
| websearch (Exa) | EXA_API_KEY | Researcher | General web search |
| websearch (Tavily) | TAVILY_API_KEY | Researcher | General web search (alt provider) |
| RAG | OPENCODE_RAG_URL | Researcher, Brainstormer, Manager | Project knowledge base |
| MemPalace | — | Manager (primary), all agents | Project memory and past decisions |
| Sequential-Thinking | — | Strategist, Critic | Deep reasoning for planning/review |
| Playwright | — | Coder | Browser tests and automation |

What You Get

  • 10 visible primary agents + 4 hidden system agents (Agent Skills, Sub, build, plan)
  • Mode-based task routing via task(category=..., ...) or task(mode=..., ...)
  • Skill materialization into OpenCode's skills/ view
  • MCP wiring for playwright, stitch, sequential-thinking, firecrawl, rag, mempalace, context7, plus remote websearch and grep_app
  • LSP wiring for TypeScript, Svelte, Python, Bash, and ESLint

Continuation, Ralph-Loop, And Auto-Start

The plugin runs three layered mechanisms so a session does not give up halfway through a TODO list.

| Mechanism | Trigger | What it does |
|-----------|---------|--------------|
| Todo continuation enforcer | session.idle while open todos remain | Injects a continuation prompt after a 2s countdown. Backs off on stagnation, abort, token-limit, and pending-question. |
| Ralph-loop | /ralph-loop <goal> or /ulw-loop <goal> | Runs an explicit completion loop. Stops on <promise>DONE</promise>. Cancel with /cancel-ralph. |
| Auto ralph-loop | N+ open todos in one session | Auto-starts ralph-loop in ULTRAWORK mode so each iteration is forced to delegate to specialist agents (researcher / strategist / coder / critic). |

Tune the auto-start threshold in hiai-opencode.json:

{
  "ralph_loop": {
    "enabled": true,
    "auto_start_threshold": 5
  }
}

Setting auto_start_threshold to 0 disables auto-start. The enforcer always yields to ralph-loop while it owns the session, so the two never inject duplicate prompts.
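The two settings combine into a simple rule. A minimal sketch, assuming a caller that already counts open todos; the function name is illustrative, not the plugin's API:

```typescript
// Shape of the "ralph_loop" block in hiai-opencode.json.
interface RalphLoopConfig {
  enabled: boolean;
  auto_start_threshold: number;
}

// Auto-start fires only when the feature is on, the threshold is nonzero,
// and the session has accumulated at least that many open todos.
function shouldAutoStartRalphLoop(cfg: RalphLoopConfig, openTodos: number): boolean {
  if (!cfg.enabled || cfg.auto_start_threshold === 0) return false;
  return openTodos >= cfg.auto_start_threshold;
}
```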

Requirements

Minimum:

  • OpenCode installed
  • Node.js 18+
  • Bun 1.1+

Usually required:

  • at least one model provider connected in OpenCode for the model IDs you configure

Optional, depending on which services you want:

  • FIRECRAWL_API_KEY for Firecrawl
  • STITCH_AI_API_KEY for Stitch
  • CONTEXT7_API_KEY for Context7
  • EXA_API_KEY for higher Exa websearch limits
  • TAVILY_API_KEY when mcp.websearch.provider is tavily
  • Python 3.9+ or uv for MemPalace
  • a running RAG endpoint if you enable rag
  • local language servers if you want LSP beyond the npm-bootstrapped helpers

Install

1. Register the OpenCode plugin

opencode plugin @hiai-gg/hiai-opencode@latest --global

Optional Dynamic Context Pruning plugin:

opencode plugin @tarquinen/opencode-dcp@latest --global

Do not put MCP server packages such as firecrawl-mcp, @playwright/mcp, or @modelcontextprotocol/server-sequential-thinking into the OpenCode plugin array. They are MCP servers, not OpenCode plugins. hiai-opencode only provides the OpenCode-side launch wiring for them through its mcp config and helper launchers.

Manual OpenCode config equivalent:

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@hiai-gg/hiai-opencode"]
}

The packaged minimal OpenCode example lives in config/opencode.json.

2. Create project config

Create a project-level config file at hiai-opencode.json in the project root or at .opencode/hiai-opencode.json.

Bash:

mkdir -p .opencode
cp hiai-opencode.json .opencode/hiai-opencode.json

PowerShell:

New-Item -ItemType Directory -Force .opencode
Copy-Item .\hiai-opencode.json .\.opencode\hiai-opencode.json

If you installed only from npm/OpenCode and do not have this repository checked out, create .opencode/hiai-opencode.json with the shape below and adjust it later.

{
  "models": {
    "bob": { "model": "openrouter/moonshotai/kimi-k2.6", "recommended": "xhigh" },
    "coder": { "model": "openrouter/minimax/minimax-m2.7", "recommended": "high" },
    "strategist": { "model": "openrouter/anthropic/claude-opus-latest", "recommended": "high" },
    "guard": { "model": "openrouter/qwen/qwen3.6-plus", "recommended": "middle" },
    "critic": { "model": "openrouter/xiaomi/mimo-v2.5-pro", "recommended": "high" },
    "designer": { "model": "openrouter/google/gemini-3.1-pro-preview", "recommended": "design" },
    "researcher": { "model": "openrouter/deepseek/deepseek-v4-flash", "recommended": "fast" },
    "manager": { "model": "openrouter/qwen/qwen3.5-9b", "recommended": "fast" },
    "brainstormer": { "model": "openrouter/mistralai/mistral-small-2603", "recommended": "writing" },
    "vision": { "model": "openrouter/google/gemma-4-26b-a4b-it", "recommended": "vision" }
  },
  "mcp": {
    "playwright": { "enabled": true },
    "sequential-thinking": { "enabled": true },
    "firecrawl": { "enabled": true },
    "mempalace": { "enabled": true, "pythonPath": "{env:MEMPALACE_PYTHON:-./.venv/bin/python}" },
    "rag": { "enabled": false },
    "stitch": { "enabled": false },
    "context7": { "enabled": true }
  }
}

By default, skill discovery is deterministic: hiai-opencode skills plus project-local .opencode/skills only. Global Claude/OpenCode/Agents skill folders are opt-in.

3. Connect models and add service keys

Model provider credentials belong to OpenCode Connect, not to hiai-opencode. This plugin only reads the 10 model IDs in models. Internal routing derives hidden agents and task categories from those 10 choices.

Use OpenCode Connect to authorize the providers behind your configured model IDs, then list the exact model IDs OpenCode knows about:

opencode models

Use the exact model IDs printed by OpenCode in hiai-opencode.json. For example, openrouter/minimax/minimax-m2.7 routes through OpenRouter, while minimax/minimax-m2.7 routes through a direct Minimax provider only if that provider is connected in OpenCode. Then add only the service keys for the MCP or search integrations you actually use:

export FIRECRAWL_API_KEY=...
export STITCH_AI_API_KEY=...
export CONTEXT7_API_KEY=...
export EXA_API_KEY=...
# or, if mcp.websearch.provider is "tavily":
export TAVILY_API_KEY=...

See Environment Variables And Keys for the full list.
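The provider distinction in the example above comes down to the leading segment of the model ID. An illustrative sketch, not the plugin's actual parser:

```typescript
// Split a fully qualified model ID at its first slash to get the provider.
// "openrouter/minimax/minimax-m2.7" → provider "openrouter";
// "minimax/minimax-m2.7" → provider "minimax" (direct provider).
function providerOf(modelId: string): string {
  const slash = modelId.indexOf("/");
  return slash === -1 ? modelId : modelId.slice(0, slash);
}
```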

4. Start and verify

opencode
hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO

opencode mcp list reads static .mcp.json files in many OpenCode versions. Runtime MCP servers launched by plugins may work but not appear there. If you want opencode mcp list visibility, run hiai-opencode export-mcp .mcp.json first.

hiai-opencode mcp-status is the fastest visibility check. It does not change OpenCode config; it reports config location, enabled MCP services, missing keys, and basic local runtime availability.

hiai-opencode doctor is the broader install/runtime diagnostic. It includes MCP status, static .mcp.json freshness, OpenCode Connect visibility, skill materialization, agent naming/count checks, LSP runtime checks, MemPalace Python source selection, and real MCP tool probes.

Development Install

Direct npm install is only needed for development or inspection:

npm install @hiai-gg/hiai-opencode

Local development:

git clone https://github.com/HiAi-gg/hiai-opencode.git
cd hiai-opencode
bun install
bun run build

Post-Install Bootstrap Prompt

After installing the plugin, you can ask OpenCode to finish local setup with this prompt:

Inspect this OpenCode workspace and finish hiai-opencode setup.

Do not add MCP server packages to the OpenCode plugin list. Keep OpenCode plugins separate from MCP servers.

Check that @hiai-gg/hiai-opencode is registered. If Dynamic Context Pruning is requested, install @tarquinen/opencode-dcp as a separate OpenCode plugin.

Find or create hiai-opencode.json in the project root or .opencode/. Use its mcp object as the single switchboard for enabling or disabling MCP services.

Keep skill discovery deterministic unless I explicitly ask for external skills. Leave global_opencode, project_claude, global_claude, project_agents, and global_agents disabled by default.

Enable only services that can run on this machine:
- playwright: requires node/npx; optionally set HIAI_PLAYWRIGHT_INSTALL_BROWSERS=1 before first run if browser binaries are needed.
- sequential-thinking: requires node/npx.
- firecrawl: requires FIRECRAWL_API_KEY.
- mempalace: requires uv or Python 3.9+ with pip; set `mcp.mempalace.pythonPath` (or `MEMPALACE_PYTHON`) if needed. Leave `HIAI_MCP_AUTO_INSTALL` enabled unless the user forbids package installation.
- rag: requires OPENCODE_RAG_URL or a running local endpoint at http://localhost:9002/tools/search.
- stitch: requires STITCH_AI_API_KEY.
- context7: works without a key but use CONTEXT7_API_KEY if available.

Check .env.example, report missing keys without printing secret values, and never invent or hardcode API keys.

Run verification commands where available:
- opencode debug config
- hiai-opencode mcp-status
- hiai-opencode export-mcp .mcp.json
- opencode mcp list --print-logs --log-level INFO

If a dependency is missing, install only user-level or project-local dependencies, explain every command before running it, and do not use sudo/admin rights unless the user explicitly asks.

Where To Change Things

Models

Canonical user-facing config for 10 primary agent models, MCP/LSP switches, service auth placeholders, and skill discovery:

Runtime loader for the bundled canonical config:

If you want to change model selection, edit the 10 entries in models. Do not add category-specific model choices unless you are intentionally developing the plugin internals.

Use fully qualified model IDs. Do not introduce local aliases like hiai-fast, sonnet, fast, or high. After connecting providers in OpenCode, run opencode models and copy the exact model IDs from that output into hiai-opencode.json.

Prompting

Prompting has two main layers:

  1. source prompt files in src/agents
  2. runtime assembly in src/plugin-handlers/agent-config-handler.ts

Important prompt entrypoints:

Name mapping and visibility:

Skills

Project skill definitions live under:

Skill materialization logic:

Built-in helper skills include browser automation, frontend UI/UX, review, git workflow, hiai-opencode setup, AI slop cleanup, and website-copywriting.

Website/product copy should use:

task(subagent_type="brainstormer", load_skills=["website-copywriting"], ...)

writer, copywriter, and content-writer are aliases for brainstormer.

Manager memory stewardship:

  • Use task(subagent_type="platform-manager", ...) or task(subagent_type="manager", ...) for MemPalace cleanup, session ledgers, TODO hygiene, and architecture decision handoff.
  • Manager writes only durable decisions and important project state. It should not dump raw chat logs into memory.
  • RAG is retrieval-first by default; Manager syncs architecture summaries to RAG only when the configured endpoint exposes write/upsert capability.

Skill discovery defaults:

{
  "skill_discovery": {
    "config_sources": true,
    "project_opencode": true,
    "global_opencode": false,
    "project_claude": false,
    "global_claude": false,
    "project_agents": false,
    "global_agents": false
  }
}

This keeps clean installs reproducible and avoids accidentally mixing Codex, Claude, Antigravity, and global OpenCode skill collections. To opt into external skill folders, enable the specific source you want instead of turning everything on.
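Conceptually, discovery just filters sources by these boolean flags. A sketch under that assumption; the flag names come from the config above, while the resolution function itself is hypothetical:

```typescript
// Skill discovery flags as they appear in hiai-opencode.json.
type SkillDiscoveryFlags = Record<string, boolean>;

// Return only the skill sources whose flag is enabled, preserving config order.
function enabledSkillSources(flags: SkillDiscoveryFlags): string[] {
  return Object.entries(flags)
    .filter(([, enabled]) => enabled)
    .map(([source]) => source);
}
```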

MCP

Default MCP registry:

OpenCode MCP config assembly:

Runtime helper launchers:

LSP

LSP defaults are defined in:

Environment Variables And Keys

Model provider keys are handled by OpenCode Connect. Do not add OPENROUTER_API_KEY, OPENAI_API_KEY, or ANTHROPIC_API_KEY to hiai-opencode config for normal model usage.

Important service variables:

  • STITCH_AI_API_KEY
  • FIRECRAWL_API_KEY
  • CONTEXT7_API_KEY
  • EXA_API_KEY
  • TAVILY_API_KEY
  • OLLAMA_BASE_URL
  • OLLAMA_MODEL
  • MEMPALACE_PYTHON
  • MEMPALACE_PALACE_PATH
  • OPENCODE_RAG_URL
  • HIAI_PLAYWRIGHT_INSTALL_BROWSERS
  • HIAI_MCP_AUTO_INSTALL
  • HIAI_OPENCODE_AUTO_EXPORT_MCP
  • HIAI_OPENCODE_MCP_EXPORT_PATH

Optional headless or non-Connect fallback variables are documented in .env.example, but they are not required for normal OpenCode model auth.

Use .env.example as the reference template. Create a local .env in your OpenCode environment or export these variables in your shell before startup.

MCP Service Notes

The user-facing MCP switchboard is the mcp object in hiai-opencode.json:

{
  "mcp": {
    "playwright": { "enabled": true },
    "mempalace": { "enabled": false },
    "websearch": { "enabled": true, "provider": "exa" },
    "grep_app": { "enabled": true }
  }
}

The source of truth for default MCP wiring is src/mcp/registry.ts. Change that file when adding a new MCP integration or changing its launch command, required env vars, or install strategy.

Works well as remote MCP

  • stitch
  • context7
  • websearch: defaults to Exa remote MCP. EXA_API_KEY is optional for Exa; set "provider": "tavily" and TAVILY_API_KEY to use Tavily.
  • grep_app

Works with local helper bootstrap

  • playwright: launches @playwright/mcp@latest through the helper npm runner. Set HIAI_PLAYWRIGHT_INSTALL_BROWSERS=1 if you want the launcher to install Chromium on first start.
  • sequential-thinking: launches @modelcontextprotocol/server-sequential-thinking through the helper npm runner.
  • firecrawl: launches firecrawl-mcp through the helper npm runner and requires FIRECRAWL_API_KEY.

Playwright On Minimal Linux Hosts

hiai-opencode mcp-status can confirm that the Playwright MCP launcher is available, but it cannot guarantee that Chromium can start on a minimal Linux image.

Playwright has two dependency layers:

  • Browser binary: install with HIAI_PLAYWRIGHT_INSTALL_BROWSERS=1 before OpenCode starts, or run npx playwright install chromium.
  • System libraries: if Chromium errors with missing packages like libnspr4, libnss3, libatk-bridge, or libgtk-3, install them with admin rights, usually sudo npx playwright install-deps chromium.

If sudo is not available:

  • Use an already installed system browser by editing the Playwright command in .opencode/hiai-opencode.json, for example:
{
  "mcp": {
    "playwright": {
      "enabled": true,
      "command": ["node", "{pluginRoot}/assets/mcp/playwright.mjs", "--browser", "chrome"],
      "timeout": 600000
    }
  }
}
  • Try --browser msedge if Edge is installed.
  • Use a remote/CDP browser or the agent-browser/playwright-cli skill path if those tools are installed.
  • Use curl only as a degraded HTTP check. It does not replace browser interaction, screenshots, auth flows, or client-side app verification.

Needs upstream runtime or extra setup

  • mempalace: prefers uv; otherwise uses Python. You can force interpreter selection via mcp.mempalace.pythonPath or MEMPALACE_PYTHON. If HIAI_MCP_AUTO_INSTALL is not 0, false, or no, the launcher can run python -m pip install --user mempalace on first start.
  • rag: requires your own running endpoint

Important Windows note

On some Windows/OpenCode environments, local MCP process spawning can fail with EPERM for cmd or node. If you see that:

  • the plugin config is likely correct
  • the remaining issue is the host runtime's local process spawn behavior

This most often affects sequential-thinking and mempalace, and sometimes local npx-backed tools.

Troubleshooting MCP Servers

Firecrawl tools return "FIRECRAWL_API_KEY missing"

The skill_mcp env scrubber filters process.env before launching stdio MCP servers — secret-shaped names (*_API_KEY, *_TOKEN, etc.) and npm/pnpm config vars are stripped so they cannot leak into a malicious server. Keys you set via hiai-opencode.json are an explicit allowlist and pass through.
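The scrubbing rule can be pictured like this. The patterns come from the description above; the function itself is a simplified sketch, not the plugin's real skill_mcp code:

```typescript
// Secret-shaped variable names, per the *_API_KEY / *_TOKEN rule above.
const SECRET_PATTERN = /(_API_KEY|_TOKEN)$/;

// Drop secret-shaped variables unless they are on the explicit allowlist
// (i.e. keys set via hiai-opencode.json pass through untouched).
function scrubEnv(
  env: Record<string, string>,
  allowlist: Set<string>,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(env)) {
    if (SECRET_PATTERN.test(name) && !allowlist.has(name)) continue;
    out[name] = value;
  }
  return out;
}
```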

If your key only lives in process.env, move it into the MCP environment block:

{
  "mcp": {
    "firecrawl": {
      "enabled": true,
      "environment": { "FIRECRAWL_API_KEY": "fc-..." }
    }
  }
}

(Versions ≤ 0.1.8 had a bug where customEnv was merged before filtering, so even an explicit FIRECRAWL_API_KEY got stripped. Fixed in 0.1.9 — explicit environment always wins over the filter.)

Playwright MCP fails to find Chromium

When @playwright/mcp cannot locate a system browser, point it at one explicitly:

{
  "mcp": {
    "playwright": {
      "enabled": true,
      "environment": {
        "PLAYWRIGHT_MCP_EXECUTABLE_PATH": "/usr/bin/chromium"
      }
    }
  }
}

The path can be Chromium, Chrome, or Edge. Run npx playwright install chromium first if no browser is installed.

skill_mcp(playwright) connects to localhost:3001 and fails

If mcp__playwright__browser_* works but skill_mcp(mcp_name="playwright", ...) errors with a connection to localhost:3001/mcp, you have a leftover HTTP-mode MCP registration in your global or parent opencode.json. Either start the HTTP server, or remove that registration so the plugin's stdio registration (npx @playwright/mcp@latest) is the only one. As a workaround, agents (Vision, Critic) can call mcp__playwright__browser_* direct tools instead of going through skill_mcp.

Diagnostics

The plugin now emits startup warnings for common misconfiguration, including:

  • .opencode/opencode.json containing plugin: ["list"]
  • enabled MCP integrations with missing required env vars such as FIRECRAWL_API_KEY or STITCH_AI_API_KEY

Available CLI:

hiai-opencode doctor
hiai-opencode mcp-status
hiai-opencode export-mcp .mcp.json

By default, the plugin auto-exports .mcp.json at workspace startup when the file is missing. This closes the visibility gap where runtime plugin MCP works but opencode mcp list only reads static files. Control it with:

export HIAI_OPENCODE_AUTO_EXPORT_MCP=if-missing  # default
export HIAI_OPENCODE_AUTO_EXPORT_MCP=always      # overwrite only managed hiai-opencode exports
export HIAI_OPENCODE_AUTO_EXPORT_MCP=force       # force overwrite even non-managed files
export HIAI_OPENCODE_AUTO_EXPORT_MCP=0           # disable auto-export
export HIAI_OPENCODE_MCP_EXPORT_PATH=.mcp.json   # override path
export HIAI_OPENCODE_EXPORT_MCP_MODE=safe        # export-mcp command mode: safe|force

Inside OpenCode, use the slash command:

/doctor
/mcp-status

Example output:

MCP Servers:
✅ playwright           - backend ok
⚠️  rag                 - enabled, http://localhost:9002/tools/search not reachable
✅ firecrawl            - backend ok
❌ mempalace            - python not found
⚠️  stitch              - enabled, API key missing (STITCH_AI_API_KEY)

hiai-opencode export-mcp writes a standard .mcp.json so hosts whose mcp list ignores plugin runtime MCP can still show the same servers statically. Exports are marker-tagged as hiai-managed; by default, the command avoids overwriting non-managed files unless HIAI_OPENCODE_EXPORT_MCP_MODE=force is set.
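The safe-overwrite rule can be sketched as follows. The _managed_by marker field is an assumption for illustration; the real export format may tag managed files differently:

```typescript
// Only overwrite a .mcp.json the plugin itself wrote, unless force mode is on.
// The "_managed_by" marker is a hypothetical stand-in for the real marker tag.
function mayOverwrite(
  existing: { _managed_by?: string } | null,
  mode: "safe" | "force",
): boolean {
  if (existing === null) return true; // no file yet, always safe to write
  if (mode === "force") return true;  // explicit override
  return existing._managed_by === "hiai-opencode";
}
```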

Use:

hiai-opencode mcp-status
hiai-opencode export-mcp .mcp.json
opencode debug config
opencode mcp list --print-logs --log-level INFO

Core Components And Upstream Projects

hiai-opencode wires these projects and ideas into an OpenCode-friendly setup. Upstream projects remain independent; this table is attribution and orientation, not an ownership claim.

| Component | Upstream | Notes |
|---|---|---|
| OpenCode host/runtime | anomalyco/opencode | plugin host and runtime target |
| Core orchestration influences | code-yeongyu/oh-my-openagent | architectural influence |
| Planning / workflow influences | obra/superpowers | planning, review, and debugging ideas |
| Specialist / platform influences | vtemian/micode | platform-style specialist behavior |
| Agent skill ecosystem | addyosmani/agent-skills | tactical workflow skill ideas |
| Optional external plugin | Opencode-DCP/opencode-dynamic-context-pruning | installed separately |
| MemPalace | MemPalace/mempalace | external MCP/runtime |
| Playwright MCP | microsoft/playwright-mcp | external MCP |
| Sequential Thinking | modelcontextprotocol/servers | external MCP |
| Firecrawl MCP | firecrawl-ai/firecrawl-mcp-server | external MCP |
| Context7 MCP | upstash/context7-mcp | external MCP |
| bun-pty / PTY ecosystem | shekohex/opencode-pty | PTY/runtime integration influence |

Build And Publish

Build:

bun run build

Typecheck:

bun run typecheck

Prompt snapshots:

bun run prompts:measure

Before publishing:

  1. run bun run build
  2. run npm pack --dry-run
  3. verify the result with opencode debug config
  4. run hiai-opencode export-mcp .mcp.json if you need static mcp list visibility

Publish:

npm publish --access public

User install after publish:

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@hiai-gg/hiai-opencode"]
}

Documentation Map

  • AGENTS.md: instructions for autonomous agents or tooling that need to install or modify the plugin
  • ARCHITECTURE.md: runtime wiring, prompting layers, and modification map
  • LICENSE.md: licensing and attribution