# oh-my-opencode-slim
v1.0.7
Lightweight agent orchestration plugin for OpenCode - a slimmed-down fork of oh-my-opencode
## What's This Plugin?
oh-my-opencode-slim is an agent orchestration plugin for OpenCode. It includes a built-in team of specialized agents that can scout a codebase, look up fresh documentation, review architecture, handle UI work, and execute well-scoped implementation tasks under one orchestrator.
The main idea is simple: instead of forcing one model to do everything, the plugin routes each part of the job to the agent best suited for it, balancing quality, speed and cost.
To explore the agents themselves, see Meet the Pantheon. For the full feature set, see Features & Workflows below.
## Quick Start
Copy and paste this prompt into your LLM agent (Claude Code, AmpCode, Cursor, etc.):

```
Install and configure oh-my-opencode-slim: https://raw.githubusercontent.com/alvinunreal/oh-my-opencode-slim/refs/heads/master/README.md
```

### Manual Installation

```sh
bunx oh-my-opencode-slim@latest install
```

The installer also registers the companion TUI plugin in OpenCode's
`tui.json`, which adds a small sidebar showing specialist-agent status plus
active/reusable task sessions. For manual setups, add oh-my-opencode-slim to
the plugin array in both `opencode.json` and `tui.json`.
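As a sketch, a fully manual registration is just the plugin name in each config file's plugin array (the key name below follows OpenCode's config conventions; verify it against your local schema):

```jsonc
// ~/.config/opencode/opencode.json — and the same entry in tui.json
{
  "plugin": ["oh-my-opencode-slim"]
}
```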
## Getting Started
The installer generates both OpenAI and OpenCode Go presets, with OpenAI active by default. The OpenAI preset uses `openai/gpt-5.5` for the higher-judgment agents and `openai/gpt-5.4-mini` for the faster, scoped agents. To make OpenCode Go the active preset during install, run `bunx oh-my-opencode-slim@latest install --preset=opencode-go`, or change the default preset name in `~/.config/opencode/oh-my-opencode-slim.json` after installation.
Then:

1. Log in to the providers you want to use if you haven't already:

   ```sh
   opencode auth login
   ```

2. Refresh and list the models OpenCode can see:

   ```sh
   opencode models --refresh
   ```

3. Open your plugin config at `~/.config/opencode/oh-my-opencode-slim.json`

4. Update the models you want for each agent
> [!TIP]
> It's recommended to understand how automatic delegation works. The Orchestrator prompt contains the delegation rules, specialist routing logic, and the thresholds for when the main agent should hand work off to subagents. You can always delegate manually by calling a subagent via:
>
> `@agentName <task>`
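As a concrete (purely illustrative) example, a manual hand-off that bypasses the Orchestrator's routing could look like:

```
@explorer map out where configuration files are loaded in this repo
```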
The default generated configuration includes both `openai` and `opencode-go` presets:

```json
{
  "$schema": "https://unpkg.com/oh-my-opencode-slim@latest/oh-my-opencode-slim.schema.json",
  "preset": "openai",
  "presets": {
    "openai": {
      "orchestrator": { "model": "openai/gpt-5.5", "skills": ["*"], "mcps": ["*", "!context7"] },
      "oracle": { "model": "openai/gpt-5.5", "variant": "high", "skills": ["simplify"], "mcps": [] },
      "librarian": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
      "explorer": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": [] },
      "designer": { "model": "openai/gpt-5.4-mini", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
      "fixer": { "model": "openai/gpt-5.4-mini", "variant": "low", "skills": [], "mcps": [] }
    },
    "opencode-go": {
      "orchestrator": { "model": "opencode-go/glm-5.1", "skills": ["*"], "mcps": ["*", "!context7"] },
      "oracle": { "model": "opencode-go/deepseek-v4-pro", "variant": "max", "skills": ["simplify"], "mcps": [] },
      "council": { "model": "opencode-go/deepseek-v4-pro", "variant": "high", "skills": [], "mcps": [] },
      "librarian": { "model": "opencode-go/minimax-m2.7", "skills": [], "mcps": ["websearch", "context7", "grep_app"] },
      "explorer": { "model": "opencode-go/minimax-m2.7", "skills": [], "mcps": [] },
      "designer": { "model": "opencode-go/kimi-k2.6", "variant": "medium", "skills": ["agent-browser"], "mcps": [] },
      "fixer": { "model": "opencode-go/deepseek-v4-flash", "variant": "high", "skills": [], "mcps": [] }
    }
  }
}
```

### For Alternative Providers
To use custom providers or a mixed-provider setup, see Configuration for the full reference. If you want a ready-made starting point, check the Author's Preset and the $30 Preset; the $30 Preset is the recommended budget setup.
The configuration guide also covers custom subagents via `agents.<name>`, where
you can define both a normal `prompt` and an `orchestratorPrompt` block for
delegation.
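A custom subagent entry might look like the sketch below. Only the `agents.<name>`, `prompt`, and `orchestratorPrompt` shapes come from the description above; the agent name, model, and prompt text are illustrative, so check the Configuration reference for the exact schema:

```jsonc
{
  "agents": {
    "reviewer": {
      "model": "openai/gpt-5.4-mini",
      "prompt": "You are a meticulous code reviewer. Flag correctness and security issues first.",
      "orchestratorPrompt": "Delegate to @reviewer when a change set needs a review pass before merging."
    }
  }
}
```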
For model suggestions, see the Recommended Models listed under each agent below.
## ✅ Verify Your Setup
After installation and authentication, verify all agents are configured and responding:
```sh
opencode
```

Then run:

```
ping all agents
```

If any agent fails to respond, check your provider authentication and config file.
## 🏛️ Meet the Pantheon
### 01. Orchestrator: The Embodiment Of Order

### 02. Explorer: The Eternal Wanderer

### 03. Oracle: The Guardian of Paths

### 04. Council: The Chorus of Minds
> [!NOTE]
> Why doesn't Orchestrator auto-call Council more often? This is intentional. Council runs multiple models at once, so automatic delegation is kept strict because it is usually the highest-cost path in the system. In practice, Council is meant to be used manually when you want it, for example: `@council compare these two architectures`.
### 05. Librarian: The Weaver of Knowledge

### 06. Designer: The Guardian of Aesthetics

### 07. Fixer: The Last Builder
### Optional Agents

#### Observer: The Silent Witness
> [!NOTE]
> Why a separate agent? If your Orchestrator model is not multimodal, enable Observer to handle images, screenshots, PDFs, and other visual files. Observer is disabled by default and gives the Orchestrator a dedicated multimodal reader without forcing you to change your main reasoning model. Set `disabled_agents: []` and an `observer` model in your configuration. The bundled `opencode-go` install preset does this automatically because its GLM Orchestrator is not multimodal.
Read-only visual analysis — interprets images, screenshots, PDFs, and diagrams. Returns structured observations to the orchestrator without loading raw file bytes into the main context window.
- Images, screenshots, diagrams → `read` tool (native image support)
- PDFs and binary documents → `read` tool (text + structure extraction)
- Disabled by default — enable with `"disabled_agents": []` and configure a vision-capable model; installing with `--preset=opencode-go` enables it with `opencode-go/kimi-k2.6`
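Putting that together, enabling Observer by hand in `~/.config/opencode/oh-my-opencode-slim.json` might look like the sketch below. This assumes Observer is configured like the other per-preset agents; the model choice mirrors the opencode-go default, but any vision-capable model should work:

```jsonc
{
  "preset": "openai",
  "disabled_agents": [],
  "presets": {
    "openai": {
      "observer": { "model": "opencode-go/kimi-k2.6", "skills": [], "mcps": [] }
    }
  }
}
```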
## 📚 Documentation
Use this section as a map: start with installation, then jump to features, configuration, or example presets depending on what you need.
### 🚀 Start Here
| Doc | What it covers |
|-----|----------------|
| Installation Guide | Install the plugin, use CLI flags, reset config, and troubleshoot setup |
### ✨ Features & Workflows
| Doc | What it covers |
|-----|----------------|
| Council | Run multiple models in parallel and synthesize a single answer with @council |
| Multiplexer Integration | Watch agents work live in Tmux or Zellij panes |
| Session Management | Reuse recent child-agent sessions with short aliases instead of starting over |
| Todo Continuation | Auto-continue orchestrator sessions with cooldowns and safety checks |
| Preset Switching | Switch agent model presets at runtime with /preset |
| Codemap | Generate hierarchical codemaps to understand large codebases faster |
| Interview | Turn rough ideas into a structured markdown spec through a browser-based Q&A flow |
| Divoom Display | Mirror orchestrator and specialist-agent activity to a Divoom MiniToo Bluetooth display |
### ⚙️ Config & Reference
| Doc | What it covers |
|-----|----------------|
| Configuration | Config file locations, JSONC support, prompt overrides, and full option reference |
| Maintainer Guide | Issue triage rules, label meanings, support routing, and repo maintenance workflow |
| Skills | Built-in and recommended skills such as simplify, agent-browser, and codemap |
| MCPs | websearch, context7, grep_app, and how MCP permissions work per agent |
| Tools | Built-in tool capabilities like webfetch, LSP tools, code search, and formatters |
### 💡 Presets
| Doc | What it covers |
|-----|----------------|
| Author's Preset | The author's daily mixed-provider setup |
| $30 Preset | A budget mixed-provider setup for around $30/month |
| OpenCode Go Preset | The bundled opencode-go preset generated by the installer |
## 🏛️ Contributors

## 📄 License
MIT
