@corbat-tech/coco
v2.28.5
Autonomous Coding Agent with Self-Review, Quality Convergence, and Production-Ready Output
Install · Quick Start · What Coco Does · Providers · Documentation
What Is Coco?
Coco is a CLI coding agent for real projects. It can plan work, edit files, run tools and tests, and iterate until a quality threshold is reached.
Core idea: instead of a single "here is some code" response, Coco runs an implementation loop with validation and fixes.
Best fit:
- Teams and solo developers working on existing repos (not only greenfield demos).
- Workflows that require multi-step execution and verification, not just text generation.
What Coco Does
- Multi-step execution in one run: explore -> implement -> test -> refine.
- Quality mode with convergence scoring (configurable threshold and max iterations).
- Native tool use: files, git, shell, search/web, review, diff, build/test, MCP servers.
- Multi-provider support (API, subscription, and local models).
- Session-oriented REPL with slash commands, context compaction, and resumable workflows.
- Reliability features for long sessions: provider retry/circuit-breaker, robust tool-call parsing, and safer stream error handling.
- Replay harness support to reproduce agent-loop behaviors from fixtures for regression testing.
Coco is designed to be useful on medium and large repos, not only toy examples.
Install
Prerequisites

- Node.js 22+
- macOS or Linux
- Windows: use WSL2

Check your Node version:

node --version

Global install

npm install -g @corbat-tech/coco
# or
pnpm add -g @corbat-tech/coco

Verify:

coco --version

Quick Start
# Example with Anthropic
export ANTHROPIC_API_KEY="..."
# Start interactive mode
coco
# Or run a direct task
coco "Add JWT auth to this API with tests"

On first run, Coco guides provider/model setup.
Typical Workflow
1. You give a task.
2. Coco proposes or derives a plan.
3. Coco edits code and runs tools/tests.
4. In quality mode, Coco scores the output and iterates on weak points.
5. Coco returns a summary plus diffs/results.
Quality mode is configurable and can be turned on/off per session.
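The score-and-iterate cycle in quality mode can be pictured as a small convergence loop. The sketch below is illustrative only: the function names and shapes are assumptions, not Coco's actual API; only the `minScore`/`maxIterations` knobs come from Coco's documented configuration.

```typescript
// Hypothetical sketch of a quality-convergence loop (not Coco's internal API).
type Review = { score: number; issues: string[] };

function converge(
  implement: (issues: string[]) => string, // edit code, run tools/tests
  review: (code: string) => Review,        // self-review: score + weak points
  minScore = 88,
  maxIterations = 8,
): { code: string; score: number; iterations: number } {
  let code = "";
  let issues: string[] = [];
  let score = 0;
  let iterations = 0;
  while (iterations < maxIterations) {
    code = implement(issues);   // next attempt, informed by prior weak points
    const result = review(code);
    score = result.score;
    issues = result.issues;
    iterations++;
    if (score >= minScore) break; // quality threshold reached: stop iterating
  }
  return { code, score, iterations };
}
```

The loop stops early once the score clears the threshold, so well-tested tasks typically use far fewer than `maxIterations` passes.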
Reliability and Quality
- Provider calls use retry and circuit-breaker protection by default (can be disabled with `COCO_PROVIDER_RESILIENCE=0`).
- Tool-call handling is normalized across OpenAI/Codex-style streaming events to reduce malformed-argument regressions.
- Agent turns include quality telemetry (score, iteration usage, tool success/failure, repeated-output suppression).
- Repeated identical tool outputs are suppressed in context to reduce token waste in multi-iteration loops.
- The agent loop recovers from common "silent stop" cases (e.g. `tool_use` without reconstructed tool calls, empty `max_tokens` turns, short planning-only replies) before giving control back.
- Streaming turns retry once on empty retryable provider failures before surfacing an error, reducing transient dead-end turns without re-running partial tool work.
- Recovery replay also covers multimodal prompts (image + text / image-only) by rebuilding a retryable task prompt when possible.
- The iteration budget can auto-extend while the task is still making real progress, reducing manual `continue` prompts.
- Automatic provider switching is opt-in via `agent.enableAutoSwitchProvider` (default: `false`).
- Plan mode has a strict read-only allowlist by default, so `/plan` cannot drift into write-capable tools unless you explicitly disable `agent.planModeStrict`.
- `/doctor` provides a read-only local diagnostics pass for project access, config parsing, provider auth, hooks, and tool registry health.
- Release readiness can be gated with `pnpm check:release` (typecheck + lint + stable provider/agent suites).
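The retry and circuit-breaker protection described above follows a well-known pattern: retry transient failures, but stop calling a provider once it fails repeatedly. The sketch below illustrates the pattern only; it is not Coco's implementation, and every name in it is an assumption.

```typescript
// Illustrative retry + circuit-breaker wrapper (not Coco's actual code).
class CircuitBreaker {
  private failures = 0;
  constructor(private threshold = 3) {}
  get open(): boolean {
    return this.failures >= this.threshold;
  }
  record(ok: boolean): void {
    this.failures = ok ? 0 : this.failures + 1;
  }
}

async function callWithResilience<T>(
  call: () => Promise<T>,
  breaker: CircuitBreaker,
  retries = 2,
): Promise<T> {
  if (breaker.open) throw new Error("circuit open: skipping provider call");
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const result = await call();
      breaker.record(true); // a success resets the failure count
      return result;
    } catch (err) {
      breaker.record(false); // each failure moves the breaker toward open
      lastError = err;
    }
  }
  throw lastError;
}
```

The breaker keeps one bad provider from stalling a long session: once it opens, calls fail fast instead of burning the retry budget on every turn.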
Commands (REPL)
Common commands:
| Command | Description |
|---------|-------------|
| `/help` | Show available commands and skills |
| `/provider` | Switch provider |
| `/model` | Switch model |
| `/quality [on\|off]` | Toggle convergence mode |
| `/check` | Run checks in project context |
| `/review` | Run code review workflow |
| `/diff` | Inspect current changes |
| `/plan` | Explore and design with read-only tools only |
| `/doctor` | Run local diagnostics for config, auth, hooks, and tools |
| `/ship` | Run release-oriented workflow |
| `/permissions` | Inspect/update tool trust |
| `/compact` | Compact session context |
Natural language requests are supported too; commands are optional.
Providers
Coco currently supports these provider IDs:
`anthropic`, `openai`, `copilot`, `gemini`, `kimi`, `kimi-code`, `groq`, `openrouter`, `mistral`, `deepseek`, `together`, `huggingface`, `qwen`, `ollama` (local), `lmstudio` (local)
Notes:
- `openai` supports API-key mode and an OAuth flow (mapped internally when needed).
- `copilot` and subscription-backed providers rely on their own auth flow.
- Local providers run through OpenAI-compatible endpoints.
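For example, Ollama serves an OpenAI-compatible API at http://localhost:11434/v1, so a local request looks like any OpenAI chat call. The model name below is an illustrative assumption:

```typescript
// Shape of an OpenAI-compatible chat request as served by local providers.
// The model name is an example; use whatever model you have pulled locally.
const localRequest = {
  url: "http://localhost:11434/v1/chat/completions", // Ollama's OpenAI-compatible route
  body: {
    model: "llama3.1",
    messages: [{ role: "user", content: "Summarize this repo" }],
  },
};
```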
For setup details and model matrix:
Skills
Skills are instruction files (`SKILL.md`) that Coco injects into its context to follow project-specific conventions or workflows. They activate automatically by context or manually via `/skill-name`.
Where to place skills:
| Location | Scope |
|----------|-------|
| .agents/skills/<skill-name>/SKILL.md | Project — native, highest priority |
| ~/.coco/skills/<skill-name>/SKILL.md | Global — personal, all projects |
By default, Coco also scans compatible global directories from other agents:
~/.agents/skills/, ~/.claude/skills/, ~/.gemini/skills/, ~/.codex/skills/, and ~/.opencode/skills/.
Coco also reads skills from other agents automatically, so you can bring skills you already have:
| Directory | Agent |
|-----------|-------|
| .agents/skills/ | Native (Coco, shared standard) |
| .claude/skills/ | Claude Code |
| .codex/skills/ | Codex CLI |
| .gemini/skills/ | Gemini CLI |
| .opencode/skills/ | OpenCode |
Create your first skill:
coco skills create my-conventions
# → creates .agents/skills/my-conventions/SKILL.md

List all skills (including those imported from other agents):

coco skills list

See the Skills Guide for full documentation.
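A created skill file might look like the example below. The frontmatter fields (`name`, `description`) follow the SKILL.md convention shared with Claude Code skills; treat both the fields and the sample rules as assumptions, not Coco's required schema.

```markdown
---
name: my-conventions
description: Project coding conventions for this repo
---

# My Conventions

- Use named exports; avoid default exports.
- Every new module gets a matching *.test.ts file.
- Run pnpm check before proposing a diff.
```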
MCP Servers
MCP (Model Context Protocol) lets Coco use external tools: GitHub, databases, APIs, web search, and more.
Quick setup — create .mcp.json in your project root:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token" }
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/your/path"]
}
}
}

This format is compatible with Claude Code, Cursor, and Windsurf; if you already have a .mcp.json, Coco reads it automatically.
Check MCP status inside the REPL:
/mcp list — show configured servers
/mcp status — show connected servers and available tools
/mcp health — run health check on all servers

Authenticate with environment variables (recommended; never hardcode tokens):
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
},
"my-api": {
"url": "https://api.example.com/mcp",
"headers": { "Authorization": "Bearer ${MY_API_TOKEN}" }
}
}
}

Set the variables in your shell environment (or in ~/.coco/.env for Coco-managed global secrets).
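The ${VAR} placeholders are resolved from the environment when the config is loaded. A minimal sketch of that substitution (illustrative; not Coco's actual parser):

```typescript
// Expand ${VAR} placeholders in a config string from an environment map.
// Unset variables expand to "" here; real loaders may warn or fail instead.
function expandEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{(\w+)\}/g, (_match, name: string) => env[name] ?? "");
}

// expandEnv("Bearer ${MY_API_TOKEN}", { MY_API_TOKEN: "t0ken" })
// → "Bearer t0ken"
```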
See MCP Guide for full documentation, authentication options, and troubleshooting.
Configuration
Project-level config in .coco.config.json and CLI-level config via coco config.
Example:
{
"name": "my-service",
"language": "typescript",
"quality": {
"minScore": 88,
"maxIterations": 8
}
}

See:
Documentation
Development
git clone https://github.com/corbat-tech/coco
cd coco
pnpm install
pnpm build
pnpm test
pnpm check
pnpm check:release

Tech stack:
- TypeScript (ESM)
- Vitest
- oxlint / oxfmt
- Zod
- Commander
Release gate (pnpm check:release) runs the stable typecheck/lint/provider+agent suites used for release readiness.
Current Scope and Limitations
- CLI-first product; the VS Code extension source is in `vscode-extension/`.
- Quality scores depend on project testability and model/tool quality.
- Provider behavior can vary by endpoint/model generation.
- Some advanced flows require external tooling (git, CI, MCP servers) to be installed/configured.
- No agent can guarantee zero regressions; Coco is designed to reduce risk with verification loops, not to remove it entirely.
Privacy
Coco sends prompts and selected context to the configured provider.
- Coco itself does not claim ownership of your code.
- Provider-side data handling depends on each provider policy.
- Local providers (`ollama`, `lmstudio`) keep inference on your machine.
Contributing
- CONTRIBUTING.md
- Issues and proposals: GitHub Issues
License
MIT © Corbat
