# cap
An orchestrator that turns a project description into shipped code by delegating to specialized AI experts — one question at a time, one commit at a time.
Status: early and evolving. I am the primary user. I dogfood cap on my own projects and rewrite parts of it whenever the flow frustrates me. If you try it and hit rough edges, they are real — please open an issue.
Heads-up: cap is token-hungry. The pipeline spins up many LLM calls per project: intake questions, parallel research agents, 2–3 expert consultations, a 5-member strategy board, planning, parallel plan reviews, plus a per-task execution loop with TDD. A full greenfield or medium-project run end-to-end (`cap start` → `execute` → `review`) consumes a correspondingly large token budget, so plan for it.
## Why cap exists
I kept building the same scaffolding around every AI coding session: a research step, a planning step, a review step, a "did we actually do what we said?" step. Tools like GSD (Get Shit Done), gstack, OmO, Superpowers, and BMAD each nailed part of it. cap is my attempt to collapse the parts I actually use into one opinionated pipeline that works across Claude Code and OpenCode.
The core bet: markdown prompts + a thin JS shell. Experts are markdown files you can read and edit. The JS layer is only there to run the state machine, enforce TDD, and stage atomic commits — nothing more.
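That state machine is small enough to picture as a transition table. A minimal sketch of the idea in TypeScript, with stage names borrowed from the intake pipeline; the types and function here are illustrative, not cap's actual implementation:

```typescript
// Illustrative sketch: a linear pipeline as a data-driven transition table.
// Stage names mirror cap's intake pipeline; everything else is hypothetical.
type Stage =
  | "SPEC_QUESTIONS"
  | "REFINEMENT"
  | "EXPERT_CONSULT"
  | "RESEARCH_1"
  | "REVIEW_SPEC"
  | "PLANNING"
  | "READY";

const NEXT: Record<Stage, Stage | null> = {
  SPEC_QUESTIONS: "REFINEMENT",
  REFINEMENT: "EXPERT_CONSULT",
  EXPERT_CONSULT: "RESEARCH_1",
  RESEARCH_1: "REVIEW_SPEC",
  REVIEW_SPEC: "PLANNING",
  PLANNING: "READY",
  READY: null, // terminal: nothing left to run
};

// Advance one step, or return null when the pipeline is done.
function advance(stage: Stage): Stage | null {
  return NEXT[stage];
}
```

Keeping transitions in data rather than control flow is part of what lets the shell stay thin: a resumable state file only needs to record which stage it stopped at.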
## Install
```sh
npm install -g cap-agentic
# or one-shot:
npx cap-agentic init
```

Requires Node 20+. Verify with `cap doctor`.
## 60-second tour
```sh
cd my-project
cap init      # writes .cap/, registers slash commands
cap start     # interactive intake → research → plan
cap execute 1 # run phase 1 with TDD and atomic per-task commits
cap review 1  # full expert council reviews the delivered work
```

Every `cap <verb>` is also a `/cap-<verb>` slash command inside Claude Code or OpenCode. Pick whichever surface fits the moment.
If you want it to run without asking you anything:
```sh
cap maestro "what you're building"
```

Autonomous mode resolves each decision via an expert autoresolve prompt and only stops on safety conditions (contradictory spec, blocked board verdict, explicit pause).
## The pipeline in one picture
```
cap start                              cap execute N            cap review N
─────────                              ─────────────            ────────────
BROWNFIELD_SCAN                        Per task:                Council of 18+ experts:
    ↓                                    manifest check           Strategy → Design →
SPEC_QUESTIONS (Phase A intake)          TDD gate                 Engineering → Quality
    ↓                                    dispatch                   ↓
REFINEMENT (gap-loop, strategy           test run                 Synthesis verdict:
  board for bigger scope)                auto-commit              GO / CONDITIONAL / BLOCK
    ↓                                      ↓
EXPERT_CONSULT (2–3 domain experts)    Phase debate (opt-in)
    ↓                                      ↓
RESEARCH_1 (4 parallel agents)         Human verification
    ↓                                      ↓
REVIEW_SPEC                            Phase complete
    ↓
DETAIL_QUESTIONS (Phase B intake)
    ↓
RESEARCH_2 (4 parallel agents)
    ↓
PLANNING (writes PLAN.md + plan.json per phase)
    ↓
EXPERT_REVIEW (council on plan, not code)
    ↓
READY
```

Not every project hits every stage. The pipeline is scale-adaptive:
| Scale | What runs |
|---|---|
| bug-fix | SPEC_QUESTIONS → REVIEW_SPEC → PLANNING → EXPERT_REVIEW |
| small-feature | adds REFINEMENT, EXPERT_CONSULT, RESEARCH_1, DETAIL_QUESTIONS |
| medium-project | full pipeline |
| greenfield | full pipeline, BROWNFIELD_SCAN skipped (no code yet) |
The orchestrator proposes a scale at intake and you can override it.
For the stage-by-stage walkthrough, see docs/FLOW.md.
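The scale table reads as a simple lookup. A sketch of that mapping, assuming "adds" means appended in pipeline order; the structure and names below are illustrative, not cap's code:

```typescript
// Hypothetical lookup from proposed scale to the intake stages that run.
// Stage names come from the pipeline diagram; the shape is a sketch.
const FULL = [
  "BROWNFIELD_SCAN", "SPEC_QUESTIONS", "REFINEMENT", "EXPERT_CONSULT",
  "RESEARCH_1", "REVIEW_SPEC", "DETAIL_QUESTIONS", "RESEARCH_2",
  "PLANNING", "EXPERT_REVIEW",
];

const STAGES_BY_SCALE: Record<string, string[]> = {
  "bug-fix": ["SPEC_QUESTIONS", "REVIEW_SPEC", "PLANNING", "EXPERT_REVIEW"],
  // bug-fix stages plus refinement, consult, research, and detail intake
  "small-feature": [
    "SPEC_QUESTIONS", "REFINEMENT", "EXPERT_CONSULT", "RESEARCH_1",
    "REVIEW_SPEC", "DETAIL_QUESTIONS", "PLANNING", "EXPERT_REVIEW",
  ],
  "medium-project": FULL,
  // greenfield: full pipeline minus the scan, since there is no code to scan
  "greenfield": FULL.filter((s) => s !== "BROWNFIELD_SCAN"),
};
```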
## What makes cap different
**Experts, not agents.** Each expert is a markdown file with a crisp domain, explicit anti-patterns, and an input/output contract. The orchestrator invokes them — you can read and edit the same file they're reading. See EXPERTS.md for the full roster.

**One question at a time.** No walls of text. The intake is a conversation: ask, wait, record, next.

**Atomic commits.** `cap execute` commits exactly the files a task touched. It diffs `git status` before and after each task, so unrelated work you had in progress is never swept into the commit.
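That before/after diff boils down to a set comparison over `git status --porcelain` output. A sketch of the idea; the function name and the simplified "touched" rule are mine, not cap's actual implementation:

```typescript
// Hypothetical sketch: given `git status --porcelain` output captured
// before and after a task, return the files whose status entry is new
// or changed. (Simplification: a real implementation would also compare
// content, since a file can change without its status code changing.)
function touchedFiles(before: string, after: string): string[] {
  // Porcelain lines look like "XY path", so status is chars 0-1, path starts at 3.
  const parse = (out: string) =>
    new Map(
      out
        .split("\n")
        .filter((line) => line.trim() !== "")
        .map((line) => [line.slice(3), line.slice(0, 2)] as const),
    );
  const prev = parse(before);
  const curr = parse(after);
  return [...curr.entries()]
    .filter(([file, status]) => prev.get(file) !== status)
    .map(([file]) => file);
}
```

For example, `touchedFiles(" M app.ts", " M app.ts\n?? new.ts")` flags only `new.ts`: the pre-existing modification to `app.ts` is left out of the commit.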
**Gates everywhere, trust when you want.** Every pipeline stage has a gate you can set to `interactive`, `auto`, or `skip`. `--trust` on `cap start` flips everything to auto for that session. `cap maestro` goes further — fully autonomous, with safety stops.

**Markdown-first.** All prompts, plans, requirements, and artifacts live as `.md` files under `.cap/`. Git-diffable, hand-editable, no database.

**Adapter-agnostic.** The same pipeline runs under Claude Code or OpenCode. Codex support is planned for v2.
## Documentation
| File | Purpose |
|---|---|
| docs/FLOW.md | Stage-by-stage walkthrough of intake, research, planning, execution, and review |
| docs/COMMANDS.md | Reference for every CLI command, flag, and slash command |
| docs/ARCHITECTURE.md | How the thin JS layer, adapters, and state files fit together |
| docs/PHILOSOPHY.md | Design principles and why cap looks the way it does |
| EXPERTS.md | The 19 expert roles, their domains, and the review pipeline order |
| CHANGELOG.md | Release notes |
## Project layout after `cap init`
```
.cap/
├── PROJECT.md            project vision, constraints, decisions
├── ROADMAP.md            phase roadmap with confidence grades
├── REQUIREMENTS.md       requirements with semantic IDs (FUNC-, SEC-, PERF-)
├── installed/experts/    19 expert prompt files (read-only)
└── runtime/
    ├── state.json        global project state
    ├── config.json       gates, model profile, CFO budget
    ├── spec.json         intake checkpoint (Phase A + B answers)
    ├── budget.json       CFO token tracking
    ├── phases/NN/        per-phase PLAN.md, plan.json, RESEARCH.md, state.json
    ├── research/         research agent outputs
    ├── consultations/    saved expert consultations
    └── logs/             raw intake logs
```

See docs/ARCHITECTURE.md for what each file contains and how they're produced.
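To give a feel for the gate settings, `runtime/config.json` might look roughly like this. Every field name below is an illustrative guess based on the descriptions above, not a documented schema — check docs/ARCHITECTURE.md for the real one:

```json
{
  "gates": {
    "SPEC_QUESTIONS": "interactive",
    "RESEARCH_1": "auto",
    "EXPERT_REVIEW": "interactive"
  },
  "modelProfile": "default"
}
```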
## Contributing
cap is open source under MIT. I develop it in the open and welcome issues, PRs, and honest "this flow confused me" feedback — that kind of friction report is genuinely the most useful thing.
```sh
git clone https://github.com/alessiopcc/cap
cd cap
npm install
npx vitest run                   # ~1000 tests
npx tsup                         # build
npx tsx src/cli/index.ts --help  # run from source
```

The JS layer intentionally stays thin. If you're adding a feature, ask first: can this be an expert prompt instead of JS code?
## License
MIT
