@namewta/specforge
v0.0.11
SpecForge
AI-native spec-driven development workflow CLI — a synthesis of lessons from OpenSpec, gstack, superpowers, claude-task-master and Anthropic skills, re-forged into a single local CLI plus a repeatable workflow template.
Languages: English · 简体中文
Heritage: Built on the Shoulders of Five Projects
SpecForge does not reinvent spec-driven development — it internalizes and fuses the most battle-tested patterns from the open-source ecosystem into one coherent toolchain. SpecForge absorbs the methodology, not the implementation, from each project listed below.
| Source Project | What SpecForge Adopts |
|---|---|
| OpenSpec (Fission-AI) | Dual-directory model (.specforge/ framework assets + specforge/ user assets), artifact DAG (BLOCKED / READY / DONE), Profile system (minimal / standard / custom), dual-track surface of Commands + Skills |
| gstack (garrytan) | Preamble bootstrapping system (inline <!-- preamble:bash --> blocks), multi-perspective plan review, session-aware context collection |
| superpowers (obra) | Iron Laws hard gates, skill chaining / invocation, sub-agent-driven implementation, anti-evasion language, stress-test discipline |
| claude-task-master (eyaltoledano) | PRD → task decomposition pipeline, Zod-validated schemas, complexity analysis, structured response contracts |
| Anthropic skills | Progressive disclosure (L1 frontmatter → L2 body → L3 references/), skill-creator methodology, benchmark-driven authoring |
| flow-kit (rihebty) | Brownfield five-guard system (entry scan, architecture alignment, read/write boundary, pre-commit reconciliation, existing abstraction grep), context-reset protocol with PROGRESS artifact, three-tier project documentation (rules / structure / LESSONS), L3 load budget (≤ 150 lines), v0 draft gate, LESSONS nomination with L-NNN format, token cost transparency |
Also drawing on spec-kit for the constitution / extension-hooks pattern and on grill-me for multi-perspective interrogation.
SpecForge's job is to keep the good parts — artifact gating, progressive loading, profile tailoring, sub-agent hand-offs — and unify them behind one CLI so you get the benefits without adopting five separate tools.
What It Solves
When working with AI on real codebases, the friction is almost never "the model can't write it." It's:
- Blurred phase boundaries — requirements, design, implementation, QA and release collapse into one chat; agents skip steps
- Context bloat — every rule, style guide and SOP gets injected at once; hit rate drops, cost explodes
- No compounding memory — each project re-dictates the same team conventions from zero
- Fragmented tooling — Claude, Cursor, Kiro, Codex all want prompts in different shapes
SpecForge pins all of this to the filesystem. It generates .specforge/ (framework assets) and specforge/ (user assets) in your repo, encodes the 8-phase lifecycle, commands, skills, artifact dependencies and extension hooks as plain files, then lets AI agents advance through the workflow — while humans stay auditable, editable and reversible at every step.
Core Design
Dual-Directory Model (from OpenSpec)
.specforge/ # framework assets — regeneratable by `specforge update`
├── commands/ # workflow + tool commands
├── skills/ # 7 skill categories
├── templates/ # artifact templates (DESIGN.md / TASKS.md / PROGRESS.md)
└── config.yaml # framework-level machine source (context / rules / errors / handoffs / hooks)
specforge/ # user assets — source of truth, never auto-overwritten
├── config.yaml # project-level overrides / additions
├── spec/ # current specifications
├── brainstorming/ # brainstorm artifacts
├── context/ # three-tier docs (context.md / architecture.md / lessons.md)
├── changes/ # active changes
└── archive/ # completed changes

8-Phase Lifecycle

foundation → requirements → design → planning → implementation → quality → release → evolution

- Each phase owns a workflow-command and a canonical artifact
- Phases are connected by an artifact DAG; missing prerequisites are blocked by a hard gate (specforge status --check-requires)
- Operations semantics (runbook / monitoring / rollback) are folded into release rather than a separate stage
Progressive Disclosure (from Anthropic skills)
| Level | Loaded When | Content | Budget |
|-------|-------------|---------|--------|
| L1 Always | Always | frontmatter (name / type / description) | description ≤ 200 chars |
| L2 On Trigger | Trigger keywords match | command / skill body | ≤ 500 lines |
| L3 On Demand | Explicit reference | references/, scripts/, templates/ | Must be linked from L2 |
Violations are caught by specforge doctor --check-disclosure.
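As an illustration of these budgets, the L1 and L2 rules boil down to two measurements. This is a hedged sketch, not the actual doctor implementation; the type and function names here are invented:

```typescript
// Illustrative sketch of the L1 / L2 disclosure budgets enforced by
// `specforge doctor --check-disclosure` (the real check may differ).
interface DisclosureIssue {
  level: "L1" | "L2";
  message: string;
}

function checkDisclosureBudgets(description: string, body: string): DisclosureIssue[] {
  const issues: DisclosureIssue[] = [];
  // L1: the frontmatter description is always loaded, so it must stay ≤ 200 chars.
  if (description.length > 200) {
    issues.push({ level: "L1", message: `description is ${description.length} chars (budget: 200)` });
  }
  // L2: the command / skill body loads on trigger match, so it must stay ≤ 500 lines.
  const lineCount = body.split("\n").length;
  if (lineCount > 500) {
    issues.push({ level: "L2", message: `body is ${lineCount} lines (budget: 500)` });
  }
  return issues;
}
```

L3 content is not measured here because it is only loaded on explicit reference; the check for it is structural (it must be linked from L2).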
Profile System (from OpenSpec)
| Profile | Enabled Phases | Use Case |
|---------|----------------|----------|
| minimal | foundation, requirements, implementation, quality, release (5) | Rapid prototyping / POC |
| standard | All 8 phases (default) | Production projects |
| custom | User-declared enabledPhases | Tailored combinations |
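For example, a custom profile could be declared in specforge/config.yaml along these lines. The exact YAML shape is an assumption; only the enabledPhases key and the phase names come from the documentation above:

```yaml
# specforge/config.yaml — hypothetical sketch; exact keys may differ
profile: custom
enabledPhases:
  - foundation
  - requirements
  - design
  - implementation
  - quality
  - release
```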
Quick Start
Requirements
- Node.js ≥ 24.14.1
- pnpm recommended (npm / yarn also work)
Install
# global
npm install -g @namewta/specforge
# or
pnpm add -g @namewta/specforge
# or run without installing
npx @namewta/specforge --version

Initialize
cd your-project
specforge init
# with a specific profile / project name
specforge init --profile standard --project-name my-app

Result:
your-project/
├── .specforge/ # framework (updatable)
└── specforge/ # user-owned (source of truth)

Advance Through the Lifecycle
Each workflow command lives at .specforge/commands/workflow/<phase>-<verb>/<phase>-<verb>.md. AI agents load it via @.specforge/commands/workflow/foundation-init/foundation-init.md.
# current phase state
specforge status
# artifact DAG
specforge status --graph
# prerequisite check for a phase
specforge status --phase design --check-requires
# list commands / skills (machine-readable)
specforge list --format json
specforge list --skills --triggers=test,qa
# refresh framework assets (user assets untouched)
specforge update

CLI Reference
| Command | Purpose |
|---------|---------|
| specforge init [path] | Bootstrap the dual directory. --profile, --enabled-phases, --project-name, --force |
| specforge add-command | Scaffold a command. --type workflow-command\|tool-command --name <kebab-case> |
| specforge add-skill <name> | Scaffold a skill. --type <domain-rule\|...>, --mode directory\|single-file |
| specforge list | List commands / skills. --commands, --skills, --type, --triggers, --format xml\|json\|text |
| specforge status | Current change's phase state. --phase, --check-requires, --graph, --json |
| specforge update [path] | Refresh .specforge/ (preserves specforge/). --force |
| specforge run-hook | Execute extensions.yaml hooks. --phase --stage before\|after --json |
| specforge profile show | Show current profile. --json |
| specforge profile set <name> | Switch profile, written to specforge/config.yaml. custom requires --enabled-phases |
| specforge doctor | Diagnostics. --check-node, --check-deps, --check-compat, --check-disclosure, --quiet |
Global: --no-color disables color; --version / -V prints version.
Concepts
Commands vs Skills (from OpenSpec's dual-track surface)
- Commands — type ends with -command (workflow-command / tool-command / devflow-command / gitflow-command). Commands are actions: they advance phases.
- Skills — type does not end with -command (domain-rule / code-style / architecture-rule / testing-rule / security-rule / ui-ux-rule / workflow-step). Skills are context: they inject on trigger keywords.
Both share one 5-field frontmatter: name / type / description / version / author.
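For instance, a skill's frontmatter might look like this. The values are illustrative; only the five field names come from the line above:

```yaml
---
name: security-rule-secrets
type: security-rule
description: Flag hard-coded credentials and suggest environment-variable indirection.
version: 0.1.0
author: your-team
---
```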
Artifact DAG
proposal ──► design ──► tasks ──► quality-report ──► archive ──► retrospective
▲
tasks depends on both proposal and design

Three node states: DONE (file exists), READY (all requirements done), BLOCKED (at least one requirement outstanding). The graph detects cycles (three-color DFS) and rejects unknown / duplicate ids.
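The node-state and cycle-detection semantics can be sketched as follows. This is an illustrative model, not SpecForge's actual code; all type and function names are invented:

```typescript
// Illustrative model of the artifact DAG (not SpecForge's implementation).
type NodeId = string;

interface ArtifactNode {
  requires: NodeId[]; // prerequisite artifact ids
  exists: boolean;    // does the artifact file exist on disk?
}

// A Record keyed by id precludes duplicate ids by construction.
type Graph = Record<NodeId, ArtifactNode>;

// Three-color DFS: white = unvisited, gray = on the stack, black = finished.
// A gray→gray edge is a back edge, i.e. a cycle.
function detectCycle(graph: Graph): boolean {
  const color: Record<NodeId, "white" | "gray" | "black"> = {};
  for (const id of Object.keys(graph)) color[id] = "white";
  const visit = (id: NodeId): boolean => {
    color[id] = "gray";
    for (const dep of graph[id].requires) {
      if (!(dep in graph)) throw new Error(`unknown id: ${dep}`);
      if (color[dep] === "gray") return true;
      if (color[dep] === "white" && visit(dep)) return true;
    }
    color[id] = "black";
    return false;
  };
  return Object.keys(graph).some((id) => color[id] === "white" && visit(id));
}

// DONE if the file exists; READY if every prerequisite is DONE; else BLOCKED.
function stateOf(id: NodeId, graph: Graph): "DONE" | "READY" | "BLOCKED" {
  const node = graph[id];
  if (node.exists) return "DONE";
  return node.requires.every((dep) => graph[dep].exists) ? "READY" : "BLOCKED";
}
```

Under this model, once proposal and design both exist, tasks flips from BLOCKED to READY, which is exactly the hard-gate condition that specforge status --check-requires enforces.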
Extension Hooks (from spec-kit)
Declare before_<phase> / after_<phase> hooks in .specforge/extensions.yaml. Workflow commands trigger them via specforge run-hook inside their preamble. Required hooks block on failure; optional: true hooks warn only.
hooks:
  before_release:
    - name: Security audit
      command: pnpm audit
      enabled: true
      optional: true
      timeoutMs: 60000

Preamble (from gstack)
Commands and skills can embed <!-- preamble:bash ... --> blocks. When the agent loads the file it can parse and run the commands on demand to gather context:
<!-- preamble:bash
specforge list --skills --triggers=test,qa --format=json
specforge status --phase=quality --check-requires
specforge doctor --check-deps --quiet
-->

Hard Gates (from superpowers / Iron Laws)
Each phase has executable constraints declared in templates/.specforge/config.yaml under rules.<phase>.hardGates:
- requirements — unapproved proposals cannot enter design
- design — no contracts / error strategy means no planning
- implementation — no production code before tests (TDD)
- quality — no new verification evidence means "done" is disallowed
- release — no runbook, no ship
Error Dictionary
E001_missingPrerequisiteArtifact, E002_unapprovedSolution, E003_contractMissing, E004_noVerificationEvidence, E005_contextOverload — all defined in templates/.specforge/config.yaml under errors so commands and skills can reference stable ids.
Development
pnpm install # install deps
pnpm dev -- init # run source directly via tsx (args after --)
pnpm test # unit + integration tests
pnpm lint # ESLint
pnpm format # Prettier
pnpm build # tsc + shebang injection
pnpm build:check-bin # verify dist/cli/index.js is executable
pnpm check # lint + test + build (also runs as prepublishOnly)

Project layout:
src/
├── cli/ # Commander routes
├── commands/ # command impls (init / add-* / list / status / update / run-hook / profile / doctor / codebase-health / project-inventory)
├── services/ # business services (scaffold / command / skill / listing / status / update / hooks / health / inventory / lessons / design-explore / evolve / implementation)
├── core/ # domain (constants / lifecycle-types / profiles / artifact-graph / hooks / metadata-schema / disclosure-config / task-schema / ...)
├── utils/ # infra (fs / yaml / path / logger / template-renderer)
└── adapters/ # platform adapters (windows-filename-adapter)
templates/ # init templates (shipped with the npm package)
scripts/ # inject-shebang.mjs / verify-bin.mjs
tests/ # unit + integration

Detailed architecture and collaboration rules are documented in AGENTS.md.
Release Pipeline
- GitHub Actions: ci.yml (push / PR) + release.yml (triggered by v* tags)
- Flow: setup → lint → test → build → verify-bin → npm publish --provenance --access public → GitHub Release (softprops/action-gh-release@v2)
- Rule: the package.json version must match the git tag (minus the v prefix)
- Dependabot scans npm and GitHub Actions deps weekly
Token Cost Budget
The ranges below are methodology-level workload estimates, not precise measurements; actual consumption varies with model, session length, codebase size, probe hit rate and other factors.
Scale Tiers
| Change Scale | Lines of Code | Estimated Token Range (one full lifecycle) | Typical Scenario |
|---|---|---|---|
| Small change | < 100 lines | ~20k – 60k tokens | Single-point bugfix / minor feature addition |
| Medium change | 100 – 500 lines | ~80k – 200k tokens | Single-module feature / partial refactor |
| Medium-large change | 500 – 1500 lines | ~250k – 600k tokens | New service + several commands |
| Large milestone | 1500+ lines | 600k+ tokens (consider splitting) | Cross-cutting overhaul (e.g. flow-kit integration) |
When to Use SpecForge
- ✅ The change affects contracts across ≥ 2 modules
- ✅ There are decisions or rules that need to be captured as project-level knowledge
- ✅ The team needs an audit trail (which proposals were rejected and why)
- ✅ A brownfield project is adopting AI collaboration for the first time (run project-inventory first)
- ✅ Cross-phase artifact hand-offs are required (proposal → design → tasks → quality)
When NOT to Use SpecForge
- ❌ One-off typo fixes / formatting / minor dependency bumps
- ❌ Hotfixes requiring ≤ 5 lines of code
- ❌ Throwaway exploratory scripts
- ❌ Time-pressured scenarios where delivery quality is not critical (accept the cost of missing LESSONS)
- ❌ Repetitive tasks with mature SOPs that generate no new knowledge
Six Habits to Save Tokens
- Load inventory.md / context.md first: Avoid re-introducing the project every session — let the agent jump straight into work
- Respect write_files boundaries: Crossing boundaries causes the AI to continuously expand its context window, with costs growing exponentially
- Use v0 drafts: A 500-word directional alignment is far cheaper than tearing down a detailed DESIGN and starting over
- Observe the L3 load budget (≤ 150 lines): if content exceeds the budget, move it to references/ for on-demand loading
- Drop a PROGRESS note before clearing the session: prevents the agent from re-attempting already-eliminated approaches after recovery (triggers E010)
- Run codebase-health regularly: Write dead code and unused dependencies into the no-touch list to reduce accidental AI modifications
Documentation
- AGENTS.md — AI agent collaboration guide (mandatory for contributors using AI)
- CHANGELOGS.md — version history
- README-ZH.md — Chinese version of this README
Acknowledgements
SpecForge stands on the shoulders of:
- OpenSpec by Fission-AI — dual-directory model, artifact DAG, profiles
- gstack by garrytan — preamble bootstrapping, multi-perspective review
- superpowers by obra — Iron Laws, skill chaining, sub-agent-driven development
- claude-task-master by eyaltoledano — PRD → tasks, complexity analysis
- skills by Anthropic — progressive disclosure, skill-creator methodology
- flow-kit by rihebty — Brownfield guards, context-reset protocol, three-tier documentation, token cost transparency
Thanks to the authors of every project listed above — their prior work is the reason this CLI could be built in weeks rather than months.
License
MIT © namewta
