
@zigrivers/scaffold

v2.38.1

Published

AI-powered software project scaffolding pipeline

Downloads

420

Readme

Scaffold

A TypeScript CLI that assembles AI-powered prompts at runtime to guide you from "I have an idea" to working software. Scaffold walks you through 60 structured pipeline steps — organized into 16 phases — plus 7 utility tools, and Claude Code handles the research, planning, and implementation for you.

By the end, you'll have a fully planned, standards-documented, implementation-ready project with working code.

What is Scaffold?

Scaffold is a composable meta-prompt pipeline built for Claude Code, Anthropic's command-line coding tool. If you have an idea for a software project but don't know where to start — or you want to make sure your project is set up with solid architecture, standards, and tests from day one — Scaffold guides you through every step.

Here's how it works:

  1. Initialize — run scaffold init in your project directory. The init wizard detects whether you're starting fresh (greenfield) or working with an existing codebase (brownfield), and lets you pick a methodology preset (deep, mvp, or custom).

  2. Run steps — each step is a composable meta-prompt (a short intent declaration in pipeline/) that gets assembled at runtime into a full 7-section prompt. The assembly engine injects relevant knowledge base entries, project context from prior steps, methodology settings, and depth-appropriate instructions.

  3. Follow the dependency graph — Scaffold tracks which steps are complete, which are eligible, and which are blocked. Run scaffold next to see what's unblocked, or scaffold status for the full picture. Each step produces a specific artifact — a planning document, architecture decision, specification, or actual code.

You can run steps two ways:

  • CLI: scaffold run create-prd — the assembly engine builds a full prompt from the meta-prompt, knowledge base entries, and project context. Best for the structured pipeline with dependency tracking.
  • Slash commands: /scaffold:create-prd in Claude Code — uses pre-rendered, self-contained prompts. Best for quick access to individual commands without the full pipeline ceremony.

Either way, Scaffold constructs the prompt and Claude does the work. The CLI tracks pipeline state and dependencies; slash commands are fire-and-forget.

Key Concepts

Meta-prompts — Each pipeline step is defined as a short .md file in pipeline/ with YAML frontmatter (dependencies, outputs, knowledge entries) and a markdown body describing the step's intent. These are not the prompts Claude sees — they're assembled into full prompts at runtime.
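
For illustration, a meta-prompt file might look like the sketch below. The exact field names and entry names are assumptions based on the description above, not copied from Scaffold's source:

```markdown
---
# Hypothetical frontmatter (field names are illustrative)
dependencies: [create-vision]
outputs: [docs/plan.md]
knowledge-base: [product/prd-writing]
---

Translate the approved vision into a product requirements document:
problem statement, personas, prioritized features, constraints, and
measurable success criteria.
```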

Assembly engine — At execution time, Scaffold builds a 7-section prompt from: system metadata, the meta-prompt, knowledge base entries, project context (artifacts from prior steps), methodology settings, layered instructions, and depth-specific execution guidance.
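
As a rough illustration (not Scaffold's actual code), the assembly step amounts to composing those seven sections in order:

```typescript
// Illustrative sketch of the 7-section prompt assembly described above.
// Section names mirror the list; all values are placeholders.
interface AssemblyInput {
  systemMetadata: string;
  metaPrompt: string;
  knowledgeEntries: string[]; // injected knowledge base entries
  projectContext: string;     // artifacts from prior steps
  methodology: string;
  layeredInstructions: string;
  depthGuidance: string;
}

function assemblePrompt(i: AssemblyInput): string {
  return [
    i.systemMetadata,
    i.metaPrompt,
    i.knowledgeEntries.join("\n\n"),
    i.projectContext,
    i.methodology,
    i.layeredInstructions,
    i.depthGuidance,
  ].join("\n\n---\n\n"); // one section per slot, in a fixed order
}
```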

Knowledge base — 60 domain expertise entries in knowledge/ organized in seven categories (core, product, review, validation, finalization, execution, tools) covering testing strategy, domain modeling, API design, security best practices, eval craft, TDD execution, task claiming, worktree management, release management, and more. These get injected into prompts based on each step's knowledge-base frontmatter field. Knowledge files with a ## Deep Guidance section are optimized for CLI assembly — only the deep guidance content is loaded, avoiding redundancy with the prompt text. Teams can add project-local overrides in .scaffold/knowledge/ that layer on top of the global entries.
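
The override layering can be pictured as a last-writer-wins merge keyed by entry name (an illustrative sketch, not Scaffold's implementation):

```typescript
// Hypothetical sketch: project-local entries from .scaffold/knowledge/
// layer on top of the global knowledge base, winning on name collisions.
function layerKnowledge(
  global: Map<string, string>,
  local: Map<string, string>,
): Map<string, string> {
  // Spreading both maps into one constructor call keeps global entries
  // and lets local entries overwrite any shared keys.
  return new Map([...global, ...local]);
}
```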

Methodology presets — Three built-in presets control which steps run and how deep the analysis goes:

  • deep (depth 5) — all steps enabled, exhaustive analysis
  • mvp (depth 1) — 7 critical steps, get to code fast
  • custom (depth 1-5) — you choose which steps to enable and how deep each one goes

Depth scale (1-5) — Controls how thorough each step's output is, from "focus on the core deliverable" (1) to "explore all angles, tradeoffs, and edge cases" (5). Depth resolves with 4-level precedence: CLI flag > step override > custom default > preset default.
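
That precedence chain can be sketched in a few lines (an illustrative TypeScript sketch, not Scaffold's source):

```typescript
// 4-level depth precedence: CLI flag > step override > custom default
// > preset default. Nullish coalescing falls through in that order.
type Depth = 1 | 2 | 3 | 4 | 5;

function resolveDepth(
  cliFlag: Depth | undefined,       // e.g. a --depth flag
  stepOverride: Depth | undefined,  // per-step setting
  customDefault: Depth | undefined, // custom preset's project-wide default
  presetDefault: Depth,             // deep = 5, mvp = 1
): Depth {
  return cliFlag ?? stepOverride ?? customDefault ?? presetDefault;
}
```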

Multi-model validation — At depth 4-5, all 19 review and validation steps can dispatch independent reviews to Codex and/or Gemini CLIs. Two independent models catch more blind spots than one. When both CLIs are available, findings are reconciled by confidence level (both agree = high confidence, single model P0 = still actionable). Auth is verified before every dispatch (codex login status, NO_BROWSER=true gemini -p "respond with ok"). See the Multi-Model Review section.
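
The reconciliation rule can be sketched as a merge keyed by finding (a hypothetical shape and matching rule; Scaffold's actual finding format is not shown here):

```typescript
// Hypothetical sketch of reconciling findings from two reviewers.
// Matching findings by id is an illustrative assumption.
interface Finding { id: string; severity: "P0" | "P1" | "P2"; }

interface Reconciled { id: string; confidence: "high" | "single-model"; actionable: boolean; }

function reconcile(codex: Finding[], gemini: Finding[]): Reconciled[] {
  const byId = new Map<string, Finding[]>();
  for (const f of [...codex, ...gemini]) {
    byId.set(f.id, [...(byId.get(f.id) ?? []), f]);
  }
  const out: Reconciled[] = [];
  for (const [id, findings] of byId) {
    const confidence = findings.length > 1 ? "high" : "single-model";
    // Both models agree => high confidence; a lone P0 is still actionable.
    const actionable = confidence === "high" || findings.some(f => f.severity === "P0");
    out.push({ id, confidence, actionable });
  }
  return out;
}
```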

State management — Pipeline progress is tracked in .scaffold/state.json with atomic file writes and crash recovery. An advisory lock prevents concurrent runs. Decisions are logged to an append-only decisions.jsonl.
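
A common pattern for such atomic writes — shown here as a generic sketch, not Scaffold's actual code — is to write a temp file and rename it over the target:

```typescript
import { writeFileSync, renameSync, readFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Write the full JSON to a temp file in the same directory, then
// rename over the target. On POSIX filesystems rename() is atomic,
// so a crash mid-write never leaves a half-written state.json.
function saveState(path: string, state: unknown): void {
  const tmp = `${path}.tmp-${process.pid}`;
  writeFileSync(tmp, JSON.stringify(state, null, 2));
  renameSync(tmp, path); // atomic replace of the previous state
}

function loadState(path: string): unknown {
  return JSON.parse(readFileSync(path, "utf8"));
}
```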

Dependency graph — Steps declare their prerequisites in frontmatter. Scaffold builds a DAG, runs topological sort (Kahn's algorithm), detects cycles, and computes which steps are eligible at any point.
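
As a generic sketch of that eligibility computation (illustrative step names, not Scaffold's internals):

```typescript
type Graph = Map<string, string[]>; // step -> its prerequisite steps

// Kahn's algorithm: repeatedly take steps whose prerequisites are all
// satisfied; if anything remains unprocessed, the graph has a cycle.
function topoSort(graph: Graph): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [step, prereqs] of graph) {
    indegree.set(step, prereqs.length);
    for (const p of prereqs) {
      dependents.set(p, [...(dependents.get(p) ?? []), step]);
    }
  }
  const queue = [...indegree].filter(([, d]) => d === 0).map(([s]) => s);
  const order: string[] = [];
  while (queue.length) {
    const step = queue.shift()!;
    order.push(step);
    for (const dep of dependents.get(step) ?? []) {
      const d = indegree.get(dep)! - 1;
      indegree.set(dep, d);
      if (d === 0) queue.push(dep); // all prerequisites met: now eligible
    }
  }
  if (order.length !== graph.size) throw new Error("cycle detected in pipeline");
  return order;
}
```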

Prerequisites

Required

Node.js (v18 or later)

  • Install: https://nodejs.org or brew install node
  • Verify: node --version

Git

  • Install: https://git-scm.com or brew install git
  • Verify: git --version

Claude Code

The AI coding assistant that runs the assembled prompts. Claude Code is a command-line tool from Anthropic.

  • Install: npm install -g @anthropic-ai/claude-code
  • Verify: claude --version
  • Docs: https://docs.anthropic.com/en/docs/claude-code

Optional

Codex CLI (for multi-model review)

Independent code review from a different AI model. Used at depth 4-5 by all review steps.

  • Install: npm install -g @openai/codex
  • Requires: ChatGPT subscription (Plus/Pro/Team)
  • Verify: codex --version

Gemini CLI (for multi-model review)

Independent review from Google's model. Can run alongside or instead of Codex.

  • Install: npm install -g @google/gemini-cli
  • Requires: Google account (free tier available)
  • Verify: gemini --version

Playwright MCP (web apps only)

Lets Claude control a real browser for visual testing and screenshots.

  • Install: claude mcp add playwright npx @playwright/mcp@latest

Installation

Scaffold has two parts that install separately:

  • CLI (scaffold) — the core tool. Install via npm or Homebrew. Use it from your terminal or from Claude Code with ! scaffold run <step>.
  • Plugin (/scaffold:) — optional slash commands for Claude Code. Lets you type /scaffold:create-prd instead of ! scaffold run create-prd.

Step 1: Install the CLI

Pick one:

npm (recommended)

npm install -g @zigrivers/scaffold

Homebrew

brew tap zigrivers/scaffold
brew install scaffold

Verify: scaffold version

Step 2: Add the plugin (recommended)

Install the Scaffold plugin inside Claude Code for slash commands AND the interactive runner skill:

/plugin marketplace add zigrivers/scaffold
/plugin install scaffold@zigrivers-scaffold

This gives you:

  • Slash commands (/scaffold:create-prd, /scaffold:tdd, etc.) — quick access to any pipeline step
  • Scaffold Runner skill — intelligent interactive wrapper that surfaces decision points (depth level, strictness, optional sections) before execution instead of letting Claude pick defaults silently
  • Pipeline reference skill — shows pipeline ordering, dependencies, and completion status
  • Multi-model dispatch skill — correct invocation patterns for Codex and Gemini CLIs

Usage — just tell Claude Code what you want in natural language:

"Run the next scaffold step"          → previews prompt, asks decisions, executes
"Run scaffold create-prd"             → same for a specific step
"Where am I in the pipeline?"         → shows progress and next eligible steps
"What's left?"                        → compact view of remaining steps only
"Skip design-system and add-e2e-testing"  → batch skip with reason
"Is add-e2e-testing applicable?"      → checks platform detection without running
"Use depth 3 for everything"          → remembers preference for the session

The plugin is optional — everything it does can also be done with scaffold run <step> from the CLI. But you lose the interactive decision surfacing without the Scaffold Runner skill.

CLI-only users: If you prefer not to install the plugin, add skills with one command:

scaffold skill install

This copies the Scaffold Runner, Pipeline Reference, and Multi-Model Dispatch skills to .claude/skills/ in your project.

Updating

npm

npm update -g @zigrivers/scaffold

Homebrew

brew upgrade scaffold

Plugin

/scaffold:update

Or: /plugin marketplace update zigrivers-scaffold

Existing projects

After upgrading the CLI, existing projects migrate automatically. Run scaffold status in your project directory — the state manager detects and renames old step keys, removes retired steps, normalizes artifact paths, and persists the changes atomically. No manual editing of .scaffold/state.json is needed.

Step migrations handled automatically:

  • add-playwright / add-maestro → add-e2e-testing
  • multi-model-review → automated-pr-review
  • user-stories-multi-model-review → removed (folded into review-user-stories)
  • claude-code-permissions → removed (folded into git-workflow + tech-stack)
  • multi-model-review-tasks → removed (folded into implementation-plan-review)
  • testing-strategy → tdd, implementation-tasks → implementation-plan, review-tasks → implementation-plan-review

The PRD is always created as docs/plan.md. If you have a legacy docs/prd.md from an older version, the context gatherer resolves aliased paths so downstream steps find your PRD regardless.
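
A path-alias lookup like that can be sketched as follows (hypothetical; Scaffold's actual resolver is not shown):

```typescript
import { existsSync, mkdirSync, writeFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Canonical path first, then known legacy locations. Downstream steps
// ask for "the PRD" and get whichever file actually exists.
const PRD_ALIASES = ["docs/plan.md", "docs/prd.md"];

function resolvePrdPath(root: string): string | undefined {
  for (const rel of PRD_ALIASES) {
    const full = join(root, rel);
    if (existsSync(full)) return full;
  }
  return undefined; // no PRD artifact yet
}
```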

Quick Start

The fastest way to use Scaffold is through natural language inside Claude Code. The Scaffold Runner skill handles pipeline navigation, surfaces decision points before Claude picks defaults, and tracks your progress automatically. The examples below show what you'd type in a Claude Code session.

Starting a Brand New Project

Let's say you want to build a neighborhood tool lending library — an app where neighbors can list tools they own and borrow from each other. Here's how that looks end to end.

Set up the project (one-time, in your terminal):

mkdir tool-library && cd tool-library
git init
scaffold init

The init wizard detects that this is a brand new project and asks you to pick a methodology. Choose mvp if you want to get to working code fast — it runs only 7 critical steps instead of the full 60. You can always switch to deep or custom later.

Open Claude Code in your project directory, then start talking:

"I want to build a neighborhood tool lending library where neighbors can
list tools they own, browse what's available nearby, and request to borrow
items. Run the first scaffold step."

The runner picks up create-vision (the first eligible step), asks you a few strategic questions about your idea — who's the audience, what makes this different from existing apps, what does success look like — and produces docs/vision.md. This becomes the foundation everything else builds on.

"Run the next scaffold step"

Now it runs create-prd. Claude translates your vision into a detailed product requirements document with features, user personas, success criteria, and scope boundaries. The output lands in docs/plan.md.

"Next step"

review-prd — Claude reviews the PRD for gaps, ambiguity, and completeness, then suggests improvements. You decide which suggestions to accept.

"Keep going"

user-stories — Claude breaks the PRD into detailed user stories with acceptance criteria. Each story maps back to a specific requirement so nothing falls through the cracks.

"What's left?"

The runner shows your remaining steps and which ones are unblocked. With the mvp preset, you're almost there — just review-user-stories, tdd, implementation-plan, and implementation-playbook remain.

"Finish the remaining steps"

The runner executes each remaining step in order, pausing to surface decisions that need your input (testing framework preferences, depth level for reviews, etc.) rather than letting Claude guess silently.

Once the pipeline is complete:

"Start building"

Claude picks up the first implementation task and begins writing code using TDD — tests first, then implementation. Your project now has architecture docs, coding standards, a test strategy, and a task graph, all produced from your original idea.

CLI equivalent: Everything above can also be done with scaffold run create-vision, scaffold run create-prd, scaffold next, etc. The runner skill adds interactive decision surfacing on top of these commands.

Adding Scaffold to an Existing Project

Say you have a Next.js app with a handful of features built, but no documentation, formal test strategy, or architecture docs. Scaffold can backfill all of that.

In your project root:

cd ~/projects/my-nextjs-app
scaffold init

Scaffold detects that you already have code (package.json, source files, git history) and classifies the project as brownfield. It suggests the deep methodology since existing projects benefit from thorough documentation, but you can choose any preset.

If you already have docs that match Scaffold's expected outputs (a PRD, architecture doc, etc.), bootstrap your state:

scaffold adopt

This scans your project for existing artifacts and marks those pipeline steps as complete so you don't redo work.

Now open Claude Code and skip what doesn't apply:

"Skip create-vision and create-prd — I already know what I'm building"

The runner marks those steps as skipped with your reason logged.

"Run tech-stack"

Claude scans your existing dependencies, framework choices, and configuration, then documents everything in docs/tech-stack.md — formalizing decisions you've already made so future contributors (and AI agents) understand the rationale.

"Run tdd"

Claude sets up a testing strategy tailored to your existing stack — test runner config, coverage targets, TDD workflow conventions. If you already have some tests, it builds around them.

"Run coding-standards"

Claude analyzes your existing code patterns and creates docs/coding-standards.md with linter and formatter configs that match how you're already writing code.

Continue through whatever steps make sense — git-workflow, security, implementation-plan — and skip the rest.

Later, when you want to add a new feature with full Scaffold rigor:

"Run new-enhancement"

Claude walks you through adding a feature the right way — updating the PRD, creating new user stories, setting up tasks with dependencies, and kicking off implementation. All the planning docs stay in sync.

Checking Your Progress

Scaffold persists your pipeline state in .scaffold/state.json, so you can close Claude Code, take a break, and pick up right where you left off.

In Claude Code (natural language):

"Where am I in the pipeline?"    → full progress view with phase breakdown
"What's next?"                   → shows the next unblocked step(s)
"What's left?"                   → compact view of remaining steps only

From the terminal (CLI):

scaffold status              # full pipeline progress
scaffold status --compact    # remaining work only
scaffold next                # next eligible step(s)
scaffold dashboard           # open a visual progress dashboard in your browser

Tips for New Users

  • You don't need every step. The mvp preset runs just 7 steps and gets you building fast. Start there and switch to deep or custom if you want more rigor.
  • "I'm not sure" is a valid answer. When Claude asks a question you can't answer yet, say so — it'll suggest reasonable defaults and explain the trade-offs. You can revisit any decision later.
  • You can re-run any step. If your thinking evolves, use scaffold reset <step>, then run it again. Scaffold uses update mode — it improves the existing artifact rather than starting from scratch.
  • Every step produces a real document. Vision docs, PRDs, architecture decisions, test strategies — these all land in your project's docs/ folder as markdown files. They're not throwaway; they're the source of truth your code is built from.
  • The pipeline is a guide, not a cage. Skip steps that don't apply (scaffold skip <step> --reason "..."). Run them out of order if you know what you're doing. Scaffold tracks dependencies so it'll tell you if you're missing a prerequisite.
  • Depth controls thoroughness. Each step runs at a depth from 1 (focused, fast) to 5 (exhaustive). The mvp preset defaults to depth 1; deep defaults to 5. You can override per step or per session: "Use depth 3 for everything".

The Pipeline

Phase 0 — Product Vision (vision)

You describe your idea and Claude turns it into a strategic vision document covering who it's for, what makes it different, and what success looks like. The review step stress-tests the vision for gaps, and the innovate step explores market positioning opportunities. Without this, later steps lack a clear North Star and features drift.

| Step | What It Does |
|------|-------------|
| create-vision | Claude asks about your idea — who it's for, what problem it solves, what makes it different — and produces a vision document with elevator pitch, target audience, competitive positioning, guiding principles, and success criteria. |
| review-vision | Claude stress-tests the vision across five dimensions — clarity, audience precision, competitive rigor, strategic coherence, and whether the PRD can be written from it without ambiguity — and fixes what it finds. |
| innovate-vision | Claude explores untapped opportunities — adjacent markets, AI-native capabilities, ecosystem partnerships, and contrarian positioning — and proposes innovations for your approval. (optional) |

Phase 1 — Product Definition (pre)

Claude translates your vision into a detailed product requirements document (PRD) with features, user personas, constraints, and success criteria. Then it breaks the PRD into user stories — specific things users can do, each with testable acceptance criteria in Given/When/Then format. Review and innovation steps audit for gaps and suggest enhancements. Without this, you're building without a spec.

| Step | What It Does |
|------|-------------|
| create-prd | Claude translates your vision (or idea, if no vision exists) into a product requirements document with problem statement, user personas, prioritized feature list, constraints, non-functional requirements, and measurable success criteria. |
| innovate-prd | Claude analyzes the PRD for feature-level gaps — competitive blind spots, UX enhancements, AI-native possibilities — and proposes additions for your approval. (optional) |
| review-prd | Claude reviews the PRD across eight passes — problem rigor, persona coverage, feature scoping, success criteria, internal consistency, constraints, non-functional requirements — and fixes blocking issues. |
| user-stories | Claude breaks every PRD feature into user stories ("As a [persona], I want [action], so that [outcome]") organized by epic, each with testable acceptance criteria in Given/When/Then format. |
| innovate-user-stories | Claude identifies UX enhancement opportunities — progressive disclosure, smart defaults, accessibility improvements — and integrates approved changes into existing stories. (optional) |
| review-user-stories | Claude verifies every PRD feature maps to at least one story, checks that acceptance criteria are specific enough to test, validates story independence, and builds a requirements traceability index at higher depths. |

Phase 2 — Project Foundation (foundation)

Claude researches and documents your technology choices (language, framework, database) with rationale, creates coding standards tailored to your stack with actual linter configs, defines your testing strategy and test pyramid, and designs a directory layout optimized for parallel AI agent work. Without this, agents guess at conventions and produce inconsistent code.

| Step | What It Does |
|------|-------------|
| beads | Sets up Beads task tracking with a lessons-learned file for cross-session learning, and creates the initial CLAUDE.md skeleton with core principles and workflow conventions. (optional) |
| tech-stack | Claude researches technology options for your project — language, framework, database, hosting, auth — evaluates each against your requirements, and documents every choice with rationale and alternatives considered. |
| coding-standards | Claude creates coding standards tailored to your tech stack — naming conventions, error handling patterns, import organization, AI-specific rules — and generates working linter and formatter config files. |
| tdd | Claude defines your testing approach — which types of tests to write at each layer, coverage targets, what to mock and what not to, test data patterns — so agents write the right tests from the start. |
| project-structure | Claude designs a directory layout optimized for parallel AI agent work (minimizing file conflicts), documents where each type of file belongs, and creates the actual directories in your project. |

Phase 3 — Development Environment (environment)

Claude sets up your local dev environment with one-command startup and live reload, creates a design system with color palette, typography, and component patterns (web apps only), configures your git branching strategy with CI pipeline and worktree scripts for parallel agents, optionally sets up automated PR review with multi-model validation, and configures AI memory so conventions persist across sessions. Without this, you're manually configuring tooling instead of building.

| Step | What It Does |
|------|-------------|
| dev-env-setup | Claude configures your project so make dev (or equivalent) starts everything — dev server with live reload, local database, environment variables — and documents the setup in a getting-started guide. |
| design-system | Claude creates a visual language — color palette (WCAG-compliant), typography scale, spacing system, component patterns — and generates working theme config files for your frontend framework. (web apps only) |
| git-workflow | Claude sets up your branching strategy, commit message format, PR workflow, CI pipeline with lint and test jobs, and worktree scripts so multiple AI agents can work in parallel without conflicts. |
| automated-pr-review | Claude configures automated code review — using Codex and/or Gemini CLIs for dual-model review when available, or an external bot — with severity definitions and review criteria tailored to your project. (optional) |
| ai-memory-setup | Claude extracts conventions from your docs into path-scoped rule files that load automatically, optimizes CLAUDE.md with a pointer pattern, and optionally sets up persistent cross-session memory. |

Phase 4 — Testing Integration (integration)

Claude auto-detects your platform (web or mobile) and configures end-to-end testing — Playwright for web apps, Maestro for mobile/Expo. Skips automatically for backend-only projects. Without this, your test pyramid has no top level.

| Step | What It Does |
|------|-------------|
| add-e2e-testing | Claude detects whether your project is web or mobile, then configures Playwright (web) or Maestro (mobile) with a working smoke test, baseline screenshots, and guidance on when to use E2E vs. unit tests. (optional) |

Phase 5 — Domain Modeling (modeling)

Claude analyzes your user stories to identify all the core concepts in your project — the entities (things like Users, Orders, Tools), their relationships, the rules that must always be true, and the events that happen when state changes. This becomes the shared language between all your docs and code. Without this, different docs use different names for the same concept and agents create duplicate logic.

| Step | What It Does |
|------|-------------|
| domain-modeling | Claude analyzes your user stories to identify the core concepts in your project (entities, their relationships, the rules that must always hold true), and establishes a shared vocabulary that all docs and code will use. |
| review-domain-modeling | Claude verifies every PRD feature maps to a domain entity, checks that business rules are enforceable, and ensures the shared vocabulary is consistent across all project files. |

Phase 6 — Architecture Decisions (decisions)

Claude documents every significant technology and design decision as an Architecture Decision Record (ADR) — what was decided, what alternatives were considered, and why. The review catches contradictions and missing decisions. Without this, future contributors (human or AI) don't know why things are the way they are.

| Step | What It Does |
|------|-------------|
| adrs | Claude documents every significant design decision — what was chosen, what alternatives were considered with pros and cons, and what consequences follow — so future contributors understand why, not just what. |
| review-adrs | Claude checks for contradictions between decisions, missing decisions implied by the architecture, and whether every choice has honest trade-off analysis. |

Phase 7 — System Architecture (architecture)

Claude designs the system blueprint — which components exist, how data flows between them, where each piece of code lives, and how the system can be extended. This translates your domain model and decisions into a concrete structure that implementation will follow. Without this, agents make conflicting structural assumptions.

| Step | What It Does |
|------|-------------|
| system-architecture | Claude designs the system blueprint — which components exist, how data flows between them, where each module lives in the directory tree, and where extension points allow custom behavior. |
| review-architecture | Claude verifies every domain concept lands in a component, every decision constraint is respected, no components are orphaned from data flows, and the module structure minimizes merge conflicts. |

Phase 8 — Specifications (specification)

Claude creates detailed interface specifications for each layer of your system. Database schema translates domain entities into tables with constraints that enforce business rules. API contracts define every endpoint with request/response shapes, error codes, and auth requirements. UX spec maps out user flows, interaction states, accessibility requirements, and responsive behavior. Each is conditional — only generated if your project has that layer. Without these, agents guess at interfaces and implementations don't align.

| Step | What It Does |
|------|-------------|
| database-schema | Claude translates your domain model into database tables with constraints that enforce business rules, indexes optimized for your API's query patterns, and a reversible migration strategy. (if applicable) |
| review-database | Claude verifies every domain entity has a table, constraints enforce business rules at the database level, and indexes cover all query patterns from the API contracts. (if applicable) |
| api-contracts | Claude specifies every API endpoint — request/response shapes, error codes with human-readable messages, auth requirements, pagination, and example payloads — so frontend and backend can be built in parallel. (if applicable) |
| review-api | Claude checks that every domain operation has an endpoint, error responses include domain-specific codes, and auth requirements are specified for every route. (if applicable) |
| ux-spec | Claude maps out every user flow with all interaction states (loading, error, empty, populated), defines accessibility requirements (WCAG level, keyboard nav), and specifies responsive behavior at each breakpoint. (if applicable) |
| review-ux | Claude verifies every user story has a flow, accessibility requirements are met, all error states are documented, and the design system is used consistently. (if applicable) |

Phase 9 — Quality (quality)

Claude reviews your testing strategy for coverage gaps, generates test skeleton files from your user story acceptance criteria (one test per criterion, ready for TDD), creates automated eval checks that verify code meets your documented standards, designs your deployment pipeline with monitoring and incident response, and conducts a security review covering OWASP Top 10, threat modeling, and input validation rules. Without this, quality is an afterthought bolted on at the end.

| Step | What It Does |
|------|-------------|
| review-testing | Claude audits the testing strategy for coverage gaps by layer, verifies edge cases from domain invariants are tested, and checks that test environment assumptions match actual config. |
| story-tests | Claude generates a test skeleton file for each user story — one pending test case per acceptance criterion, tagged with story and criterion IDs — giving agents a TDD starting point for every feature. |
| create-evals | Claude generates automated checks that verify your code matches your documented standards — file placement, naming conventions, feature-to-test coverage, API contract alignment, and more — using your project's own test framework. |
| operations | Claude designs your deployment pipeline (build, test, deploy, verify, rollback), defines monitoring metrics with alert thresholds, and writes incident response procedures with rollback instructions. |
| review-operations | Claude verifies the full deployment lifecycle is documented, monitoring covers latency/errors/saturation, alert thresholds have rationale, and common failure scenarios have runbook entries. |
| security | Claude conducts a security review of your entire system — OWASP Top 10 coverage, input validation rules for every user-facing field, data classification, secrets management, CORS policy, rate limiting, and a threat model covering all trust boundaries. |
| review-security | Claude verifies OWASP coverage is complete, auth boundaries match API contracts, every secret is accounted for, and the threat model covers all trust boundaries. Highest priority for multi-model review. |

Phase 10 — Platform Parity (parity)

For projects targeting multiple platforms (web + mobile, for example), Claude audits all documentation for platform-specific gaps — features that work on one platform but aren't specified for another, input pattern differences, and platform-specific testing coverage. Skips automatically for single-platform projects.

| Step | What It Does |
|------|-------------|
| platform-parity-review | Claude audits all documentation for platform-specific gaps — features missing on one platform, input pattern differences (touch vs. mouse), and platform-specific testing coverage. (multi-platform only) |

Phase 11 — Consolidation (consolidation)

Claude optimizes your CLAUDE.md to stay under 200 lines with critical patterns front-loaded, then audits all workflow documentation for consistency — making sure commit formats, branch naming, PR workflows, and key commands match across every doc. Without this, agents encounter conflicting instructions.

| Step | What It Does |
|------|-------------|
| claude-md-optimization | Claude removes redundancy from CLAUDE.md, fixes terminology inconsistencies, front-loads critical patterns (TDD, commit format, worktrees), and keeps it under 200 lines so agents actually read and follow it. |
| workflow-audit | Claude audits every document that mentions workflow (CLAUDE.md, git-workflow, coding-standards, dev-setup) and fixes any inconsistencies in commit format, branch naming, PR steps, or key commands. |

Phase 12 — Planning (planning)

Claude decomposes your user stories and architecture into concrete, implementable tasks — each scoped to ~150 lines of code, limited to 3 files, with clear acceptance criteria and no ambiguous decisions for agents to guess at. The review validates coverage (every feature has tasks), checks the dependency graph for cycles, and runs multi-model validation at higher depths. Without this, agents don't know what to build or in what order.

| Step | What It Does |
|------|-------------|
| implementation-plan | Claude breaks your user stories and architecture into concrete tasks — each scoped to ~150 lines of code and 3 files max, with clear acceptance criteria, no ambiguous decisions, and explicit dependencies. |
| implementation-plan-review | Claude verifies every feature has implementation tasks, no task is too large for one session, the dependency graph has no cycles, and every acceptance criterion maps to at least one task. |
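The sizing rules above can be expressed as a simple validator. This is a sketch under assumed names — `PlannedTask` and `sizeViolations` are illustrative, not Scaffold's actual task schema.

```typescript
// Illustrative task shape matching the sizing rules described above
// (~150 LOC budget, at most 3 files). Field names are assumptions.
interface PlannedTask {
  id: string;
  estimatedLoc: number;
  files: string[];
  dependsOn: string[];
}

function sizeViolations(task: PlannedTask): string[] {
  const problems: string[] = [];
  if (task.estimatedLoc > 150) problems.push("exceeds ~150 LOC budget");
  if (task.files.length > 3) problems.push("touches more than 3 files");
  return problems;
}

const ok = sizeViolations({
  id: "T-1",
  estimatedLoc: 120,
  files: ["src/auth/login.ts"],
  dependsOn: [],
});
const tooBig = sizeViolations({
  id: "T-2",
  estimatedLoc: 400,
  files: ["a.ts", "b.ts", "c.ts", "d.ts"],
  dependsOn: ["T-1"],
});
```

Keeping tasks inside these bounds is what lets a single agent session finish a task without running out of context.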

Phase 13 — Validation (validation)

Seven cross-cutting audits that catch problems before implementation begins. Without this phase, hidden spec problems surface during implementation as expensive rework.

| Step | What It Does |
|------|-------------|
| scope-creep-check | Claude compares everything that's been specified against the original PRD and flags anything that wasn't in the requirements — features, components, or tasks that crept in without justification. |
| dependency-graph-validation | Claude verifies the task dependency graph has no cycles (which would deadlock agents), no orphaned tasks, and no chains deeper than three sequential dependencies. |
| implementability-dry-run | Claude simulates picking up each task as an implementing agent and flags anything ambiguous — unclear acceptance criteria, missing input files, undefined error handling — that would force an agent to guess. |
| decision-completeness | Claude checks that every technology choice and architectural pattern has a recorded decision with rationale, and that no two decisions contradict each other. |
| traceability-matrix | Claude builds a map showing that every PRD requirement traces through to user stories, architecture components, implementation tasks, and test cases — with no gaps in either direction. |
| cross-phase-consistency | Claude traces every named concept (entities, fields, API endpoints) across all documents and flags any naming drift, terminology mismatches, or data shape inconsistencies. |
| critical-path-walkthrough | Claude walks the most important user journeys end-to-end across every spec layer — PRD to stories to UX to API to database to tasks — and flags any broken handoffs or missing layers. |
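The cycle check behind dependency-graph-validation can be sketched with Kahn's algorithm (which the contributor docs note Scaffold's dependency module uses): if a topological pass cannot consume every task, the leftovers sit on a cycle. The function below is a minimal sketch, not the actual `src/core/dependency/` code.

```typescript
// deps maps task id -> ids of its prerequisite tasks.
// Kahn's algorithm: repeatedly process tasks whose prerequisites are done;
// if some tasks can never be processed, the graph contains a cycle.
function hasCycle(deps: Record<string, string[]>): boolean {
  const inDegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const [task, prereqs] of Object.entries(deps)) {
    inDegree.set(task, prereqs.length);
    for (const p of prereqs) {
      if (!dependents.has(p)) dependents.set(p, []);
      dependents.get(p)!.push(task);
    }
  }
  const queue = [...inDegree].filter(([, d]) => d === 0).map(([t]) => t);
  let processed = 0;
  while (queue.length) {
    const t = queue.shift()!;
    processed++;
    for (const d of dependents.get(t) ?? []) {
      const next = inDegree.get(d)! - 1;
      inDegree.set(d, next);
      if (next === 0) queue.push(d);
    }
  }
  return processed < inDegree.size; // unprocessed tasks lie on a cycle
}

const linear = hasCycle({ a: [], b: ["a"], c: ["b"] }); // a -> b -> c
const cyclic = hasCycle({ a: ["c"], b: ["a"], c: ["b"] }); // a -> b -> c -> a
```

A cycle here would deadlock agents, since every task on it waits on another task in the same loop.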

Phase 14 — Finalization (finalization)

Claude applies all findings from the validation phase, freezes documentation (ready for implementation), creates a developer onboarding guide (the "start here" document for anyone joining the project), and writes the implementation playbook — the operational document agents reference during every coding session. Without this, there's no bridge between planning and building.

| Step | What It Does |
|------|-------------|
| apply-fixes-and-freeze | Claude applies all findings from the validation phase, fixes blocking issues, and freezes every document with a version marker — signaling that specs are implementation-ready. |
| developer-onboarding-guide | Claude synthesizes all your frozen docs into a single onboarding narrative — project purpose, architecture overview, top coding patterns, key commands, and a quick-start checklist — so anyone joining the project knows exactly where to begin. |
| implementation-playbook | Claude writes the playbook agents reference during every coding session — task execution order, which docs to read before each task, the TDD loop to follow, quality gates to pass, and the handoff format between agents. |

Phase 15 — Build (build)

Stateless execution steps that can be run repeatedly once Phase 14 is complete. Single-agent and multi-agent modes start the TDD implementation loop (claim a task, write a failing test, make it pass, refactor, commit, repeat). Resume commands restore session context after breaks. Quick-task handles one-off bug fixes outside the main plan. New-enhancement adds a feature with full planning rigor.

| Step | What It Does |
|------|-------------|
| single-agent-start | Claude claims the next task, writes a failing test, implements until it passes, refactors, runs quality gates, commits, and repeats — following the implementation playbook. |
| single-agent-resume | Claude recovers context from the previous session — reads lessons learned, checks git state, reconciles merged PRs — and continues the TDD loop from where you left off. |
| multi-agent-start | Claude sets up a named agent in an isolated git worktree so multiple agents can implement tasks simultaneously without file conflicts, each following the same TDD loop. |
| multi-agent-resume | Claude verifies the worktree, syncs with main, reconciles completed tasks, and resumes the agent's TDD loop from the previous session. |
| quick-task | Claude takes a one-off request (bug fix, refactor, performance tweak) and creates a single well-scoped task with acceptance criteria and a test plan — for work outside the main implementation plan. |
| new-enhancement | Claude walks you through adding a feature the right way — updating the PRD, creating new user stories, running an innovation pass, and generating implementation tasks that integrate with your existing plan. |

Multi-Model Review

Just like you'd want more than one person reviewing a pull request, multi-model review gets independent perspectives from different AI models. When Claude, Codex, and Gemini independently flag the same issue, you know it's real. When they all approve, you can proceed with confidence.

Why Multiple Models?

  • Different blind spots — what Claude considers correct, another model may flag as problematic. Each model reasons differently about architecture, security, and edge cases.
  • Independent review — each model reviews your work without seeing what the others said, preventing groupthink.
  • Confidence through agreement — when two or three models flag the same issue, it's almost certainly real. When they all approve, you can move forward confidently.
  • Catches what single-model misses — security gaps, inconsistent naming across docs, missing edge cases, and specification contradictions that one model overlooks.

Quick Setup

Multi-model review is optional. It requires installing one or both of these additional CLI tools:

Codex CLI — OpenAI's command-line coding tool. Requires a ChatGPT subscription (Plus/Pro/Team).

npm install -g @openai/codex

Gemini CLI — Google's command-line coding tool. Free tier available with a Google account.

npm install -g @google/gemini-cli

You don't need both — Scaffold works with whichever CLIs are available. Having both gives the strongest review (three independent perspectives). See Prerequisites for auth setup and verification commands.

How It Works

  1. Claude reviews first — completes its own structured multi-pass review using different review lenses (coverage, consistency, quality, downstream readiness)
  2. Independent external review — the document being reviewed is sent to each available CLI. They don't see Claude's findings or each other's output — every review is independent.
  3. Findings are reconciled — Scaffold merges all findings by confidence level:

| Scenario | Confidence | Action |
|----------|-----------|--------|
| Both models flag the same issue | High | Fix immediately |
| Both models approve | High | Proceed confidently |
| One flags P0, other approves | High | Fix it (P0 is critical) |
| One flags P1, other approves | Medium | Review before fixing |
| Models contradict each other | Low | Present both to user |
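The reconciliation rules can be sketched as a small decision function. This is illustrative only — the `Severity` type and `reconcile` helper are assumptions, and genuinely contradictory recommendations (the low-confidence row, where models disagree on substance rather than one simply approving) aren't representable in this simplified shape.

```typescript
// Sketch of the confidence table above. null means the model approved
// (raised no finding); "P0" is critical, "P1" is important.
type Severity = "P0" | "P1" | null;

function reconcile(
  a: Severity,
  b: Severity
): { confidence: "high" | "medium"; action: string } {
  if (a !== null && b !== null)
    return { confidence: "high", action: "fix immediately" };
  if (a === null && b === null)
    return { confidence: "high", action: "proceed confidently" };
  const flagged = a ?? b; // exactly one model raised a finding
  if (flagged === "P0")
    return { confidence: "high", action: "fix it (P0 is critical)" };
  return { confidence: "medium", action: "review before fixing" };
}
```

The asymmetry is deliberate: a single P0 finding is treated as high confidence even without agreement, because the cost of ignoring a real critical issue outweighs the cost of double-checking a false one.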

Scaffold verifies CLI authentication before every dispatch. If a token has expired, it tells you and provides the command to re-authenticate — it never silently skips a review.

When It Runs

Multi-model review activates automatically at depth 4-5 during any review or validation step — that's 20 steps in total, including all domain reviews (review-prd, review-architecture, review-security, etc.) and all 7 validation checks (traceability, scope creep, implementability, etc.).

At depth 1-3, reviews are Claude-only — still thorough with multiple passes, but single-perspective. You control depth globally during scaffold init, per session ("Use depth 5 for everything"), or per step ("Run review-security at depth 5").

What You Need

  • Depth 4 or 5 — set during scaffold init or override per step
  • At least one additional CLI — Codex or Gemini (or both for triple-model review)
  • Valid authentication — Scaffold checks before every dispatch and tells you if credentials need refreshing

Methodology Presets

Not every project needs all 60 steps. Choose a methodology when you run scaffold init:

deep (depth 5)

All steps enabled. Comprehensive analysis of every angle — domain modeling, ADRs, security review, traceability matrix, the works. At depth 4-5, review steps dispatch to Codex/Gemini CLIs for multi-model validation. Best for complex systems, team projects, or when you want thorough documentation.

mvp (depth 1)

Only 7 critical steps: create-prd, review-prd, user-stories, review-user-stories, tdd, implementation-plan, and implementation-playbook. Minimal ceremony — get to code fast. Best for prototypes, hackathons, or solo projects.

custom (configurable)

You choose which steps to enable and set a default depth (1-5). You can also override depth per step. Best when you know which parts of the pipeline matter for your project.

You can change methodology mid-pipeline with scaffold init --methodology <preset>. Scaffold preserves your completed work and adjusts what's remaining.

CLI Commands

| Command | What It Does |
|---------|-------------|
| scaffold init | Initialize .scaffold/ with config, state, and decisions log |
| scaffold run <step> | Execute a pipeline step (assembles and outputs the full prompt) |
| scaffold build | Generate platform adapter output (commands/, AGENTS.md, etc.) |
| scaffold adopt | Bootstrap state from existing artifacts (brownfield projects) |
| scaffold skip <step> [<step2>...] | Skip one or more steps with a reason |
| scaffold complete <step> | Mark a step as completed (for steps executed outside scaffold run) |
| scaffold reset <step> | Reset a step back to pending |
| scaffold status [--compact] | Show pipeline progress (--compact shows only remaining work) |
| scaffold next | List next unblocked step(s) |
| scaffold check <step> | Check if a conditional step applies to this project |
| scaffold validate | Validate meta-prompts, config, state, and dependency graph |
| scaffold list | List all steps with status |
| scaffold info <step> | Show full metadata for a step |
| scaffold version | Show Scaffold version |
| scaffold update | Update to the latest version |
| scaffold dashboard | Open a visual progress dashboard in your browser |
| scaffold decisions | Show all logged decisions |
| scaffold knowledge | Manage project-local knowledge base overrides |
| scaffold skill install | Install scaffold skills into the current project |
| scaffold skill list | Show available skills and installation status |
| scaffold skill remove | Remove scaffold skills from the current project |

Examples

# Initialize a new project with deep methodology
scaffold init

# Run a specific step
scaffold run create-prd

# See what's next
scaffold next

# Check full pipeline status
scaffold status

# See only remaining work
scaffold status --compact

# Skip multiple steps at once
scaffold skip design-system add-e2e-testing --reason "backend-only project"

# Check if a step applies before running it
scaffold check add-e2e-testing
# → Applicable: yes | Platform: web | Brownfield: no | Mode: fresh

scaffold check automated-pr-review
# → Applicable: yes | GitHub remote: yes | Available CLIs: codex, gemini | Recommended: local-cli (dual-model)

scaffold check ai-memory-setup
# → Rules: no | MCP server: none | Hooks: none | Mode: fresh

# Re-run a completed step in update mode
scaffold reset review-prd --force
scaffold run review-prd

# Open the visual dashboard
scaffold dashboard

Knowledge System

Scaffold ships with 60 domain expertise entries organized in seven categories:

  • core/ (25 entries) — eval craft, testing strategy, domain modeling, API design, database design, system architecture, ADR craft, security best practices, operations, task decomposition, user stories, UX specification, design system tokens, user story innovation, AI memory management, coding conventions, tech stack selection, project structure patterns, task tracking, CLAUDE.md patterns, multi-model review dispatch, review step template, dev environment, git workflow patterns, automated review tooling
  • product/ (3 entries) — PRD craft, PRD innovation, gap analysis
  • review/ (13 entries) — review methodology (shared), plus domain-specific review passes for PRD, user stories, domain modeling, ADRs, architecture, API design, database design, UX specification, testing, security, operations, implementation tasks
  • validation/ (7 entries) — critical path analysis, cross-phase consistency, scope management, traceability, implementability, decision completeness, dependency validation
  • finalization/ (3 entries) — implementation playbook, developer onboarding, apply-fixes-and-freeze
  • execution/ (4 entries) — TDD execution loop, task claiming strategy, worktree management, enhancement workflow
  • tools/ (3 entries) — release management, version strategy, session analysis

Each pipeline step declares which knowledge entries it needs in its frontmatter. The assembly engine injects them automatically. Knowledge files with a ## Deep Guidance section are optimized for the CLI — only the deep guidance content is loaded into the assembled prompt, skipping the summary to avoid redundancy with the prompt text.
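The "deep guidance only" loading behavior can be sketched as a section extractor. This is a guess at the mechanism, not Scaffold's actual assembly code: `deepGuidance` is a hypothetical helper that keeps everything between the `## Deep Guidance` heading and the next same-level heading.

```typescript
// Sketch: given a knowledge entry's markdown, return only the
// Deep Guidance section, or null if the entry has no such section
// (in which case the caller would load the full file instead).
function deepGuidance(markdown: string): string | null {
  const lines = markdown.split("\n");
  const start = lines.findIndex((l) => l.trim() === "## Deep Guidance");
  if (start === -1) return null;
  const rest = lines.slice(start + 1);
  const end = rest.findIndex((l) => l.startsWith("## "));
  return rest
    .slice(0, end === -1 ? rest.length : end)
    .join("\n")
    .trim();
}

const entry = [
  "# Testing Strategy",
  "Summary for human readers.",
  "## Deep Guidance",
  "Prefer one assertion concept per test.",
  "## References",
].join("\n");
const loaded = deepGuidance(entry);
```

Skipping the summary keeps the assembled prompt from repeating material the prompt text already states.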

Project-local overrides

Teams can create project-specific knowledge entries in .scaffold/knowledge/ that layer over the global entries:

scaffold knowledge update testing-strategy "We use Playwright for all E2E tests, Jest for unit tests"
scaffold knowledge list                    # See all entries (global + local)
scaffold knowledge show testing-strategy   # View effective content
scaffold knowledge reset testing-strategy  # Remove override, revert to global

Local overrides are committable — the whole team shares enriched, project-specific guidance.

After the Pipeline: Tools & Ongoing Commands

Once your project is scaffolded and you're building features, two categories of commands are available:

Build Phase (Phase 15)

These are stateless pipeline steps — they appear in scaffold next once Phase 14 is complete and can be run repeatedly:

| Command | When to Use |
|---------|-------------|
| scaffold run single-agent-start | Start the autonomous implementation loop — Claude picks up tasks and builds. |
| scaffold run single-agent-resume | Resume where you left off after closing Claude Code. |
| scaffold run multi-agent-start | Start parallel implementation with multiple agents in worktrees. |
| scaffold run multi-agent-resume | Resume parallel agent work after a break. |
| scaffold run quick-task | Create a focused task for a bug fix, refactor, or small improvement. |
| scaffold run new-enhancement | Add a new feature to an already-scaffolded project. Updates the PRD, creates new user stories, and sets up tasks with dependencies. |

Utility Tools

These are orthogonal to the pipeline — usable at any time, not tied to pipeline state. Defined in tools/ with category: tool frontmatter:

| Command | When to Use |
|---------|-------------|
| scaffold run version-bump | Mark a milestone with a version number without the full release ceremony. |
| scaffold run release | Ship a new version — changelog, Git tag, and GitHub release. Supports --dry-run, current, and rollback. |
| scaffold run version | Show the current Scaffold version. |
| scaffold run update | Update Scaffold to the latest version. |
| scaffold run dashboard | Open a visual progress dashboard in your browser. |
| scaffold run prompt-pipeline | Print the full pipeline reference table. |
| scaffold run session-analyzer | Analyze Claude Code session logs for patterns and insights. |

All of these are also available as slash commands (/scaffold:release, /scaffold:quick-task, etc.) when the plugin is installed.

Releasing Your Project

Version bumps (development milestones)

/scaffold:version-bump

Bumps the version number and updates the changelog, but doesn't create tags, push, or publish a GitHub release. Think of it as a checkpoint.

Creating a release

/scaffold:release

Claude analyzes your commits since the last release, suggests whether this is a major, minor, or patch version bump, and walks you through:

  1. Running your project's tests
  2. Updating the version number in your project files
  3. Generating a changelog entry
  4. Creating a Git tag and GitHub release

Options: --dry-run to preview, minor/major/patch to specify the bump, current to release an already-bumped version, rollback to undo.

Glossary

| Term | What It Means |
|------|---------------|
| Assembly engine | The runtime system that constructs full 7-section prompts from meta-prompts, knowledge entries, project context, and methodology settings. |
| CLAUDE.md | A configuration file in your project root that tells Claude Code how to work in your project. |
| Depth | A 1-5 scale controlling how thorough each step's analysis is, from MVP-focused (1) to exhaustive (5). |
| Frontmatter | The YAML metadata block at the top of meta-prompt files, declaring dependencies, outputs, knowledge entries, and other configuration. |
| Knowledge base | 60 domain expertise entries that get injected into prompts. Can be extended with project-local overrides. |
| MCP | Model Context Protocol. A way for Claude to use external tools like a headless browser. |
| Meta-prompt | A short intent declaration in pipeline/ that gets assembled into a full prompt at runtime. |
| Methodology | A preset (deep, mvp, custom) controlling which steps run and at what depth. |
| Multi-model review | Independent validation from Codex/Gemini CLIs at depth 4-5, catching blind spots a single model misses. |
| PRD | Product Requirements Document. The foundation for everything Scaffold builds. |
| Slash commands | Commands in Claude Code starting with /. For example, /scaffold:create-prd. |
| Worktrees | A git feature for multiple working copies. Scaffold uses these for parallel agent execution. |

Troubleshooting / FAQ

I ran a command and nothing happened. Make sure Scaffold is installed — run scaffold version or /scaffold:prompt-pipeline in Claude Code.

Which steps can I skip? Use scaffold skip <step> --reason "..." to skip any step. You can skip multiple steps at once: scaffold skip design-system add-e2e-testing --reason "backend-only". The mvp preset only enables 7 critical steps by default. With the custom preset, you choose exactly which steps to run.

Can I go back and re-run a step? Yes. Use scaffold reset <step> --force to reset it to pending, then scaffold run <step>. When re-running a completed step, Scaffold uses update mode — it loads the existing artifact and generates improvements rather than starting from scratch.

Do I need to run every step in one sitting? No. Pipeline state is persisted in .scaffold/state.json. Run scaffold status when you come back to see where you left off, or scaffold next for what's unblocked.

What if Claude asks me a question I don't know the answer to? Say you're not sure. Claude suggests reasonable defaults and explains the trade-offs. You can revisit decisions later.

Can I use this for an existing project? Yes. Run scaffold init — the project detector will identify it as brownfield and suggest the deep methodology. Use scaffold adopt to bootstrap state from existing artifacts.

How do I customize the knowledge base for my project? Use scaffold knowledge update <name> to create a project-local override in .scaffold/knowledge/. It layers over the global entry and is committable for team sharing.

How do I check if an optional step applies to my project? Run scaffold check <step>. For example, scaffold check add-e2e-testing detects whether your project has a web or mobile frontend. scaffold check automated-pr-review checks for a GitHub remote and available review CLIs.

Codex CLI fails with "stdin is not a terminal" Use codex exec "prompt" (headless mode), not bare codex "prompt" (interactive TUI). The multi-model-dispatch skill documents the correct invocation patterns.

Codex CLI fails with "Not inside a trusted directory" Add the --skip-git-repo-check flag: codex exec --skip-git-repo-check -s read-only --ephemeral "prompt". This is required when the project hasn't initialized git yet.

Gemini CLI hangs on "Opening authentication page" or returns empty output Gemini's child process relaunch shows a consent prompt that hangs in non-TTY shells. All scaffold Gemini invocations now include NO_BROWSER=true to suppress this. If you're invoking Gemini manually, prepend NO_BROWSER=true gemini -p "...". If auth tokens have actually expired, run ! gemini -p "hello" to re-authenticate interactively. For CI/headless: set GEMINI_API_KEY env var instead of OAuth.

Codex CLI auth expired ("refresh token", "sign in again") Run ! codex login to re-authenticate interactively. For CI/headless: set CODEX_API_KEY env var. Check auth status with codex login status.

How does Scaffold invoke Codex/Gemini under the hood? Scaffold handles CLI invocation automatically — you never need to type these commands. If you're debugging or curious, here are the headless invocation patterns:

# Codex (headless mode — use "exec", NOT bare "codex")
codex exec --skip-git-repo-check -s read-only --ephemeral "Review this artifact..." 2>/dev/null

# Gemini (headless mode — use "-p" flag, NO_BROWSER prevents consent prompt hang)
NO_BROWSER=true gemini -p "Review this artifact..." --output-format json --approval-mode yolo 2>/dev/null

These are documented in detail in the multi-model-dispatch skill.

I upgraded and my pipeline shows old step names Run scaffold status — the state manager automatically migrates old step names (e.g., add-playwright → add-e2e-testing, multi-model-review → automated-pr-review) and removes retired steps.

Architecture (for contributors)

The project is a TypeScript CLI (@zigrivers/scaffold) built with yargs, targeting ES2022/Node16 ESM.

Source layout

src/
├── cli/commands/     # 19 CLI command implementations
├── cli/middleware/    # Project root detection, output mode resolution
├── cli/output/       # Output strategies (interactive, json, auto)
├── core/assembly/    # Assembly engine — meta-prompt → full prompt
├── core/adapters/    # Platform adapters (Claude Code, Codex, Universal)
├── core/dependency/  # DAG builder, topological sort, eligibility
├── core/knowledge/   # Knowledge update assembler
├── state/            # State manager, lock manager, decision logger
├── config/           # Config loading, migration, schema validation
├── project/          # Project detector, CLAUDE.md manager, adoption
├── wizard/           # Init wizard (interactive + --auto)
├── validation/       # Config, state, frontmatter validators
├── types/            # TypeScript types and enums
├── utils/            # FS helpers, errors, levenshtein
└── dashboard/        # HTML dashboard generator

Key modules

  • Assembly engine (src/core/assembly/engine.ts) — Pure orchestrator with no I/O. Constructs 7-section prompts from meta-prompt + knowledge + context + methodology + instructions + depth guidance.
  • State manager (src/state/state-manager.ts) — Atomic writes via tmp + fs.renameSync(). Tracks step status, in-progress records, and next-eligible cache. Includes migration system for step renames and retired steps.
  • Dependency graph (src/core/dependency/) — Kahn's algorithm topological sort with phase-aware ordering and cycle detection.
  • Platform adapters (src/core/adapters/) — 3-step lifecycle (initialize → generateStepWrapper → finalize) producing Claude Code commands, Codex AGENTS.md, or universal markdown.
  • Project detector (src/project/detector.ts) — Scans for file system signals to classify projects as greenfield, brownfield, or v1-migration.
  • Check command (src/cli/commands/check.ts) — Applicability detection for conditional steps (platform detection, GitHub remote detection, CLI availability).
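The state manager's atomic-write pattern is worth spelling out: write the new state to a temp file in the same directory, then rename over the target, so a concurrent reader never observes a half-written JSON file. The sketch below is simplified from the described behavior, not the actual `src/state/state-manager.ts` code.

```typescript
import { writeFileSync, renameSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Write JSON atomically: the temp file lives next to the target so the
// rename stays within one filesystem, where POSIX rename is atomic.
function atomicWriteJson(path: string, data: unknown): void {
  const tmp = `${path}.tmp-${process.pid}`;
  writeFileSync(tmp, JSON.stringify(data, null, 2)); // crash here leaves only a stray tmp file
  renameSync(tmp, path); // readers see either the old file or the new one, never a mix
}

const target = join(tmpdir(), "scaffold-state-demo.json");
atomicWriteJson(target, { steps: { "create-prd": "completed" } });
const roundTrip = JSON.parse(readFileSync(target, "utf8"));
```

This is why an interrupted `scaffold run` can't corrupt `.scaffold/state.json`: the worst case is a leftover temp file, never a truncated state file.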

Content layout

pipeline/             # 60 meta-prompts organized into 16 phases (phases 0-15, including build)
tools/                # 7 tool meta-prompts (stateless, category: tool)
knowledge/            # 60 domain expertise entries (core, product, review, validation, finalization, execution, tools)
methodology/          # 3 YAML presets (deep, mvp, custom)
commands/             # 80 Claude Code slash commands (60 pipeline + 13 build-phase + 7 tools)
skills/               # 3 Claude Code skills (pipeline reference, runner, multi-model dispatch)

Testing

  • Vitest for unit and E2E tests (73 test files, 997 tests, 90% coverage)
  • Performance benchmarks — assembly p95 < 500ms, state I/O p95 < 100ms, graph build p95 < 2s
  • Shell script tests via bats (70 tests covering dashboard, worktree, frontmatter, install/uninstall)
  • Meta-evals — 39 cross-system consistency checks validating pipeline ↔ command ↔ knowledge integrity
  • Coverage thresholds — CI enforces 84/80/88/84 minimums (statements/branches/functions/lines)
  • Run: npm test (unit + E2E), npm run test:perf (performance), make check (bash gates), make check-all (full CI gate)

Contributing

  1. Meta-prompt content lives in pipeline/ — edit the relevant .md file
  2. Run scaffold build to regenerate commands/ from pipeline meta-prompts
  3. Run make check-all (lint + type-check + test + evals) before submitting
  4. Knowledge entries live in knowledge/ — follow the existing frontmatter schema
  5. ADRs documenting architectural decisions are in docs/v2/adrs/

License

MIT