@hustle-together/api-dev-tools
v3.3.0
API Development Tools for Claude Code v3.0
Interview-driven, research-first API development workflow with continuous verification loops
Automates the complete API development lifecycle, from understanding requirements through deployment, using structured interviews, adaptive research, test-driven development, and comprehensive documentation with automatic re-grounding.
What's New in v3.0
- Phase 0: Pre-Research Disambiguation - Search variations to clarify ambiguous terms
- Phase 9: Verify - Re-research after tests pass to catch memory-based errors
- Adaptive Research - Propose searches based on interview answers, not shotgun approach
- Questions FROM Research - Interview questions generated from discovered parameters
- 7-Turn Re-grounding - Periodic context injection prevents memory dilution
- Research Freshness - 7-day cache with automatic staleness warnings
- Loop-Back Architecture - Every verification loops back if not successful
Quick Start
# Install in your project
npx @hustle-together/api-dev-tools --scope=project
# Start developing an API
/api-create my-endpoint
What This Installs
.claude/
├── commands/ # Slash commands (*.md)
├── hooks/ # Enforcement hooks (*.py)
├── settings.json # Hook configuration
├── api-dev-state.json # Workflow state tracking
└── research/ # Cached research with freshness
scripts/
└── api-dev-tools/ # Manifest generation scripts (*.ts)
src/app/
├── api-test/ # Test UI page (if Next.js)
└── api/test-structure/ # Parser API route (if Next.js)
Slash Commands
Six powerful slash commands for Claude Code:
/api-create [endpoint] - Complete workflow (Disambiguate → Research → Interview → TDD → Verify → Docs)
/api-interview [endpoint] - Dynamic questions GENERATED from research findings
/api-research [library] - Adaptive propose-approve research flow
/api-verify [endpoint] - Re-research and compare implementation to docs
/api-env [endpoint] - Check required API keys and environment setup
/api-status [endpoint] - Track implementation progress and phase completion
Enforcement Hooks (9 total)
Python hooks that provide real programmatic guarantees:
| Hook | Event | Purpose |
|------|-------|---------|
| session-startup.py | SessionStart | Inject current state at session start |
| enforce-external-research.py | UserPromptSubmit | Detect API terms, require research |
| enforce-research.py | PreToolUse | Block writes until research complete |
| enforce-interview.py | PreToolUse | Inject interview decisions on writes |
| verify-implementation.py | PreToolUse | Check test file exists before route |
| track-tool-use.py | PostToolUse | Log research, count turns |
| periodic-reground.py | PostToolUse | Re-inject context every 7 turns |
| verify-after-green.py | PostToolUse | Trigger Phase 9 after tests pass |
| api-workflow-check.py | Stop | Block completion if phases incomplete |
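As a rough illustration of how one of these hooks can enforce the workflow, here is a minimal sketch in the spirit of enforce-research.py (not the shipped script). It assumes Claude Code's hook protocol: the tool call arrives as JSON on stdin, and exiting with status 2 blocks the call, with stderr fed back to Claude. The state-file field names mirror the State File Structure shown later in this README.

```python
# Illustrative sketch of a PreToolUse enforcement hook (not the shipped
# enforce-research.py): block Write/Edit calls until initial research is done.
import json
import sys


def write_allowed(tool_name: str, state: dict) -> bool:
    """Allow non-write tools; allow writes only once research_initial is complete."""
    if tool_name not in ("Write", "Edit"):
        return True
    phase = state.get("phases", {}).get("research_initial", {})
    return phase.get("status") == "complete"


def run_hook() -> None:
    payload = json.load(sys.stdin)  # e.g. {"tool_name": "Write", "tool_input": {...}}
    with open(".claude/api-dev-state.json") as f:
        state = json.load(f)
    if not write_allowed(payload.get("tool_name", ""), state):
        print("Research incomplete: run /api-research first.", file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the tool call
```

The real hooks also track turn counts and inject interview decisions; this shows only the blocking pattern.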
State Tracking
.claude/api-dev-state.json - Persistent workflow state with turn counting
.claude/research/ - Cached research with freshness tracking
.claude/research/index.json - Research catalog with 7-day validity
Manifest Generation Scripts (Programmatic, NO LLM)
Scripts that automatically generate documentation from test files:
| Script | Output | Purpose |
|--------|--------|---------|
| generate-test-manifest.ts | api-tests-manifest.json | Parses Vitest tests → full API manifest |
| extract-parameters.ts | parameter-matrix.json | Extracts Zod params → coverage matrix |
| collect-test-results.ts | test-results.json | Runs tests → results with pass/fail |
Key principle: Tests are the SOURCE OF TRUTH. Scripts parse source files programmatically. NO LLM involvement in manifest generation.
Scripts are triggered automatically after tests pass (Phase 8 → Phase 9) via verify-after-green.py hook.
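To make the "tests are the source of truth, no LLM" idea concrete, here is a toy sketch of the parsing step: pulling describe/it titles out of a Vitest file with a plain regex. The shipped generate-test-manifest.ts is TypeScript and does a fuller structural parse; this Python snippet only illustrates the principle.

```python
# Illustrative sketch: extract test titles from Vitest source programmatically.
# (The real generate-test-manifest.ts does a fuller parse; this shows the idea.)
import re

TEST_TITLE = re.compile(r"""\b(?:describe|it|test)\(\s*['"](.+?)['"]""")


def extract_titles(source: str) -> list[str]:
    """Return every describe/it/test title found in a Vitest source string."""
    return TEST_TITLE.findall(source)


sample = '''
describe("GET /api/brandfetch", () => {
  it("returns json format by default", () => {})
  it("rejects quality above 100", () => {})
})
'''
```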
Complete Phase Flow (12 Phases)
┌─ PHASE 0: DISAMBIGUATION ─────────────────────┐
│ Search 3-5 variations, clarify ambiguity │
│ Loop back if still unclear │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 1: SCOPE CONFIRMATION ─────────────────┐
│ Confirm understanding of endpoint purpose │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 2: INITIAL RESEARCH ───────────────────┐
│ 2-3 targeted searches (Context7, WebSearch) │
│ Present summary, ask to proceed or search more│
│ Loop back if user wants more │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 3: INTERVIEW ──────────────────────────┐
│ Questions GENERATED from research findings │
│ - Enum params → multi-select │
│ - Continuous ranges → test strategy │
│ - Boolean params → enable/disable │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 4: DEEP RESEARCH (Adaptive) ───────────┐
│ PROPOSE searches based on interview answers │
│ User approves/modifies/skips │
│ Not shotgun - targeted to selections │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 5: SCHEMA DESIGN ──────────────────────┐
│ Create Zod schema from research + interview │
│ Loop back if schema wrong │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 6: ENVIRONMENT CHECK ──────────────────┐
│ Verify API keys exist │
│ Loop back if keys missing │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 7: TDD RED ────────────────────────────┐
│ Write failing tests from schema + decisions │
│ Test matrix derived from interview │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 8: TDD GREEN ──────────────────────────┐
│ Minimal implementation to pass tests │
└───────────────────────────────────────────────┘
│
▼
┌─ MANIFEST GENERATION (Automatic) ─────────────┐
│ verify-after-green.py hook triggers: │
│ • generate-test-manifest.ts │
│ • extract-parameters.ts │
│ • collect-test-results.ts │
│ │
│ Output: api-tests-manifest.json (NO LLM) │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 9: VERIFY (NEW) ───────────────────────┐
│ Re-read original documentation │
│ Compare implementation to docs feature-by- │
│ feature. Find gaps. │
│ │
│ Fix gaps? → Loop back to Phase 7 │
│ Skip (intentional)? → Document as omissions │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 10: TDD REFACTOR ──────────────────────┐
│ Clean up code, tests still pass │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 11: DOCUMENTATION ─────────────────────┐
│ Update manifests, OpenAPI, cache research │
└───────────────────────────────────────────────┘
│
▼
┌─ PHASE 12: COMPLETION ────────────────────────┐
│ All phases verified, commit │
└───────────────────────────────────────────────┘
Key Features
Adaptive Research (Not Shotgun)
OLD: Run 20 searches blindly
NEW:
- Run 2-3 initial searches
- Present summary
- PROPOSE additional searches based on context
- User approves/modifies
- Execute only approved searches
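The propose-approve step reduces to a simple partition: only approved searches run, and both outcomes are recorded (the state file's research_deep block tracks proposed, approved, and skipped searches). A minimal sketch, with the function name chosen here for illustration:

```python
# Sketch of the propose-approve step: split proposed searches by user approval
# so only approved ones execute, while skips are still recorded in state.
def partition_searches(
    proposed: list[str], approved: set[str]
) -> tuple[list[str], list[str]]:
    """Return (to_run, skipped) given the user's approvals."""
    to_run = [q for q in proposed if q in approved]
    skipped = [q for q in proposed if q not in approved]
    return to_run, skipped
```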
Questions FROM Research
OLD: Generic template questions
"Which AI provider should this endpoint support?"
NEW: Questions generated from discovered parameters
Based on research, Brandfetch API has 7 parameters:
1. DOMAIN (required) - string
→ No question needed
2. FORMAT: ["json", "svg", "png", "raw"]
Q: Which formats do you need?
3. QUALITY: 1-100 (continuous range)
Q: How should we TEST this?
[ ] All values (100 tests)
[x] Boundary (1, 50, 100)
Phase 9: Verify (Catches Memory Errors)
After tests pass, automatically:
- Re-read original documentation
- Compare feature-by-feature
- Report discrepancies:
│ Feature │ In Docs │ Implemented │ Status │
├───────────────┼─────────┼─────────────┼─────────────│
│ domain param │ ✓ │ ✓ │ ✅ Match │
│ format opts │ 4 │ 3 │ ⚠️ Missing │
│ webhook │ ✓ │ ✗ │ ℹ️ Optional │
Fix gaps? [Y] → Return to Phase 7
Skip? [n] → Document as omissions
7-Turn Re-grounding
Prevents context dilution in long sessions:
- Every 7 turns, hook injects state summary
- Current phase, decisions, research cache location
- Ensures Claude stays aware of workflow
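The trigger condition behind periodic-reground.py is just a modular check on the turn counter kept in the state file. An illustrative sketch (the real hook also reads and rewrites .claude/api-dev-state.json):

```python
# Sketch of the 7-turn re-grounding trigger used by periodic-reground.py.
def should_reground(turn_count: int, interval: int = 7) -> bool:
    """Re-inject the state summary every `interval` turns."""
    return turn_count > 0 and turn_count % interval == 0
```

This matches the reground_history in the state-file example below, which records injections at turns 7 and 14.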
Research Freshness
Research is cached with 7-day validity:
.claude/research/
├── brandfetch/
│ ├── 2025-12-08_initial.md
│ └── CURRENT.md
└── index.json ← Tracks freshness
If research is >7 days old:
⚠️ Research for "brandfetch" is 15 days old.
Re-research before using? [Y/n]
Installation
cd your-project
npx @hustle-together/api-dev-tools --scope=project
Team-Wide Auto-Installation
Add to package.json:
{
"scripts": {
"postinstall": "npx @hustle-together/api-dev-tools --scope=project"
}
}
Command Reference
/api-create [endpoint-name]
Complete 12-phase workflow. See commands/api-create.md.
/api-interview [endpoint-name]
Dynamic interview with questions FROM research. See commands/api-interview.md.
/api-research [library-name]
Adaptive propose-approve research. See commands/api-research.md.
/api-verify [endpoint-name]
Manual Phase 9 verification. See commands/api-verify.md.
/api-env [endpoint-name]
Environment and API key check. See commands/api-env.md.
/api-status [endpoint-name]
Progress tracking. See commands/api-status.md.
State File Structure (v3.0)
{
"version": "3.0.0",
"endpoint": "brandfetch",
"turn_count": 23,
"phases": {
"disambiguation": { "status": "complete" },
"scope": { "status": "complete" },
"research_initial": { "status": "complete", "sources": [...] },
"interview": {
"status": "complete",
"decisions": {
"format": ["json", "svg", "png"],
"quality_testing": "boundary"
}
},
"research_deep": {
"status": "complete",
"proposed_searches": [...],
"approved_searches": [...],
"skipped_searches": [...]
},
"schema_creation": { "status": "complete" },
"environment_check": { "status": "complete" },
"tdd_red": { "status": "complete", "test_count": 23 },
"tdd_green": { "status": "complete" },
"verify": {
"status": "complete",
"gaps_found": 2,
"gaps_fixed": 2,
"intentional_omissions": ["webhook support"]
},
"tdd_refactor": { "status": "complete" },
"documentation": { "status": "complete" }
},
"manifest_generation": {
"last_run": "2025-12-09T10:30:00.000Z",
"manifest_generated": true,
"parameters_extracted": true,
"test_results_collected": true,
"output_files": {
"manifest": "src/app/api-test/api-tests-manifest.json",
"parameters": "src/app/api-test/parameter-matrix.json",
"results": "src/app/api-test/test-results.json"
}
},
"reground_history": [
{ "turn": 7, "phase": "interview" },
{ "turn": 14, "phase": "tdd_red" }
]
}
Hook Architecture
┌──────────────────────────────────────────────────────────┐
│ SESSION START │
│ → session-startup.py injects current state │
└──────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ USER PROMPT │
│ → enforce-external-research.py detects API terms │
└──────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ RESEARCH TOOLS (WebSearch, Context7) │
│ → track-tool-use.py logs activity, counts turns │
│ → periodic-reground.py injects summary every 7 turns │
└──────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ WRITE/EDIT TOOLS │
│ → enforce-research.py blocks if no research │
│ → enforce-interview.py injects decisions │
│ → verify-implementation.py checks test file exists │
└──────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ TEST COMMANDS (pnpm test) │
│ → verify-after-green.py triggers Phase 9 after pass │
└──────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────┐
│ STOP │
│ → api-workflow-check.py blocks if phases incomplete │
└──────────────────────────────────────────────────────────┘
Manual Script Usage
While scripts run automatically after tests pass, you can also run them manually:
# Generate manifest from test files
npx tsx scripts/api-dev-tools/generate-test-manifest.ts
# Extract parameters and calculate coverage
npx tsx scripts/api-dev-tools/extract-parameters.ts
# Collect test results (runs Vitest)
npx tsx scripts/api-dev-tools/collect-test-results.ts
Output files are written to src/app/api-test/:
api-tests-manifest.json - Complete API documentation
parameter-matrix.json - Parameter coverage analysis
test-results.json - Latest test run results
Requirements
- Node.js 14.0.0 or higher
- Python 3 (for enforcement hooks)
- Claude Code (CLI tool for Claude)
- Project structure that supports .claude/commands/
MCP Servers (Auto-installed)
Context7
- Live documentation from library source code
- Current API parameters (not training data)
GitHub
- Issue management
- Pull request creation
Set GITHUB_PERSONAL_ACCESS_TOKEN for GitHub MCP.
Acknowledgments
- @wbern/claude-instructions - TDD commands
- Anthropic Interviewer - Interview methodology
- Context7 - Live documentation lookup
License
MIT License
Links
- Repository: https://github.com/hustle-together/api-dev-tools
- NPM: https://www.npmjs.com/package/@hustle-together/api-dev-tools
Made with care for API developers using Claude Code
"Disambiguate, research, interview, verify, repeat"
