prd-parser v0.4.1
prd-parser
🚧 Active Development - This project is new and actively evolving. Expect breaking changes. Contributions and feedback welcome!
Turn your PRD into a ready-to-work beads project in one command.
prd-parser uses LLM guardrails to transform Product Requirements Documents into a hierarchical issue structure (Epics → Tasks → Subtasks) and creates the issues directly in beads - the git-backed issue tracker for AI-driven development.
# One command: PRD → structured beads issues
prd-parser parse ./docs/prd.md

The 0→1 Problem
Starting a new project is exciting. You have a vision, maybe a PRD, and you're ready to build. But then:
The breakdown problem - You need to turn that PRD into actionable tasks. This is tedious and error-prone. You lose context as you go.
The context problem - By the time you're implementing subtask #47, you've forgotten why it matters. What was the business goal? Who are the users? What constraints apply?
The handoff problem - If you're using AI to help implement, it needs that context too. Copy-pasting from your PRD for every task doesn't scale.
prd-parser + beads solves all three. Write your PRD once, run one command, and get a complete project structure with context propagated to every level - ready for you or Claude to start implementing.
Why prd-parser + beads?
For greenfield projects, this is the fastest path from idea to structured, trackable work:
| Without prd-parser | With prd-parser |
|-------------------|-----------------|
| Read PRD, manually create issues | One command |
| Forget context by subtask #10 | Context propagated everywhere |
| Testing requirements? Maybe later | Testing enforced at every level |
| Dependencies tracked in your head | Dependencies explicit and tracked |
| Copy-paste context for AI helpers | AI has full context in every issue |
How it works: prd-parser uses Go struct guardrails to force the LLM to output valid, hierarchical JSON with:
- Context propagation - Business purpose flows from PRD → Epic → Task → Subtask
- Testing at every level - Unit, integration, type, and E2E requirements enforced
- Dependencies tracked - Issues know what blocks them
- Direct beads integration - Issues created with one command, ready to work
Getting Started: Your First PRD → beads Project
1. Install prd-parser
Via npm/bun (easiest):
npm install -g prd-parser
# or
bun install -g prd-parser
# or
npx prd-parser parse ./docs/prd.md # run without installing

Via Go:
go install github.com/dhabedank/prd-parser@latest

From source:
cd /tmp && git clone https://github.com/dhabedank/prd-parser.git && cd prd-parser && make install

If you see "Make sure ~/go/bin is in your PATH", run:
echo 'export PATH="$HOME/go/bin:$PATH"' >> ~/.zshrc && source ~/.zshrc

Now go back to your project.
2. Create a new project with beads
mkdir my-project && cd my-project
git init
bd init --prefix my-project

3. Write your PRD
Create docs/prd.md with your product requirements. Include:
- What you're building and why
- Who the target users are
- Technical constraints
- Key features
Example:
# Task Management CLI
## Overview
A fast, developer-friendly command-line task manager for teams
who prefer terminal workflows.
## Target Users
Software developers who live in the terminal and want sub-100ms
task operations without context-switching to a GUI.
## Core Features
1. Create tasks with title, description, priority
2. List and filter tasks by status/priority
3. Update task status (todo → in-progress → done)
4. Local JSON storage for offline-first operation
## Technical Constraints
- Sub-100ms response for all operations
- Single binary, no runtime dependencies
- Config stored in ~/.taskman/

4. Parse your PRD into beads issues
prd-parser parse docs/prd.md

Full context mode is enabled by default - every generation stage has access to your original PRD, producing the most coherent results.
That's it. Your PRD is now a structured beads project with readable hierarchical IDs:
$ bd list
○ my-project-e1 [P1] [epic] - Core Task Management System
○ my-project-e1t1 [P0] [task] - Implement Task Data Model
○ my-project-e1t1s1 [P2] [task] - Define Task struct with JSON tags
○ my-project-e1t1s2 [P2] [task] - Implement JSON file storage
○ my-project-e1t2 [P1] [task] - Build CLI Interface
○ my-project-e2 [P1] [epic] - User Authentication
...

IDs follow a logical hierarchy: e1 (epic 1) → e1t1 (task 1) → e1t1s1 (subtask 1). Use bd show <id> to see parent/children relationships.
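The naming convention above can be illustrated with a small sketch that derives an issue's parent ID by dropping the trailing t<N>/s<N> segment. This is purely illustrative of the convention (it is not part of prd-parser); `bd show <id>` is the authoritative way to inspect relationships.

```go
package main

import (
	"fmt"
	"regexp"
)

// trailingSegment matches a final task (t<N>) or subtask (s<N>) segment.
var trailingSegment = regexp.MustCompile(`[ts]\d+$`)

// parentID drops the last segment of a hierarchical beads ID, e.g.
// my-project-e1t1s1 → my-project-e1t1 → my-project-e1.
func parentID(id string) string {
	parent := trailingSegment.ReplaceAllString(id, "")
	if parent == id {
		return "" // epics (e<N>) are top-level and have no parent issue
	}
	return parent
}

func main() {
	fmt.Println(parentID("my-project-e1t1s1")) // my-project-e1t1
	fmt.Println(parentID("my-project-e1t1"))   // my-project-e1
	fmt.Println(parentID("my-project-e1"))     // (empty - top-level epic)
}
```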
5. Start working with beads + Claude
# See what's ready to work on
bd ready
# Pick an issue and let Claude implement it
bd show my-project-e1t1 # Shows full context, testing requirements
# Or let Claude pick and work autonomously
# (beads integrates with Claude Code via the beads skill)

What prd-parser Creates
Hierarchical Structure
Epic: Core Task Management System
├── Task: Implement Task Data Model
│ ├── Subtask: Define Task struct with JSON tags
│ └── Subtask: Implement JSON file storage
├── Task: Build CLI Interface
│ ├── Subtask: Implement create command
│ └── Subtask: Implement list command
└── ...

Context Propagation
Every issue includes propagated context so implementers understand WHY:
**Context:**
- **Business Context:** Developers need fast, frictionless task management
- **Target Users:** Terminal-first developers who want <100ms operations
- **Success Metrics:** All CRUD operations complete in under 100ms

Testing Requirements
Every issue specifies what testing is needed:
**Testing Requirements:**
- **Unit Tests:** Task struct validation, JSON marshaling/unmarshaling
- **Integration Tests:** Full storage layer integration, concurrent access
- **Type Tests:** Go struct tags validation, JSON schema compliance

Priority Evaluation
The LLM evaluates each task and assigns appropriate priority (not just a default):
| Priority | When to Use |
|----------|-------------|
| P0 (critical) | Blocks all work, security issues, launch blockers |
| P1 (high) | Core functionality, enables other tasks |
| P2 (medium) | Important features, standard work |
| P3 (low) | Nice-to-haves, polish |
| P4 (very-low) | Future considerations, can defer indefinitely |
Foundation/setup work gets higher priority. Polish/UI tweaks get lower priority.
Labels
Issues are automatically labeled based on:
- Layer: frontend, backend, api, database, infra
- Domain: auth, payments, search, notifications
- Skill: react, go, sql, typescript
- Type: setup, feature, refactor, testing
Labels are extracted from the PRD's tech stack and feature descriptions.
Design Notes & Acceptance Criteria
- Epics include acceptance criteria for when the epic is complete
- Tasks include design notes for technical approach
Time Estimates
All items include time estimates that flow to beads:
- Epics: estimated days
- Tasks: estimated hours
- Subtasks: estimated minutes
Dependencies
Issues are linked with proper blocking relationships:
- Tasks depend on setup tasks
- Subtasks depend on parent task completion
- Cross-epic dependencies are tracked
Configuration
Setup Wizard
The easiest way to configure prd-parser is with the interactive setup wizard:
prd-parser setup

The wizard guides you through selecting models for each parsing stage:
- Epic Model (Stage 1): Generates epics from your PRD
- Task Model (Stage 2): Generates tasks for each epic
- Subtask Model (Stage 3): Generates subtasks for each task
Configuration is saved to ~/.prd-parser.yaml.
To reset to defaults:
prd-parser setup --reset

Per-Stage Model Configuration
Use different models for different stages to optimize for cost vs. quality:
# Use Opus for epics (complex), Sonnet for tasks, Haiku for subtasks (fast)
prd-parser parse docs/prd.md \
--epic-model claude-opus-4-20250514 \
--task-model claude-sonnet-4-20250514 \
--subtask-model claude-3-5-haiku-20241022

Or configure in ~/.prd-parser.yaml:
epic_model: claude-opus-4-20250514
task_model: claude-sonnet-4-20250514
subtask_model: claude-3-5-haiku-20241022

Command-line flags always override config file settings.
Parse Options
# Basic parse (full context mode is on by default)
prd-parser parse ./prd.md
# Control structure size
prd-parser parse ./prd.md --epics 5 --tasks 8 --subtasks 4
# Set default priority
prd-parser parse ./prd.md --priority high
# Choose testing level
prd-parser parse ./prd.md --testing comprehensive # or minimal, standard
# Preview without creating (dry run)
prd-parser parse ./prd.md --dry-run
# Save/resume from checkpoint (useful for large PRDs)
prd-parser parse ./prd.md --save-json checkpoint.json
prd-parser parse --from-json checkpoint.json
# Disable full context mode (not recommended)
prd-parser parse ./prd.md --full-context=false

Full Options
| Flag | Short | Default | Description |
|------|-------|---------|-------------|
| --epics | -e | 3 | Target number of epics |
| --tasks | -t | 5 | Target tasks per epic |
| --subtasks | -s | 4 | Target subtasks per task |
| --priority | -p | medium | Default priority (critical/high/medium/low) |
| --testing | | comprehensive | Testing level (minimal/standard/comprehensive) |
| --llm | -l | auto | LLM provider (auto/claude-cli/codex-cli/anthropic-api) |
| --model | -m | | Model to use (provider-specific) |
| --epic-model | | | Model for epic generation (Stage 1) |
| --task-model | | | Model for task generation (Stage 2) |
| --subtask-model | | | Model for subtask generation (Stage 3) |
| --no-progress | | false | Disable TUI progress display |
| --multi-stage | | false | Force multi-stage parsing |
| --single-shot | | false | Force single-shot parsing |
| --smart-threshold | | 300 | Line count for auto multi-stage (0 to disable) |
| --full-context | | true | Pass PRD to all stages (use =false to disable) |
| --validate | | false | Run validation pass to check for gaps |
| --no-review | | false | Disable automatic LLM review pass (review ON by default) |
| --interactive | | false | Human-in-the-loop mode (review epics before task generation) |
| --output | -o | beads | Output adapter (beads/json) |
| --output-path | | | Output path for JSON adapter |
| --dry-run | | false | Preview without creating items |
| --from-json | | | Resume from saved JSON checkpoint (skip LLM) |
| --save-json | | | Save generated JSON to file (for resume) |
| --config | | | Config file path (default: .prd-parser.yaml) |
Smart Parsing (Default Behavior)
prd-parser automatically chooses the best parsing strategy based on PRD size:
- Small PRDs (< 300 lines): Single-shot parsing (faster)
- Large PRDs (≥ 300 lines): Multi-stage parallel parsing (more reliable)
Override with --single-shot or --multi-stage flags, or adjust threshold with --smart-threshold.
Full Context Mode (Default)
Full context mode is enabled by default. Every stage gets the original PRD as its "north star":
prd-parser parse docs/prd.md # full context is on by default

To disable (not recommended):
prd-parser parse docs/prd.md --full-context=false

Why this matters:
| Mode | Stage 1 (Epics) | Stage 2 (Tasks) | Stage 3 (Subtasks) |
|------|----------------|-----------------|-------------------|
| --full-context=false | Full PRD | Epic summary only | Task summary only |
| Full context (default) | Full PRD | Epic + PRD | Task + PRD |

With full context, each stage:
- Stays grounded in original requirements
- Doesn't invent features not in the PRD
- Doesn't miss requirements that ARE in the PRD
- Produces more focused, coherent output
Results comparison (same PRD):
| Metric | Without full context | With full context |
|--------|----------------------|-------------------|
| Epics | 11 | 8 |
| Tasks | 65 | 49 |
| Subtasks | 264 | 202 |
Fewer items with full context means less redundancy and a tighter focus on the actual requirements.
Common Flag Combinations
Standard parse (full context on by default):
prd-parser parse docs/prd.md

Every stage sees the PRD. Best for accuracy and coherence.
Preview before committing:
prd-parser parse docs/prd.md --dry-run

See what would be created without actually creating issues.
Save checkpoint for manual review:
prd-parser parse docs/prd.md --save-json draft.json --dry-run
# Edit draft.json manually
prd-parser parse --from-json draft.json

Human-in-the-loop for large/complex PRDs:
prd-parser parse docs/prd.md --interactive

Review and edit epics before task generation.
Quick parse for small PRDs:
prd-parser parse docs/prd.md --single-shot

Faster single LLM call. Works well for PRDs under 300 lines.
Cost-optimized for large PRDs:
prd-parser parse docs/prd.md \
--epic-model claude-opus-4-20250514 \
--task-model claude-sonnet-4-20250514 \
--subtask-model claude-3-5-haiku-20241022

Use Opus for epics (complex analysis), Sonnet for tasks, Haiku for subtasks (fast, cost-effective).
Maximum validation:
prd-parser parse docs/prd.md --validate

Full context + validation pass to catch gaps.
Debug/iterate on structure:
prd-parser parse docs/prd.md --save-json iter1.json --dry-run
# Review iter1.json, note issues
prd-parser parse docs/prd.md --save-json iter2.json --dry-run
# Compare, pick the better one
prd-parser parse --from-json iter2.json

Validation Pass
Use --validate to run a final review that checks for gaps in the generated plan:
prd-parser parse ./prd.md --validate

This asks the LLM to review the complete plan and identify:
- Missing setup/initialization tasks
- Backend without UI to test it
- Dependencies not installed
- Acceptance criteria that can't be verified
- Tasks in wrong order
Example output:
✓ Plan validation passed - no gaps found

or
⚠ Plan validation found gaps:
• No task to install dependencies after adding @clerk/nextjs
• Auth API built but no login page to test it

Review Pass (Default)
By default, prd-parser runs an automatic review pass after generation that checks for and fixes structural issues:
- Missing "Project Foundation" epic as Epic 1 (setup should come first)
- Feature epics not depending on Epic 1 (all work depends on setup)
- Missing setup tasks in foundation epic
- Incorrect dependency chains (setup → backend → frontend)
# Review is on by default
prd-parser parse ./prd.md
# See: "Reviewing structure..."
# See: "✓ Review fixed issues: Added Project Foundation epic..."
# Or: "✓ Review passed - no changes needed"
# Disable if you want raw output
prd-parser parse ./prd.md --no-review

Interactive Mode
For human-in-the-loop review during generation:
prd-parser parse docs/prd.md --interactive

In interactive mode, you'll review epics after Stage 1 before task generation continues:
=== Stage 1 Complete: 4 Epics Generated ===
Proposed Epics:
1. Project Foundation (depends on: none)
Initialize Next.js, Convex, Clerk setup
2. Voice Infrastructure (depends on: 1)
Telnyx phone system integration
3. AI Conversations (depends on: 1)
LFM 2.5 integration for call handling
4. CRM Integration (depends on: 1)
Follow Up Boss sync
[Enter] continue, [e] edit in $EDITOR, [r] regenerate, [a] add epic:

Options:
- Enter - Accept epics and continue to task generation
- e - Open epics in your $EDITOR for manual editing
- r - Regenerate epics from scratch
- a - Add a new epic
Interactive mode skips the automatic review pass since you are the reviewer.
Checkpoint Workflow (Manual Review)
For full manual control over the generated structure:
Step 1: Generate Draft
prd-parser parse docs/prd.md --save-json draft.json --dry-run

Step 2: Review and Edit
Open draft.json in your editor. You can:
- Reorder epics (change array order)
- Add/remove epics, tasks, or subtasks
- Fix dependencies
- Adjust priorities and estimates
Step 3: Create from Edited Draft
prd-parser parse --from-json draft.json

The PRD file argument is optional when using --from-json.
Auto-Recovery: If creation fails mid-way, prd-parser saves a checkpoint to /tmp/prd-parser-checkpoint.json. Retry with:
prd-parser parse --from-json /tmp/prd-parser-checkpoint.json

Refining Issues After Generation
After parsing, you may find issues that are misaligned with your product vision. The refine command lets you correct an issue and automatically propagate fixes to related issues.
Basic Usage
# Correct an epic that went off-track
prd-parser refine test-e6 --feedback "RealHerd is voice-first lead intelligence, not a CRM with pipeline management"
# Preview changes without applying
prd-parser refine test-e3t2 --feedback "Should use OpenRouter, not direct OpenAI" --dry-run
# Include PRD for better context
prd-parser refine test-e6 --feedback "Focus on conversation insights" --prd docs/prd.md

How It Works
- Analyze: LLM identifies wrong concepts in the target issue (e.g., "pipeline tracking", "deal stages")
- Correct: Generates corrected version with right concepts ("conversation insights", "activity visibility")
- Scan: Searches ALL issues (across all epics) for the same wrong concepts
- Propagate: Regenerates affected issues with correction context
- Update: Applies changes via bd update
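The scan step (step 3) can be pictured as a simple text match over every issue. This is a conceptual sketch only - the real scan is LLM-driven and the Issue type here is a pared-down stand-in for a beads issue:

```go
package main

import (
	"fmt"
	"strings"
)

// Issue is a minimal stand-in for a beads issue (hypothetical fields).
type Issue struct {
	ID, Title, Description string
}

// scanForConcepts returns the IDs of issues whose title or description
// mentions any of the wrong concepts identified in the analyze step.
func scanForConcepts(issues []Issue, wrongConcepts []string) []string {
	var hits []string
	for _, is := range issues {
		text := strings.ToLower(is.Title + " " + is.Description)
		for _, c := range wrongConcepts {
			if strings.Contains(text, strings.ToLower(c)) {
				hits = append(hits, is.ID)
				break // one match is enough to flag the issue
			}
		}
	}
	return hits
}

func main() {
	issues := []Issue{
		{"test-e6t3", "Pipeline Overview Component", "Show deal stages"},
		{"test-e1t1", "Define Task struct", "JSON tags"},
		{"test-e3t5", "CRM Pipeline Sync", "Sync pipeline state"},
	}
	fmt.Println(scanForConcepts(issues, []string{"pipeline", "deal stages"}))
	// [test-e6t3 test-e3t5]
}
```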
Options
| Flag | Default | Description |
|------|---------|-------------|
| --feedback, -f | required | What's wrong and how to fix it |
| --cascade | true | Also update children of target issue |
| --scan-all | true | Scan all issues for same misalignment |
| --dry-run | false | Preview changes without applying |
| --prd | | Path to PRD file for context |
Example Output
$ prd-parser refine test-e6 --feedback "RealHerd is voice-first, not CRM"
Loading issue test-e6...
Found: Brokerage Dashboard & Reporting
Analyzing misalignment...
Identified misalignment:
- pipeline tracking
- deal management
- contract stages
Corrected version:
Title: Agent Activity Dashboard & Conversation Insights
Description: Real-time visibility into agent conversations...
Scanning for affected issues...
Found 3 children
Found 2 issues with similar misalignment
--- Changes to apply ---
Target: test-e6
+ test-e6t3: Pipeline Overview Component
+ test-e6t4: Deal Tracking Interface
+ test-e3t5: CRM Pipeline Sync
Applying corrections...
✓ Updated test-e6
✓ Updated test-e6t3
✓ Updated test-e6t4
✓ Updated test-e3t5
--- Summary ---
Updated: 1 target + 4 related issues

LLM Providers
Zero-Config (Recommended)
prd-parser auto-detects installed LLM CLIs - no API keys needed:
# If you have Claude Code installed, it just works
prd-parser parse ./prd.md
# If you have Codex installed, it just works
prd-parser parse ./prd.md

Detection Priority
- Claude Code CLI (claude) - Preferred, already authenticated
- Codex CLI (codex) - Already authenticated
- Anthropic API - Fallback if ANTHROPIC_API_KEY is set
Explicit Selection
# Force specific provider
prd-parser parse ./prd.md --llm claude-cli
prd-parser parse ./prd.md --llm codex-cli
prd-parser parse ./prd.md --llm anthropic-api
# Specify model
prd-parser parse ./prd.md --llm claude-cli --model claude-sonnet-4-20250514
prd-parser parse ./prd.md --llm codex-cli --model o3

Output Options
beads (Default)
Creates issues directly in the current beads-initialized project:
bd init --prefix myproject
prd-parser parse ./prd.md --output beads
bd list # See created issues

JSON
Export to JSON for inspection or custom processing:
# Write to file
prd-parser parse ./prd.md --output json --output-path tasks.json
# Write to stdout (pipe to other tools)
prd-parser parse ./prd.md --output json | jq '.epics[0].tasks'

The Guardrails System
prd-parser isn't just a prompt wrapper. It uses Go structs as guardrails to enforce valid output:
type Epic struct {
    TempID             string              `json:"temp_id"`
    Title              string              `json:"title"`
    Description        string              `json:"description"`
    Context            interface{}         `json:"context"`
    AcceptanceCriteria []string            `json:"acceptance_criteria"`
    Testing            TestingRequirements `json:"testing"`
    Tasks              []Task              `json:"tasks"`
    DependsOn          []string            `json:"depends_on"`
}

type TestingRequirements struct {
    UnitTests        *string `json:"unit_tests,omitempty"`
    IntegrationTests *string `json:"integration_tests,omitempty"`
    TypeTests        *string `json:"type_tests,omitempty"`
    E2ETests         *string `json:"e2e_tests,omitempty"`
}

The LLM MUST produce output that matches these structs. Missing required fields? Validation fails. Wrong types? Parse fails. This ensures every PRD produces consistent, complete issue structures.
Architecture
prd-parser/
├── cmd/ # CLI commands (Cobra)
│ └── parse.go # Main parse command
├── internal/
│ ├── core/ # Core types and orchestration
│ │ ├── types.go # Hierarchical structs (guardrails)
│ │ ├── prompts.go # Single-shot system/user prompts
│ │ ├── stage_prompts.go # Multi-stage prompts (Stages 1-3)
│ │ ├── parser.go # Single-shot LLM → Output orchestration
│ │ ├── multistage.go # Multi-stage parallel parser
│ │ └── validate.go # Validation pass logic
│ ├── llm/ # LLM adapters
│ │ ├── adapter.go # Interface definition
│ │ ├── claude_cli.go # Claude Code CLI adapter
│ │ ├── codex_cli.go # Codex CLI adapter
│ │ ├── anthropic_api.go # API fallback
│ │ ├── detector.go # Auto-detection logic
│ │ └── multistage_generator.go # Multi-stage LLM calls
│ └── output/ # Output adapters
│ ├── adapter.go # Interface definition
│ ├── beads.go # beads issue tracker
│ └── json.go # JSON file output
└── tests/ # Unit tests

Adding Custom Adapters
Custom LLM Adapter
type Adapter interface {
    Name() string
    IsAvailable() bool
    Generate(ctx context.Context, systemPrompt, userPrompt string) (*core.ParseResponse, error)
}

Custom Output Adapter
type Adapter interface {
    Name() string
    IsAvailable() (bool, error)
    CreateItems(response *core.ParseResponse, config Config) (*CreateResult, error)
}

Related Projects
- beads - Git-backed issue tracker for AI-driven development
- Claude Code - Claude's official CLI with beads integration
License
MIT
