# @llms-sdk/prompt

Stable prompt template system for LLMS SDK (v2.2.0)
Convention-based prompt template system with multi-step workflows, parking lot processing, and full SDK integration.
Transform your AI workflows from one-off scripts to maintainable, discoverable prompt templates that scale from rapid experimentation to production deployment.
## ✨ Key Features

- 🎯 **Convention-Based Discovery** - Folder name = command name
- 🔄 **Parking Lot Workflow** - Chronological iteration through prompts
- 📊 **Multi-Step Sequences** - Variable carryforward between phases
- 🧠 **Complexity Analysis** - AI-powered breakdown suggestions
- 🛠️ **Development Mode** - Full @llms-sdk/* integration
- 📦 **Production Ready** - Stable CLI for deployment
## 🚀 Quick Start

### Installation

```bash
npm install @llms-sdk/prompt
```

### Your First Prompt

```bash
# List available prompts
npx @llms-sdk/prompt list

# Execute a prompt interactively
npx @llms-sdk/prompt quick-security-check

# Use variables directly
npx @llms-sdk/prompt security-audit --variables '{"target_system":"API","audit_scope":"full"}'
```

### Create Your Own

```bash
# Single-file prompt
echo "Analyze {target} for {criteria}" > prompts/my-analysis.md
npx @llms-sdk/prompt my-analysis

# Multi-step sequence
mkdir -p prompts/my-workflow
echo "Phase 1: {input}" > prompts/my-workflow/01-discovery.md
echo "Phase 2: {step_1_output}" > prompts/my-workflow/02-analysis.md
npx @llms-sdk/prompt my-workflow
```

## 🎭 Three Discovery Patterns
```mermaid
graph LR
    A[prompts/] --> B[📄 Single Files]
    A --> C[📁 Folder Structure]
    A --> D[🔢 Multi-Step Sequence]
    B --> B1[quick-check.md]
    C --> C1[build-analysis/01-prompt.md]
    C --> C2[build-analysis/02-runner.ts]
    D --> D1[security-audit/01-recon.md]
    D --> D2[security-audit/02-scan.md]
    D --> D3[security-audit/03-report.md]
    style B fill:#e1f5fe
    style C fill:#f3e5f5
    style D fill:#fff3e0
```

## 📚 Complete Documentation
- 📖 **Getting Started Guide** - Step-by-step tutorial from installation to advanced workflows
- 📋 **User Guide** - Comprehensive feature documentation with examples
- 🏗️ **Architecture Guide** - System design, discovery patterns, and execution flows
- 🎯 **Examples Collection** - Real-world use cases and patterns
- 📚 **API Reference** - Complete CLI commands and options
## 🔄 Workflow Examples

### Parking Lot Processing

```bash
# Process all single .md files chronologically (oldest first)
npx @llms-sdk/prompt parking-lot

# Filter by keywords and age
npx @llms-sdk/prompt parking-lot --include="security,audit" --max-age=30
```

### Multi-Step Sequences

```bash
# Execute a 5-phase security audit with variable carryforward
npx @llms-sdk/prompt security-audit
# → 01-reconnaissance → 02-vulnerability-scan → 03-exploit-analysis → 04-mitigation → 05-report
```

### Development Mode

```bash
# Full SDK integration for rapid iteration
tsx scripts/prompt-dev.ts build-failure-analysis

# Template-only processing
tsx scripts/prompt-dev.ts security-audit --template-only
```

## 🎨 Template Variables
Templates use `{variable}` syntax with automatic discovery:

```markdown
Analyze the **{target_system}** for security vulnerabilities.
Scope: {audit_scope}
Timeline: {completion_date}
Previous findings: {step_1_output}
```

The system automatically:

- ✅ Extracts all `{variables}` from templates
- ✅ Prompts interactively for missing values
- ✅ Validates that all variables are provided
- ✅ Carries forward outputs between sequence steps
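As a rough sketch, the extract-and-render mechanics described above could look like the following; `extractVariables` and `renderTemplate` are illustrative names, not the package's actual API:

```typescript
/** Extract the unique {variable} placeholder names from a template. */
function extractVariables(template: string): string[] {
  const names = new Set<string>();
  for (const match of template.matchAll(/\{(\w+)\}/g)) {
    names.add(match[1]);
  }
  return [...names];
}

/** Render a template, failing loudly if any variable is missing. */
function renderTemplate(template: string, variables: Record<string, string>): string {
  const missing = extractVariables(template).filter((name) => !(name in variables));
  if (missing.length > 0) {
    throw new Error(`Missing variables: ${missing.join(", ")}`);
  }
  return template.replace(/\{(\w+)\}/g, (_whole, name) => variables[name]);
}

const template = "Analyze the {target_system} for vulnerabilities. Scope: {audit_scope}";
// Finds target_system and audit_scope, then fills them in:
console.log(extractVariables(template));
console.log(renderTemplate(template, { target_system: "API", audit_scope: "full" }));
```

In a real run, a missing variable would trigger the interactive prompt instead of an error.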
## 🛠️ Development Integration

Create `02-local-runner.ts` for full SDK access:

```typescript
import { createClient } from "@llms-sdk/core";
import { createBashTool } from "@llms-sdk/toolkit";

export async function run(variables: Record<string, string>, renderedPrompt: string) {
  const client = createClient("anthropic");
  const result = await client.ask({
    messages: [{ role: "user", content: renderedPrompt }],
    tools: [createBashTool()],
  });
  // Execute tools, analyze results, iterate rapidly
  console.log("Result:", result.content);
}
```

## 🎯 Use Cases
- 🔍 Code Analysis - Build failures, security audits, performance reviews
- 📋 Documentation - API docs, architecture reviews, compliance reports
- 🚀 DevOps Workflows - Deployment checks, monitoring setup, incident response
- 🧪 Research - Market analysis, competitive research, technical evaluations
- 📊 Data Processing - Log analysis, metrics review, trend identification
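For illustration, the step-to-step variable carryforward used by sequences like `security-audit` could be modeled as a simple loop; `runSequence` and its `StepFn` callback are hypothetical names, not part of the package:

```typescript
type StepFn = (prompt: string) => string;

/** Run templates in order, exposing each step's output as {step_N_output}. */
function runSequence(
  steps: string[],
  variables: Record<string, string>,
  runStep: StepFn,
): string[] {
  const vars = { ...variables };
  const outputs: string[] = [];
  steps.forEach((template, index) => {
    // Fill in known variables; leave unknown placeholders untouched
    const prompt = template.replace(/\{(\w+)\}/g, (whole, name) => vars[name] ?? whole);
    const output = runStep(prompt);
    outputs.push(output);
    vars[`step_${index + 1}_output`] = output; // carried into later steps
  });
  return outputs;
}

// A toy "model" that just echoes its prompt; step 2 sees step 1's output:
const outputs = runSequence(
  ["Phase 1: {input}", "Phase 2: {step_1_output}"],
  { input: "logs" },
  (prompt) => `done(${prompt})`,
);
console.log(outputs);
```

The real CLI substitutes a model call for `runStep`, but the carryforward bookkeeping is the essence of the pattern.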
## 🤝 Contributing

- **Follow naming conventions** - Folder name = command name
- **Test both CLIs** - Stable (`npx @llms-sdk/prompt`) and development (`tsx scripts/prompt-dev.ts`)
- **Add comprehensive metadata** - Use `meta.json` for documentation
- **Include development runners** - Add `02-local-runner.ts` for SDK integration
- **Document variables clearly** - Explain expected values and formats
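For example, a prompt's `meta.json` might document its variables like this; the field names here are a hypothetical sketch, since this README does not spell out the actual schema:

```json
{
  "name": "security-audit",
  "description": "Five-phase security audit with variable carryforward",
  "variables": {
    "target_system": "System or service under review",
    "audit_scope": "Breadth of the audit, e.g. full or api-only"
  }
}
```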
## 📄 License

MIT License - see LICENSE for details.
## 🔗 Related Packages
- @llms-sdk/core - Multi-provider LLM client
- @llms-sdk/toolkit - Tool collection and MCP integration
- @llms-sdk/terminal - Terminal UI framework
## Testing prp_runner.py and prp_runner.ts

To verify that both the Python and TypeScript PRP runners behave identically, run the following from the root of this package:

```bash
bash project/PRPs/scripts/test_prp_runner.sh
```

Prerequisites:

- Python 3
- Node.js with npx and ts-node
- jq (for JSON comparison)

What it does:

- Creates a test PRP file and a dummy model
- Runs both runners and compares their outputs
- Prints `✅ Outputs match!` if they are identical (ignoring `session_id`)

All test artifacts are cleaned up automatically.
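The "identical ignoring `session_id`" comparison could be approximated in TypeScript as below; the script itself uses jq, so this is only an illustrative sketch, and it strips `session_id` at the top level only:

```typescript
/** Compare two JSON runner outputs after dropping the top-level session_id. */
function outputsMatch(aJson: string, bJson: string): boolean {
  const strip = (raw: string): unknown => {
    const value = JSON.parse(raw) as Record<string, unknown>;
    delete value.session_id; // differs per run, so ignore it
    return value;
  };
  return JSON.stringify(strip(aJson)) === JSON.stringify(strip(bJson));
}

const pyOut = '{"session_id": "abc", "result": "ok"}';
const tsOut = '{"session_id": "xyz", "result": "ok"}';
console.log(outputsMatch(pyOut, tsOut) ? "✅ Outputs match!" : "❌ Outputs differ");
```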
Ready to transform your AI workflows? Start with the Getting Started Guide! 🚀
