
@dogtags27/prompt-lint v0.1.0

Developer-first CLI tool for versioning, testing, and CI/CD safety for LLM prompts

🚀 PromptLint

Stop treating prompts like strings. Start treating them like production code.

PromptLint is the developer-first CLI tool that brings version control, testing, and CI/CD safety to your LLM prompts. Never lose track of prompt changes again. Never deploy a broken prompt. Never wonder "what changed?" or "why did this break?"

✨ Why PromptLint?

The Problem: Prompts are critical production code, but we treat them like disposable strings. Changes are lost, versions are unclear, and regressions go undetected until users complain.

The Solution: PromptLint gives you:

  • 📦 Semantic versioning for every prompt change
  • 🔍 Visual diffs showing exactly what changed (with cost impact!)
  • 🧪 Automated testing that handles non-deterministic LLM outputs
  • 📊 Regression detection for cost, latency, and behavior changes
  • 🔒 Immutability - released versions are locked and Git-tracked
  • 💰 Cost tracking across versions
  • 🔌 CI/CD ready - fail builds on breaking changes

🎯 Quick Start

Installation

npm install -g @dogtags27/prompt-lint

Create Your First Versioned Prompt

# Create a new prompt artifact
prompt create --id greeting-bot \
  --description "Friendly greeting prompt" \
  --provider openai \
  --model gpt-4

# Output: Created prompts/greeting-bot/v0.1.0.json

Run Your Prompt

# Execute with inputs
prompt run --prompt-id greeting-bot \
  --input name="Alice" \
  --input timeOfDay="morning"

# Or use JSON input
prompt run --prompt-id greeting-bot \
  --json-input '{"name":"Alice","timeOfDay":"morning"}'

See What Changed

# Generate beautiful HTML diff comparing versions
prompt diff --prompt-id greeting-bot \
  --current current \
  --compare v0.1.0 \
  --open

# Opens interactive HTML diff showing:
# - Template changes (side-by-side)
# - Token count changes
# - Cost impact per call
# - Breaking changes detection

🎨 Core Features

1. 📦 Semantic Versioning

Track every change with semantic versioning (major.minor.patch):

# Bump version after making changes
prompt version:bump --prompt-id greeting-bot \
  --version 0.1.0 \
  --type minor

# Major: Breaking changes (output schema, required inputs)
# Minor: New features (optional inputs, extended functionality)  
# Patch: Bug fixes, template improvements

Example workflow:

# 1. Make changes to your prompt template
vim prompts/greeting-bot/v0.1.0.json

# 2. Test the changes
prompt test --prompt-id greeting-bot

# 3. Bump version
prompt version:bump --prompt-id greeting-bot \
  --version 0.1.0 --type patch

# 4. Review diff before committing
prompt diff --prompt-id greeting-bot \
  --current current --compare v0.1.1

2. 🔍 Visual Diff & Change Tracking

See exactly what changed between versions with beautiful HTML diffs:

# Compare current working version with latest saved
prompt diff --prompt-id greeting-bot \
  --current current \
  --compare v0.2.0 \
  --format html \
  --open

# Output includes:
# ✅ Side-by-side template comparison
# ✅ Token count changes (+150 tokens, +12.5%)
# ✅ Cost impact ($0.0003 per call increase)
# ✅ Breaking changes detection
# ✅ Impact assessment (low/medium/high/breaking)

Diff formats:

  • html - Interactive visual diff (opens in browser)
  • markdown - Markdown format for documentation
  • json - Machine-readable diff data
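
For example, the Markdown and JSON formats are handy outside the browser; a quick sketch, assuming non-HTML diffs are written to stdout (as in the CI example later in this README):

# Markdown diff, e.g. for pasting into a PR description
prompt diff --prompt-id greeting-bot \
  --current current --compare v0.2.0 \
  --format markdown > PROMPT_CHANGES.md

# JSON diff for scripting and CI checks
prompt diff --prompt-id greeting-bot \
  --current current --compare v0.2.0 \
  --format json > diff.json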

3. 🔎 Auto-Detection & Scanning

Automatically discover prompts in your codebase:

# Scan current directory
prompt scan

# Scan specific file
prompt scan --file src/prompts.ts

# Recursive scan with auto-save
prompt scan --recursive --auto-save

# Interactive mode (confirm before saving)
prompt scan --interactive

# Filter by file type
prompt scan --format ts --recursive

Detects prompts in:

  • TypeScript/JavaScript files
  • JSON files
  • YAML files
  • Inline prompt strings

4. 🧪 Comprehensive Testing

Test prompts with support for non-deterministic outputs:

# Run test suite
prompt test --prompt-id greeting-bot

# Save baseline for regression detection
prompt test --prompt-id greeting-bot --save-baseline

# Compare against baseline
prompt test --prompt-id greeting-bot --compare-baseline

# Generate JUnit XML for CI/CD
prompt test --prompt-id greeting-bot \
  --junit-output test-results.xml

# Generate JSON report
prompt test --prompt-id greeting-bot \
  --json-output test-report.json

Test assertion types:

  • Schema assertions - Validate JSON structure
  • Classification assertions - Check expected categories
  • Tolerance assertions - Numeric comparisons with variance
  • Safety assertions - Content filtering and constraints
  • Contains assertions - Check that the output includes expected text (used in the example below)

Example test suite:

{
  "promptId": "greeting-bot",
  "version": "0.2.0",
  "tests": [
    {
      "name": "Morning greeting",
      "inputs": { "name": "Alice", "timeOfDay": "morning" },
      "assertions": [
        {
          "type": "contains",
          "field": "output",
          "value": "Good morning"
        },
        {
          "type": "schema",
          "schema": {
            "type": "object",
            "required": ["greeting", "tone"]
          }
        }
      ]
    }
  ]
}
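
The suite is a plain JSON file; based on the project structure shown later in this README, its conventional home is tests/prompts/<prompt-id>/suite.json (the exact lookup path is an assumption drawn from that layout):

# Save the suite above at the conventional path, then run it
mkdir -p tests/prompts/greeting-bot
# (write the JSON shown above to tests/prompts/greeting-bot/suite.json)
prompt test --prompt-id greeting-bot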

5. 🔍 Search & Discovery

Find prompts quickly across your codebase:

# Full-text search
prompt search --query "greeting"

# Search with filters
prompt search --query "customer" \
  --provider openai \
  --has-variable userId \
  --tag production

# Fuzzy matching
prompt search --query "greet" --fuzzy

# Limit results
prompt search --query "bot" --limit 10

Quick find by pattern:

# Find by ID pattern
prompt find --pattern greeting

# Exact match
prompt find --pattern greeting-bot --exact

# Show full template
prompt find --pattern greeting-bot --show-template

# List all variables
prompt find --pattern greeting-bot --list-variables

# Show file location
prompt find --pattern greeting-bot --show-location

6. 📊 Status & Monitoring

Track the state of all your prompts:

# Status of specific prompt
prompt status --prompt-id greeting-bot

# Status of all prompts
prompt status

# Output shows:
# - Current version
# - Latest saved version
# - Git commit status
# - Test status
# - Last modified date

7. ⏪ Rollback & Recovery

Safely rollback to previous versions:

# Rollback to previous version
prompt rollback --prompt-id greeting-bot \
  --version v0.1.0

# Creates new version from rollback target
# Example: v0.2.0 → rollback to v0.1.0 → creates v0.2.1

8. ✅ Validation

Ensure prompt artifacts are valid:

# Validate specific version
prompt validate --prompt-id greeting-bot --version 0.2.0

# Validate from file
prompt validate --file prompts/greeting-bot/v0.2.0.json

# Validate all versions
prompt validate --prompt-id greeting-bot --all

9. 💰 Cost & Token Tracking

Monitor cost impact across versions:

# Diff shows token and cost changes
prompt diff --prompt-id greeting-bot \
  --current v0.2.0 \
  --compare v0.1.0

# Output includes:
# Token Change: +150 tokens (+12.5%)
# Cost Impact: +$0.0003 per call

10. 🔌 CI/CD Integration

Fail builds on breaking changes:

# In your CI pipeline
prompt validate --prompt-id greeting-bot --all
prompt test --prompt-id greeting-bot --compare-baseline
prompt diff --prompt-id greeting-bot --format json

# Exit codes:
# 0 = Success
# 1 = Validation error
# 2 = System error
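
Because the exit codes are distinct, a shell step can branch on them directly; a minimal sketch:

prompt validate --prompt-id greeting-bot --all
case $? in
  0) echo "✅ Prompts valid" ;;
  1) echo "❌ Validation error - fix the prompt artifacts"; exit 1 ;;
  2) echo "❌ System error - check the PromptLint installation"; exit 2 ;;
esac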

GitHub Actions example:

# Assumes a Node.js setup step has already run in the job
- name: Install PromptLint
  run: npm install -g @dogtags27/prompt-lint

- name: Validate Prompts
  run: |
    prompt validate --prompt-id greeting-bot --all
    prompt test --prompt-id greeting-bot --compare-baseline

📁 Project Structure

your-project/
├── prompts/                    # Versioned prompt artifacts
│   └── greeting-bot/
│       ├── v0.1.0.json        # Initial version
│       ├── v0.1.1.json        # Patch version
│       ├── v0.2.0.json        # Minor version
│       └── .baselines/        # Test baselines
│           └── v0.2.0.json
├── tests/                     # Test suites
│   └── prompts/
│       └── greeting-bot/
│           └── suite.json
├── .prompt-lint/              # Internal index and metadata
│   ├── index.json
│   └── diffs/
└── .promptrc.json             # Configuration (optional)

⚙️ Configuration

Create .promptrc.json in your project root:

{
  "baseDir": ".",
  "apiKeys": {
    "openai": "sk-...",
    "anthropic": "sk-ant-..."
  },
  "defaults": {
    "provider": "openai",
    "temperature": 0.7,
    "maxTokens": 2000
  },
  "test": {
    "runs": 3,
    "timeout": 30000
  },
  "ci": {
    "costThreshold": 20,
    "latencyThreshold": 30
  }
}

Or use environment variables (recommended):

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

📖 Prompt Artifact Format

A prompt artifact is a structured JSON file:

{
  "id": "greeting-bot",
  "version": "0.2.0",
  "template": "Hello {{name}}! Good {{timeOfDay}}!",
  "inputs": {
    "required": {
      "name": {
        "type": "string",
        "description": "User's name"
      }
    },
    "optional": {
      "timeOfDay": {
        "type": "string",
        "description": "Time of day",
        "default": "day"
      }
    }
  },
  "model": {
    "provider": "openai",
    "model": "gpt-4",
    "temperature": 0.7,
    "maxTokens": 2000
  },
  "output": {
    "format": "json",
    "schema": {
      "type": "object",
      "properties": {
        "greeting": { "type": "string" },
        "tone": { "type": "string" }
      },
      "required": ["greeting", "tone"]
    }
  },
  "metadata": {
    "description": "Friendly greeting prompt",
    "author": "Your Name",
    "tags": ["production", "customer-facing"]
  }
}
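
Given that artifact, a run that omits the optional input falls back to its declared default; a worked example, assuming standard {{variable}} substitution in the template above:

prompt run --prompt-id greeting-bot --input name="Alice"

# Template: "Hello {{name}}! Good {{timeOfDay}}!"
# Rendered: "Hello Alice! Good day!"  (timeOfDay falls back to its default "day")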

🎯 Common Workflows

Workflow 1: Making Prompt Changes

# 1. Edit your prompt
vim prompts/greeting-bot/v0.2.0.json

# 2. Test locally
prompt run --prompt-id greeting-bot \
  --input name="Alice"

# 3. Run tests
prompt test --prompt-id greeting-bot

# 4. Check diff before committing
prompt diff --prompt-id greeting-bot \
  --current current --compare v0.2.0

# 5. Bump version
prompt version:bump --prompt-id greeting-bot \
  --version 0.2.0 --type patch

# 6. Commit to Git
git add prompts/
git commit -m "feat: improve greeting prompt"

Workflow 2: Onboarding Existing Prompts

# 1. Scan your codebase
prompt scan --recursive

# 2. Review detected prompts
prompt status

# 3. Save detected prompts
prompt scan --interactive --auto-save

# 4. Create test suites
prompt test --prompt-id my-prompt --save-baseline

Workflow 3: CI/CD Pipeline

# In your CI script
#!/bin/bash
set -e

# Validate all prompts
prompt validate --prompt-id greeting-bot --all

# Run tests and compare baselines
prompt test --prompt-id greeting-bot \
  --compare-baseline \
  --junit-output test-results.xml

# Check for breaking changes
DIFF=$(prompt diff --prompt-id greeting-bot \
  --current current \
  --compare latest \
  --format json)

if echo "$DIFF" | jq '.summary.breakingChanges' | grep -q true; then
  echo "❌ Breaking changes detected!"
  exit 1
fi

🛠️ All Commands

| Command | Description | Example |
|---------|-------------|---------|
| create | Create new prompt artifact | prompt create --id my-prompt |
| validate | Validate prompt structure | prompt validate --prompt-id my-prompt |
| run | Execute a prompt | prompt run --prompt-id my-prompt --input key=value |
| test | Run test suite | prompt test --prompt-id my-prompt |
| version:bump | Bump prompt version | prompt version:bump --prompt-id my-prompt --version 1.0.0 --type minor |
| version:show | Show version info | prompt version:show --prompt-id my-prompt --version 1.0.0 |
| status | Show prompt status | prompt status --prompt-id my-prompt |
| diff | Compare versions | prompt diff --prompt-id my-prompt --current current --compare v1.0.0 |
| rollback | Rollback to version | prompt rollback --prompt-id my-prompt --version v1.0.0 |
| scan | Auto-detect prompts | prompt scan --recursive |
| search | Search prompts | prompt search --query "greeting" |
| find | Find by pattern | prompt find --pattern greeting |

🔒 Security

  • ✅ API keys are automatically sanitized from outputs
  • ✅ .promptrc.json is in .gitignore by default
  • ✅ Environment variables recommended for production
  • ✅ Warnings when API keys detected in config files

See SECURITY.md for best practices.
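
For existing repositories where the config file is not yet ignored, it can be excluded manually before committing:

# Keep local API keys out of version control
echo ".promptrc.json" >> .gitignore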

📚 Documentation

🤝 Contributing

We welcome contributions! This is a developer-first tool built by developers, for developers.

📄 License

MIT License - see LICENSE file for details.

🔗 Links


Made with ❤️ for developers who care about prompt quality