Promptfoo: LLM evals & red teaming
Quick Start
# Install and initialize project
npx promptfoo@latest init
# Run your first evaluation
npx promptfoo eval
See Getting Started (evals) or Red Teaming (vulnerability scanning) for more.
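For reference, here is a minimal sketch of the kind of promptfooconfig.yaml that `init` scaffolds and `eval` runs. The prompt text, model IDs, and test values below are illustrative assumptions, not the actual output of `init`:

```yaml
# promptfooconfig.yaml - minimal illustrative example (prompt, models, and test values are assumptions)
prompts:
  - "Summarize the following in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini
  - anthropic:messages:claude-3-5-sonnet-20241022

tests:
  - vars:
      text: "Promptfoo is a toolkit for evaluating and red teaming LLM applications."
    assert:
      - type: contains
        value: "Promptfoo"
```

Running `npx promptfoo eval` in the same directory evaluates every prompt/provider pair against the tests, and `npx promptfoo view` opens the results in the web viewer.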
What can you do with Promptfoo?
- Test your prompts and models with automated evaluations
- Secure your LLM apps with red teaming and vulnerability scanning
- Compare models side-by-side (OpenAI, Anthropic, Azure, Bedrock, Ollama, and more)
- Automate checks in CI/CD (see the workflow sketch after this list)
- Review pull requests for LLM-related security and compliance issues with code scanning
- Share results with your team
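As an illustration of the CI/CD item above, here is a hedged sketch of a GitHub Actions workflow that runs an eval on every pull request. The workflow name, trigger, and output path are assumptions; `-c` and `-o` are the CLI's config and output options:

```yaml
# .github/workflows/promptfoo.yml - illustrative sketch, not an official workflow
name: LLM evals
on: [pull_request]

jobs:
  evals:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Run promptfoo evaluation
        run: npx promptfoo@latest eval -c promptfooconfig.yaml -o results.json
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```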
Here's what it looks like in action: results appear in the web viewer or directly on the command line, and Promptfoo can also generate security vulnerability reports.

Why Promptfoo?
- 🚀 Developer-first: Fast, with features like live reload and caching
- 🔒 Private: LLM evals run 100% locally - your prompts never leave your machine
- 🔧 Flexible: Works with any LLM API or programming language
- 💪 Battle-tested: Powers LLM apps serving 10M+ users in production
- 📊 Data-driven: Make decisions based on metrics, not gut feel
- 🤝 Open source: MIT licensed, with an active community
Learn More
- 📚 Full Documentation
- 🔐 Red Teaming Guide
- 🎯 Getting Started
- 💻 CLI Usage
- 📦 Node.js Package
- 🤖 Supported Models
- 🔬 Code Scanning Guide
Contributing
We welcome contributions! Check out our contributing guide to get started.
Join our Discord community for help and discussion.
