# PromptMeter v1.0.1

LLM-powered load testing in plain English: a terminal-based tool that turns natural-language prompts into complex load-test scenarios.
## Installation

```bash
npm install -g promptmeter
```
## Quick Start

1. Set your API key:

   ```bash
   # OpenAI
   export OPENAI_API_KEY=sk-xxx

   # Or Anthropic
   export ANTHROPIC_API_KEY=sk-ant-xxx
   ```

2. Run a load test:
   ```bash
   promptmeter "50 users hitting https://api.example.com for 30 seconds"
   ```
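To sanity-check how a prompt will be interpreted before generating any traffic, the `--dry-run` option parses the prompt without executing the test (the exact parse output depends on the model and is not shown here):

```bash
# Preview the parsed scenario without sending any requests
promptmeter --dry-run "50 users hitting https://api.example.com for 30 seconds"
```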
## Usage

```text
promptmeter [prompt] [options]

Options:
  -k, --api-key <key>      API key
  -p, --provider <name>    LLM provider: openai (default) or anthropic
  -m, --model <model>      Model name
  -o, --output <file>      Export results to JSON
  --dry-run                Parse the prompt only; don't execute the test
```
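The options compose. A sketch combining provider, model, and JSON export; the model name here is only an example (substitute whatever your provider account supports), and the schema of the exported JSON is not documented here:

```bash
# Pin the provider and model, and export the run's results to a file
promptmeter -p openai -m gpt-4o -o results.json \
  "20 users GET https://httpbin.org/get for 15s"
```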
## Save API Key (optional)

```bash
promptmeter config --set-key sk-xxx

# or for Anthropic
promptmeter config --set-anthropic-key sk-ant-xxx
```
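Assuming the saved key is picked up automatically on later runs (an assumption; the README does not spell this out), subsequent invocations can drop `-k` and the environment variables:

```bash
# With a key saved via `promptmeter config`, no -k or env var is needed
promptmeter "10 users GET https://httpbin.org/get for 10s"
```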
## Example Prompts

```bash
# Simple load test
promptmeter "100 users send GET requests to https://api.example.com/users for 30 seconds"

# Conditional test
promptmeter "5 users hit https://google.com. If all return 200, start 10 users on https://youtube.com for 30s"

# Sequential steps
promptmeter "Warm up with 2 users on /health for 5s, then ramp to 50 users on /products for 1 minute"

# Parallel test
promptmeter "Hit /api/users and /api/products simultaneously with 10 users each for 30 seconds"

# POST with body
promptmeter "POST to https://api.example.com/login with {username: 'test', password: 'test'} using 20 users"

# Using Anthropic
promptmeter -p anthropic "10 users GET https://httpbin.org/get for 10s"
```
## What You'll See

A real-time terminal dashboard with:
- Active steps & progress
- Success/failure counts (4xx, 5xx, network errors)
- Latency (avg, P95)
- Requests per second
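As an aside on the latency line: a P95 figure means 95% of sampled requests completed at or below that latency. A minimal sketch of the nearest-rank computation, using invented sample values (this is an illustration, not promptmeter's internal code):

```bash
# Ten invented latency samples, in milliseconds
latencies="120 95 110 300 105 98 150 102 99 101"

# Nearest-rank P95: sort, then take the value at rank ceil(0.95 * N)
p95=$(printf '%s\n' $latencies | sort -n | awk '
  { a[NR] = $1 }
  END {
    rank = int(NR * 0.95)
    if (rank < NR * 0.95) rank++   # ceiling
    print a[rank]
  }')
echo "P95: ${p95}ms"
```

With these samples the single 300 ms outlier becomes the P95, which is exactly why P95 is reported alongside the average.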
## License

MIT
