prompt-debug v1.0.7 – Open-source Prompt Debugger for LLMs: token counter, cost tracker (₹/$), streaming, multi-provider (Groq, OpenAI, Together AI)
# prompt-debug

> Chrome DevTools, but for AI Prompts – Debug, Analyze & Compare LLM prompts in real-time
## The Problem

Every AI developer faces this:

```text
Write prompt → Run → Output is wrong
        ↓
"Why did this happen?" → No idea
        ↓
Guess → Change → Run again → Still wrong
        ↓
Wasted time + Wasted money + Frustration
```

There are no proper debugging tools for LLM prompts:
- You don't know how many tokens your prompt uses
- You don't know how much it costs per request
- You can't compare two prompt versions side by side
- You can't track your experiments over a session
prompt-debug solves all of this.
## Features

| Feature | Description |
|---|---|
| Real-time Streaming | See responses word-by-word, just like ChatGPT |
| Token Counter | Exact input + output token counts per request |
| Cost Tracker | Cost in both ₹ INR and $ USD per request |
| Latency Tracker | Response time in milliseconds |
| Prompt Compare | A/B test two prompts side by side |
| Prompt Heatmap | Visualize which words have the most impact |
| Session History | All your runs saved with full details |
| Multi-Provider | Groq, OpenAI, Together AI – switch instantly |
| WebSocket | Real-time connection, no page refresh needed |
| Secure | API keys stored locally in browser, never on any server |
## Quick Start

```bash
# Install globally
npm install -g prompt-debug

# Start the debugger
prompt-debug
```

The browser opens automatically at http://localhost:3000.

That's it. No config files. No setup. Just run and debug.
## Installation

### Global (Recommended)

```bash
npm install -g prompt-debug
```

### Local (Project-specific)

```bash
npm install prompt-debug
npx prompt-debug
```

### Requirements

- Node.js >= 18.0.0
- A free API key from any supported provider
## Usage

```bash
# Start on default port 3000
prompt-debug

# Start on custom port
prompt-debug --port 8080

# Start without opening browser
prompt-debug --no-open

# Show version
prompt-debug --version

# Show help
prompt-debug --help
```

### Use as a Node.js Module

```js
import { startServer } from "prompt-debug";

startServer({
  port: 3000,
  openBrowser: true,
});
```

## Supported Providers

| Provider | Free Tier | Get API Key |
|---|---|---|
| Groq | ✅ Yes – Very generous | console.groq.com/keys |
| OpenAI | ❌ Paid | platform.openai.com/api-keys |
| Together AI | ✅ Yes – $25 free credits | api.together.ai |
> Recommended for beginners: start with Groq – it's completely free and blazing fast!
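When embedding the debugger via `startServer`, the options object can be derived from environment variables. A minimal sketch; note that `PORT` and `CI` are variable names chosen here for illustration, not names the package itself reads:

```js
// Hypothetical helper: build startServer() options from the environment.
// PORT and CI are illustrative variable names, not part of prompt-debug.
function optionsFromEnv(env) {
  return {
    port: Number(env.PORT ?? 3000), // same default port as the CLI
    openBrowser: env.CI ? false : true, // skip the browser pop-up in CI
  };
}
```

You would then call `startServer(optionsFromEnv(process.env))` instead of hard-coding the options.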
## Supported Models

### Groq (Free)

- `llama-3.3-70b-versatile` – Best quality
- `llama-3.1-8b-instant` – Fastest
- `gemma2-9b-it` – Google's Gemma
- `mixtral-8x7b-32768` – Great for code

### OpenAI

- `gpt-4o` – Most capable
- `gpt-4o-mini` – Fast & cheap
- `gpt-3.5-turbo` – Classic
- `o1`, `o1-mini` – Reasoning models

### Together AI (Free tier)

- `meta-llama/Llama-3-70b-chat-hf`
- `mistralai/Mixtral-8x7B-Instruct-v0.1`
- 100+ open-source models

> Models are auto-fetched from your account when you enter your API key – you always get the latest available models!
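Auto-fetching is possible because these providers expose OpenAI-style model-list endpoints: `GET /v1/models` returns `{ data: [{ id, ... }] }` for OpenAI and Groq (assuming here that Together's response is normalized to the same shape). A sketch of extracting the model ids from such a response:

```js
// Pull sorted model ids out of an OpenAI-style /v1/models response body.
// The { data: [{ id }] } shape matches OpenAI and Groq; treating Together
// the same way is an assumption for illustration.
function modelIds(listResponse) {
  return listResponse.data.map((model) => model.id).sort();
}
```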
## Cost Tracking

prompt-debug shows you the exact cost of every request:

```text
Prompt: "Explain React hooks in simple terms"
Model: llama-3.3-70b-versatile (Groq)
Input tokens: 12 → ₹0.0000
Output tokens: 284 → ₹0.0019
Total cost: ₹0.0019 ($0.000023)
Latency: 342ms
```

Monthly estimate example:

```text
10 users × 50 requests/day × 30 days = 15,000 requests
Average cost per request: ₹0.002
Monthly total: ₹30 (~$0.36)
```

Now you can budget your AI features properly!
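The arithmetic above is easy to reproduce yourself. A sketch of both calculations; the per-million-token rate fields are placeholders you would fill in per model, not the package's built-in price table:

```js
// Per-request cost from token counts and per-million-token USD rates.
// The rates object is supplied by the caller; field names are illustrative.
function requestCost(inputTokens, outputTokens, rates) {
  const usd =
    (inputTokens / 1e6) * rates.inputPerMTokUSD +
    (outputTokens / 1e6) * rates.outputPerMTokUSD;
  return { usd, inr: usd * rates.inrPerUsd };
}

// Monthly estimate, mirroring the example above.
function monthlyEstimate(users, requestsPerDay, days, costPerRequestINR) {
  const requests = users * requestsPerDay * days;
  return { requests, totalINR: requests * costPerRequestINR };
}
```

For instance, `monthlyEstimate(10, 50, 30, 0.002)` reproduces the 15,000 requests and roughly ₹30 from the example.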
## Architecture

```text
Browser (React UI)
        ↕ WebSocket (real-time)
Node.js Backend (Express)
        ↕ HTTPS
AI Provider (Groq / OpenAI / Together)
```

### Why WebSocket?

| Regular HTTP | WebSocket |
|---|---|
| Wait for full response | Stream word by word |
| One request, one response | Persistent connection |
| Slow feel | Real-time feel |
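Streaming means the UI reassembles the full answer from many small WebSocket messages. A sketch of the client-side fold, assuming token messages shaped like `{ type: "token", text }` (that shape is an assumption for illustration; the actual protocol is defined in `index.js` and may differ):

```js
// Fold streamed messages into the full response text.
// The { type: "token", text } message shape is assumed for illustration.
function accumulateTokens(messages) {
  return messages
    .filter((msg) => msg.type === "token")
    .map((msg) => msg.text)
    .join("");
}
```

In a real client you would call this incrementally from the socket's `message` handler rather than on a completed array.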
## Project Structure

```text
prompt-debug/
├── bin/
│   └── cli.js      ← CLI entry point (prompt-debug command)
├── index.js        ← Server + WebSocket + all providers
├── package.json
└── README.md
```

## Development

```bash
# Clone the repo
git clone https://github.com/yourusername/prompt-debug
cd prompt-debug

# Install dependencies
npm install

# Start in dev mode (auto-restart on changes)
npm run dev

# Build frontend
cd frontend && npm run build
```

## Contributing
Contributions are welcome! Here's how:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m "Add amazing feature"`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
### Ideas for Contributions
- [ ] Anthropic Claude support
- [ ] Cohere support
- [ ] Export history as CSV/JSON
- [ ] Real token heatmap using logprobs
- [ ] Dark/Light theme toggle
- [ ] Prompt templates library
- [ ] Monthly cost dashboard with charts
- [ ] Docker support
## Bug Reports

Found a bug? Open an issue with:

- Your Node.js version (`node --version`)
- Your OS (Windows/Mac/Linux)
- Steps to reproduce
- Expected vs actual behavior
## Comparison with Other Tools

| Feature | prompt-debug | LangSmith | OpenAI Playground | Postman |
|---|---|---|---|---|
| Free | ✅ 100% Free | ❌ Paid | ⚠️ Limited | ⚠️ Limited |
| Open Source | ✅ | ❌ | ❌ | ❌ |
| Groq Support | ✅ | ❌ | ❌ | ❌ |
| INR Cost | ✅ | ❌ | ❌ | ❌ |
| Streaming | ✅ | ❌ | ✅ | ❌ |
| Prompt Compare | ✅ | ❌ | ❌ | ❌ |
| Local Install | ✅ | ❌ | ❌ | ✅ |
| Setup Time | 30 sec | 10 min | Instant | 5 min |
## Changelog

### v1.0.0 (2024)

- Initial release
- Groq, OpenAI, Together AI support
- Real-time WebSocket streaming
- Token counter + cost tracker (₹ + $)
- Prompt A/B comparison
- Session history
- Prompt heatmap
## License

MIT © Ankur Ojha

Free to use, modify, and distribute. See LICENSE for details.
## Support

If this project helped you, please consider:

- Starring the repo on GitHub
- Sharing on Twitter/LinkedIn
- Reporting bugs and issues
- Contributing new features
## Author

Built with ❤️ by Ankur Ojha
"Built an open-source Prompt Debugger for LLMs with real-time token analysis, INR cost tracking, WebSocket streaming, and multi-provider support (Groq/OpenAI/Together AI)"
