prompt-tracker
v1.0.0
Know when your AI is about to hallucinate. Before it does.
Track AI conversations in real-time. Measure hallucination risk, context window usage, and session health in Windsurf, Cursor, VS Code Copilot, or any AI coding tool.
The Problem
You start a Windsurf session. It goes great. Then 45 minutes and 30 messages later, the AI starts writing wrong code, forgetting earlier instructions, contradicting itself. You have no idea when it went wrong.
This is context drift, a leading cause of AI coding mistakes. The context window fills up. The AI starts dropping earlier context. Then it starts guessing. Then it hallucinates.
No tool currently tells you when this is happening. Until now.
Install
```
npm install prompt-tracker
```
Quick Start
```js
const { PromptTracker } = require('prompt-tracker');

const tracker = new PromptTracker({
  maxContextTokens: 8000,               // your model's context limit
  apiEndpoint: 'http://localhost:8000', // optional: your Prompt Analyzer Pro backend
});

// BEFORE every AI message:
const result = await tracker.trackPrompt(userMessage);
console.log(`Health: ${result.health}`); // 'healthy' | 'warning' | 'critical'

// AFTER the AI responds:
tracker.trackResponse(aiResponse, result.turnId);

// Check session health anytime:
const stats = tracker.getStats();
if (stats.hallucinationRisk > 70) {
  console.warn('⚠️ High hallucination risk! Start a new session.');
  tracker.resetSession();
}
```
The Hallucination Risk Score
A 0-100 score computed from four measurable factors:
| Factor | Max Points | What It Measures |
|--------|-----------|------------------|
| Context window pressure | 40 pts | How full is the context window? |
| Complexity drift | 25 pts | Are prompts getting harder over time? |
| Topic switching | 20 pts | How many unrelated topic jumps? |
| Session duration | 15 pts | How long has this session been running? |
Interpretation:
- 0-30: Safe. AI has good context and focus.
- 30-60: Warning. Verify complex answers.
- 60-100: Critical. AI is likely hallucinating. Start a new session.
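The weighting above can be sketched as a simple capped sum. The scoring curves below are illustrative assumptions, not the package's documented implementation, but they show how four factors with maximums of 40, 25, 20, and 15 points combine into one 0-100 score:

```js
// Illustrative sketch only: the saturation points (5 topic jumps, 60 minutes)
// and linear curves are assumptions, not the package's exact algorithm.
function hallucinationRisk({ contextUsedPercent, complexityTrend, topicJumps, minutesElapsed }) {
  const pressure  = Math.min(contextUsedPercent / 100, 1) * 40; // 0-40 pts: context window pressure
  const drift     = Math.min(Math.max(complexityTrend, 0), 1) * 25; // 0-25 pts: complexity drift (0-1 trend)
  const switching = Math.min(topicJumps / 5, 1) * 20;           // 0-20 pts: saturates at 5 topic jumps
  const duration  = Math.min(minutesElapsed / 60, 1) * 15;      // 0-15 pts: saturates at 1 hour
  return Math.round(pressure + drift + switching + duration);   // total: 0-100
}
```

Because every factor is capped before summing, a long but focused session can never push the score past 100, and no single factor can push it past its own maximum.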
API Reference
new PromptTracker(options)
| Option | Default | Description |
|--------|---------|-------------|
| maxContextTokens | 8000 | Context window size for your model |
| alertThreshold | 0.75 | Warn when context usage exceeds this fraction (0.75 = 75% full) |
| model | 'gpt-4' | Model name for logging |
| apiEndpoint | 'http://localhost:8000' | Prompt Analyzer Pro backend (optional) |
await tracker.trackPrompt(text)
Returns: { turnId, complexity, tokens, cost, health, riskScore, alerts, suggestions }
tracker.trackResponse(text, turnId)
Returns: { sessionStats, hallucinationRisk }
tracker.getStats()
Returns: { sessionId, duration, totalTurns, totalTokens, contextUsedPercent, contextHealth, hallucinationRisk, recommendations }
tracker.calculateHallucinationRisk()
Returns: number (0-100)
tracker.getSuggestions()
Returns: Array<{ type, icon, message, action }>
tracker.resetSession(keepSummary = true)
Returns: { newSessionId, preservedSummary }
Works Without Backend
The backend API is optional. Without it, the tracker uses local token estimation. Context tracking, hallucination risk, and all suggestions still work.
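Without a backend, token counts must be estimated locally. A common heuristic for English text, and an assumption here since the package's exact estimator is not documented, is roughly four characters per token:

```js
// Rough local token estimator. The ~4 chars/token ratio is a common
// heuristic for English text, not the package's exact algorithm.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Context pressure derived from the estimate (hypothetical helper):
function contextUsedPercent(totalTokens, maxContextTokens = 8000) {
  return Math.min((totalTokens / maxContextTokens) * 100, 100);
}
```

A character-based estimate can be off by 20% or more for code-heavy text, so treat locally computed context pressure as an approximation rather than an exact reading.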
License
MIT
