context-metry v1.0.0
📊 context-metry
Context window intelligence for AI coding agents. Understand how your context window is being used, how much it's costing you, and how to optimize it. Built for Claude Code, Cursor, Codex, and any agentic coding workflow.

Real-time context intelligence: token tracking, cost analysis, and optimization
✨ What Problem It Solves
When you're coding with Claude Code, Cursor, or any AI coding agent, your context window is your most expensive resource — and right now, you have zero visibility into how it's being used.
- How many tokens is this session consuming?
- Which files are eating the most context budget?
- Am I about to hit the context ceiling?
- Which sessions are costing me the most?
- Why is this session so expensive?
context-metry answers all of these — and gives you concrete, actionable steps to fix them.
🚀 Quick Start
Install
npm install -g context-metry
Analyze a session
# Analyze a single transcript
context-metry analyze ./sessions/session-abc123.jsonl
# Analyze all sessions in a directory
context-metry analyze ~/.claude/projects/my-project/sessions/
# Output as JSON (for scripting)
context-metry analyze ./session.jsonl --format json
Compare sessions
context-metry diff session-old.jsonl session-new.jsonl
Calculate cost
context-metry cost ./sessions/ --model claude-opus-4-6
Top consumers
context-metry top ~/.claude/projects/ --limit 20
Generate optimization report
context-metry optimize ./session.jsonl -o optimization-report.md
🎯 Key Features
📊 Context Health Score (0-100)
Each session gets an automatic health score based on:
- Context utilization efficiency
- Message token efficiency
- Tool call patterns
- Redundant file read detection
- Cost per token optimization
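The exact weighting is internal to context-metry, but conceptually the five factors above combine into a single weighted score. A minimal sketch (the factor names and weights here are illustrative, not the real implementation):

```typescript
// Illustrative only: context-metry's real scoring is internal.
// Each factor is normalized to 0..1; the weights are made up for this sketch.
interface HealthFactors {
  utilization: number;       // how efficiently the context window was used
  messageEfficiency: number; // useful tokens per message
  toolCallQuality: number;   // penalizes wasteful call patterns
  nonRedundancy: number;     // 1 = no redundant reads detected
  costEfficiency: number;    // value delivered per dollar
}

function healthScore(f: HealthFactors): number {
  const weighted =
    0.25 * f.utilization +
    0.20 * f.messageEfficiency +
    0.20 * f.toolCallQuality +
    0.20 * f.nonRedundancy +
    0.15 * f.costEfficiency;
  return Math.round(weighted * 100); // 0-100 scale, as shown in the report
}
```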
Health Score: [##################..] 91/100 (A)
💰 Cost Tracking
- Per-session, per-project, and team-wide cost tracking
- Support for 15+ models including Claude 4, GPT-4o, o3, DeepSeek Coder
- Cost per minute, cost per task, dollar-per-token analysis
| Model | Input $/1M | Output $/1M |
|-------|-----------|-------------|
| Claude Opus 4 | $15.00 | $75.00 |
| Claude Sonnet 4 | $3.00 | $15.00 |
| Claude Haiku 4 | $0.80 | $4.00 |
| GPT-4o | $2.50 | $10.00 |
| o3 | $15.00 | $60.00 |
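Given per-million-token rates like those in the pricing table above, a session's dollar cost is a simple linear combination of input and output tokens. A minimal sketch (rates copied from the table; the function name is illustrative):

```typescript
// Per-million-token rates, as in the pricing table above.
const PRICING: Record<string, { inputPer1M: number; outputPer1M: number }> = {
  "claude-opus-4": { inputPer1M: 15.0, outputPer1M: 75.0 },
  "claude-sonnet-4": { inputPer1M: 3.0, outputPer1M: 15.0 },
  "gpt-4o": { inputPer1M: 2.5, outputPer1M: 10.0 },
};

function sessionCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (inputTokens / 1e6) * p.inputPer1M + (outputTokens / 1e6) * p.outputPer1M;
}

// e.g. 800K input + 40K output tokens on Opus 4:
// (0.8 * 15) + (0.04 * 75) = 12 + 3 = $15.00
```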
🔴 Redundancy Detection
"src/large-vendor.js" was read 14 times without modification — ~42K tokens wasted.
context-metry detects:
- Files read repeatedly without being modified
- Tool calls with identical arguments
- Context that could have been summarized
- Large files being read in their entirety
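The first two checks above reduce to counting identical (tool, arguments) pairs across a session. A simplified sketch (the event shape is an assumption for illustration, not context-metry's actual internal type):

```typescript
// Assumed, simplified tool-call record for this sketch.
interface ToolCall {
  tool: string;         // e.g. "Read"
  args: string;         // canonical JSON of the call's arguments
  approxTokens: number; // tokens this call added to the context
}

// Flag any (tool, args) pair seen more than once; every repeat is
// counted as wasted context, since the result was already in the window.
function findRedundantCalls(calls: ToolCall[]) {
  const seen = new Map<string, { count: number; wasted: number }>();
  for (const c of calls) {
    const key = `${c.tool}:${c.args}`;
    const entry = seen.get(key) ?? { count: 0, wasted: 0 };
    entry.count += 1;
    if (entry.count > 1) entry.wasted += c.approxTokens;
    seen.set(key, entry);
  }
  return [...seen.entries()].filter(([, v]) => v.count > 1);
}
```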
📁 File Intelligence
Every file accessed across all your sessions — ranked by:
- Total token consumption
- Number of reads
- Redundancy score
- Estimated waste
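Conceptually, this ranking is a per-file aggregation over every read event. A minimal sketch (record shape assumed for illustration):

```typescript
// Assumed shape of a single file-read event for this sketch.
interface FileRead { path: string; tokens: number; }

// Aggregate per file, then rank by total token consumption.
function rankFiles(reads: FileRead[]) {
  const totals = new Map<string, { tokens: number; reads: number }>();
  for (const r of reads) {
    const t = totals.get(r.path) ?? { tokens: 0, reads: 0 };
    t.tokens += r.tokens;
    t.reads += 1;
    totals.set(r.path, t);
  }
  return [...totals.entries()]
    .map(([path, t]) => ({ path, ...t }))
    .sort((a, b) => b.tokens - a.tokens);
}
```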
⚡ Context Boundary Warnings
⚠ Context 87% full — 3 more tool calls and you may hit the ceiling.
Add "node_modules" to .claudeignore to recover ~15K tokens.
🔧 Optimization Engine
Generates actionable fixes:
- .claudeignore entries to add
- Line-range reads for large files (file.ts:1-100)
- Context pruning recommendations
- Estimated token/dollar savings per change
📈 Session Intelligence
- Token burn rate (tokens/minute)
- Peak context moments
- Input vs. output token ratio
- Message efficiency analysis
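The first and third metrics above are simple ratios over session totals. A minimal sketch (function names are illustrative):

```typescript
// Tokens consumed per wall-clock minute of the session.
function burnRate(totalTokens: number, startMs: number, endMs: number): number {
  const minutes = (endMs - startMs) / 60_000;
  return minutes > 0 ? totalTokens / minutes : 0;
}

// Input vs. output token ratio (input tokens per output token).
function ioRatio(inputTokens: number, outputTokens: number): number {
  return outputTokens > 0 ? inputTokens / outputTokens : Infinity;
}
```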
🏗️ Architecture
Transcript (.jsonl)
         │
         ▼
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│     Parser      │───▶│   Token Engine   │───▶│    Analyzer     │
│ (Claude/Cursor/ │    │ (tiktoken/fast)  │    │ (health, cost,  │
│  Codex, JSONL)  │    └──────────────────┘    │  warnings)      │
└─────────────────┘                            └────────┬────────┘
                                                        │
                           ┌────────────────────────────┼────────────────┐
                           ▼                            ▼                ▼
                      ┌──────────┐                ┌────────────┐   ┌────────────┐
                      │  Store   │                │    CLI     │   │ Dashboard  │
                      │ (SQLite) │                │(Commander) │   │ (Express + │
                      └──────────┘                └────────────┘   │ WebSocket) │
                                                                   └────────────┘
📦 Supported Formats
| Agent | Format | Status |
|-------|--------|--------|
| Claude Code | ~/.claude/projects/*/sessions/*/transcript.jsonl | ✅ Supported |
| Cursor | Export from Cursor settings (JSONL) | ✅ Supported |
| Codex / OpenCode | Standard JSONL | ✅ Supported |
| Generic | Any .jsonl with role + content fields | ✅ Supported |
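For the generic format, each line of the file is a standalone JSON object carrying at least role and content. A minimal parser sketch under that assumption (not context-metry's actual parser):

```typescript
interface Message { role: string; content: string; }

// Parse generic JSONL: one JSON object per line, skipping blank lines
// and dropping objects missing the required role/content string fields.
function parseGenericJsonl(text: string): Message[] {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line))
    .filter((obj): obj is Message =>
      typeof obj.role === "string" && typeof obj.content === "string");
}
```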
📦 CLI Reference
context-metry [command] [path] [options]
Commands:
analyze <path> Analyze transcript(s) — single file or directory
diff <fileA> <fileB> Compare two sessions
cost <path> Calculate cost across sessions
top <path> Show top token consumers
optimize <path> Generate optimization suggestions
Options:
-m, --model Override model (e.g. claude-opus-4-6, gpt-4o)
--format Output format: console | json | html (default: console)
--limit <n> Limit results (default: 10)
-o, --output Output file path
Examples
# Full session analysis
context-metry analyze ./my-session.jsonl
# All sessions in a project
context-metry analyze ~/.claude/projects/my-app/sessions/
# Compare before/after optimization
context-metry diff before.jsonl after.jsonl
# Team-wide cost report
context-metry cost ./team-sessions/ --model claude-opus-4-6
# Find biggest context hogs
context-metry top ./sessions/ --limit 20
# Markdown optimization report
context-metry optimize ./session.jsonl -o report.md
# JSON output for scripting
context-metry analyze ./session.jsonl --format json | jq '.metrics.totalCost'
🔧 Configuration
Model Pricing
# Set the default model via an environment variable
export CONTEXT_METRY_MODEL=claude-opus-4-6
Custom Pricing (via config file)
Create ~/.context-metry/config.json:
{
"defaultModel": "claude-sonnet-4-6",
"models": {
"my-custom-model": {
"inputPer1M": 1.50,
"outputPer1M": 5.00,
"encoding": "cl100k_base"
}
},
"ignorePatterns": [
"**/node_modules/**",
"**/dist/**",
"**/.git/**"
]
}
🧩 Programmatic API
import { readFile } from 'node:fs/promises';
import { parseTranscript } from 'context-metry/parser';
import { analyzeSession } from 'context-metry/analyzer';
import { getModelPricing } from 'context-metry/types';
const transcript = await readFile('session.jsonl', 'utf-8');
const parsed = parseTranscript(transcript);
const result = analyzeSession(parsed, { modelOverride: 'claude-sonnet-4-6' });
console.log(`Health: ${result.health.score}/100`);
console.log(`Cost: $${result.metrics.totalCost.toFixed(4)}`);
console.log(`Top issue: ${result.health.topIssues[0]?.message}`);
🐳 Docker
docker run --rm -v $HOME/.claude:/data \
sergiupogor/context-metry:latest \
analyze /data/projects/my-project/sessions/
🧪 Running the Dashboard
The web dashboard provides a visual interface for exploring session data:
# Analyze sessions and serve the dashboard
context-metry analyze ./sessions/ --serve
# Or run dashboard directly
npx context-metry dashboard
🤔 FAQ
How accurate is token estimation?
context-metry uses a calibrated word/character-ratio algorithm that achieves roughly ±5% accuracy on mixed code and natural-language content. When tiktoken is installed, it uses the exact cl100k_base and o200k_base encodings instead.
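The calibrated ratios themselves are internal, but the general shape of a character-ratio estimator is easy to illustrate. A sketch (the 4-characters-per-token constant is a common rule of thumb for English text, not context-metry's actual calibration):

```typescript
// Rough token estimate: ~4 characters per token is a common rule of
// thumb for English prose; code and non-English text tend to run denser.
function estimateTokens(text: string, charsPerToken = 4): number {
  return Math.ceil(text.length / charsPerToken);
}
```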
Does it work with Claude Code Team?
Yes — export your session transcripts and point context-metry at them. Support for direct API ingestion is on the roadmap.
Will it slow down my sessions?
No — context-metry performs purely offline analysis. It reads transcript files after the fact and never touches your live coding sessions.
How do I find my transcript files?
Claude Code stores them at:
~/.claude/projects/{project-id}/sessions/{session-id}/transcript.jsonl
🤝 Contributing
Contributions are welcome! Please read our Contributing Guide before submitting PRs.
git clone https://github.com/SergiuPogor/context-metry.git
cd context-metry
npm install
npm run dev
📄 License
MIT © 2026 Sergiu Pogor
If context-metry helped you understand your AI coding costs, star it.
