llm-cost-tracker
Lightweight, zero-dependency LLM API cost & token usage tracker for Node.js / TypeScript.
Track every token and dollar spent across OpenAI, Anthropic (Claude), Google Gemini, Mistral, Groq, and DeepSeek — with a single line of code.
✨ Features
- 🎯 6 Providers — OpenAI, Anthropic, Gemini, Mistral, Groq, DeepSeek
- 💰 35+ Models — Built-in pricing database (April 2026 prices)
- 🔍 Auto-Detection — Automatically detects provider from response shape
- 🧩 Fuzzy Matching — gpt-4o-2024-11-20 matches gpt-4o pricing
- 📊 Rich Summaries — Aggregated stats by provider, model, time period
- 💾 Pluggable Storage — In-memory, JSON file, or bring your own
- 🔌 Middleware — Drop-in fetch wrapper + Vercel AI SDK integration
- 📤 Export — JSON and CSV export
- 🪶 Zero Dependencies — ~22KB bundled, works everywhere
- 🔒 TypeScript-first — Full type safety, ESM + CJS
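Auto-detection and fuzzy matching (two of the features above) can be sketched roughly like this — `PRICES`, `detectProvider`, and `resolveModel` are illustrative stand-ins, not the library's exports:

```typescript
// Stand-in pricing table keyed by base model name.
const PRICES: Record<string, { inputPer1M: number; outputPer1M: number }> = {
  'gpt-4o': { inputPer1M: 2.5, outputPer1M: 10 },
  'gpt-4o-mini': { inputPer1M: 0.15, outputPer1M: 0.6 },
};

type Provider = 'openai' | 'anthropic' | 'gemini' | 'unknown';

// Guess the provider from well-known usage fields in the response body.
// Note: Mistral, Groq, and DeepSeek return OpenAI-compatible shapes, so
// shape alone cannot distinguish them from OpenAI.
function detectProvider(body: any): Provider {
  if (body?.usage?.prompt_tokens !== undefined) return 'openai';
  if (body?.usage?.input_tokens !== undefined) return 'anthropic';
  if (body?.usageMetadata?.promptTokenCount !== undefined) return 'gemini';
  return 'unknown';
}

// Resolve a concrete model name (e.g. a dated snapshot) to the longest
// known base name it starts with, so gpt-4o-2024-11-20 prices as gpt-4o.
function resolveModel(model: string): string | undefined {
  if (model in PRICES) return model;
  const candidates = Object.keys(PRICES)
    .filter((base) => model.startsWith(base))
    .sort((a, b) => b.length - a.length); // prefer the most specific match
  return candidates[0];
}
```

The longest-prefix rule matters: `gpt-4o-mini-2024-07-18` must resolve to `gpt-4o-mini`, not `gpt-4o`.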
📦 Installation
npm install llm-cost-tracker
🚀 Quick Start
import { CostTracker } from 'llm-cost-tracker';
const tracker = new CostTracker();
// Track a request
tracker.track('openai', 'gpt-4o', {
inputTokens: 1000,
outputTokens: 500,
});
// Get cost summary
const summary = await tracker.summary();
console.log(`Total cost: $${summary.totalCost}`);
// Total cost: $0.0075
📖 Usage
Method 1: Manual Tracking
When you already have token counts from the API response:
const record = tracker.track('openai', 'gpt-4o', {
inputTokens: 1000,
outputTokens: 500,
});
console.log(record.totalCost); // 0.0075
console.log(record.inputCost); // 0.0025
console.log(record.outputCost); // 0.005
Method 2: Parse Raw API Response
Pass the raw API response object and let the tracker extract tokens automatically:
// OpenAI
const response = await fetch('https://api.openai.com/v1/chat/completions', { ... });
const body = await response.json();
tracker.trackResponse('openai', body);
// Anthropic
tracker.trackResponse('anthropic', anthropicResponse);
// Auto-detect provider from response shape
tracker.trackResponse(anyResponse); // Works for all providers!
Method 3: Wrap Async Calls
Wrap your API call — tracking happens automatically:
const result = await tracker.wrap('openai', () =>
openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
})
);
// result is the normal API response — tracking happened in the background
🔌 Middleware
Fetch Wrapper
Intercept all LLM API calls automatically:
import { CostTracker, createTrackedFetch } from 'llm-cost-tracker';
const tracker = new CostTracker();
const trackedFetch = createTrackedFetch(tracker);
// Use as drop-in replacement for fetch
const response = await trackedFetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: { 'Authorization': `Bearer ${apiKey}` },
body: JSON.stringify({ model: 'gpt-4o', messages }),
});
Vercel AI SDK
import { wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { CostTracker } from 'llm-cost-tracker';
import { costTrackerMiddleware } from 'llm-cost-tracker/middleware/vercel-ai';
const tracker = new CostTracker();
const model = wrapLanguageModel({
model: openai('gpt-4o'),
middleware: costTrackerMiddleware(tracker),
});
// All calls through this model are now tracked
📊 Querying & Reports
Summary
const summary = await tracker.summary();
console.log(summary.totalCost); // Total USD spent
console.log(summary.requestCount); // Number of API calls
console.log(summary.avgCostPerRequest); // Average cost per call
console.log(summary.byProvider); // Breakdown by provider
console.log(summary.byModel); // Breakdown by model
Filtered Queries
import { startOfToday, startOfMonth } from 'llm-cost-tracker';
// Today's costs
const today = await tracker.summary({ from: startOfToday() });
// This month's OpenAI costs
const openai = await tracker.summary({
provider: 'openai',
from: startOfMonth(),
});
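If you need equivalents outside the library, `startOfToday` and `startOfMonth` can be approximated in a few lines (a sketch; the library's own helpers may differ in time-zone handling):

```typescript
// Local-time start of the current day (midnight).
function startOfToday(): Date {
  const now = new Date();
  return new Date(now.getFullYear(), now.getMonth(), now.getDate());
}

// Local-time start of the current month (the 1st at midnight).
function startOfMonth(): Date {
  const now = new Date();
  return new Date(now.getFullYear(), now.getMonth(), 1);
}
```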
// Last 10 records
const recent = await tracker.records({ limit: 10 });
Export
// Export as JSON
const json = await tracker.export('json');
// Export as CSV
const csv = await tracker.export('csv');
💾 Storage Options
In-Memory (Default)
const tracker = new CostTracker(); // Uses memory by default
// Or cap the number of records kept in memory:
const cappedTracker = new CostTracker({ store: 'memory', maxRecords: 50000 });
JSON File
const tracker = new CostTracker({
store: 'json-file',
storePath: './costs/llm-usage.jsonl',
});
Custom Adapter
const tracker = new CostTracker({
store: {
save: async (record) => { await myDB.insert(record); },
getAll: async (filter) => { return await myDB.query(filter); },
clear: async () => { await myDB.truncate(); },
},
});
💲 Pricing
Built-in Models (35+)
| Provider | Models |
|:---|:---|
| OpenAI | gpt-4o, gpt-4o-mini, o3, o3-mini, o1, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | claude-opus-4, claude-sonnet-4, claude-haiku-4, claude-3.5-sonnet |
| Google Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite |
| Mistral | mistral-large, mistral-medium, mistral-small |
| Groq | llama-3.1-8b/70b/405b, mixtral-8x7b, gemma-7b |
| DeepSeek | deepseek-v4-flash, deepseek-v4-pro, deepseek-chat |
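Given per-million-token rates, the cost math is straightforward. A sketch (the rates below are the gpt-4o values implied by the Quick Start example, $2.50 in / $10.00 out per 1M tokens):

```typescript
// Cost = tokens / 1,000,000 * ratePer1M, summed over input and output.
function computeCost(
  inputTokens: number,
  outputTokens: number,
  inputPer1M: number,
  outputPer1M: number,
): { inputCost: number; outputCost: number; totalCost: number } {
  const inputCost = (inputTokens / 1_000_000) * inputPer1M;
  const outputCost = (outputTokens / 1_000_000) * outputPer1M;
  return { inputCost, outputCost, totalCost: inputCost + outputCost };
}

// Matches the Quick Start example: 1000 input + 500 output on gpt-4o.
const { totalCost } = computeCost(1000, 500, 2.5, 10);
// totalCost ≈ 0.0075
```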
Custom Pricing
const tracker = new CostTracker({
customPricing: [
{ provider: 'openai', model: 'gpt-5', inputPer1M: 5.00, outputPer1M: 15.00 },
{ provider: 'my-provider', model: 'my-model', inputPer1M: 0.01, outputPer1M: 0.02 },
],
});
// Or add at runtime
tracker.addPricing([
{ provider: 'openai', model: 'gpt-5-turbo', inputPer1M: 3.00, outputPer1M: 10.00 },
]);
🔔 Usage Callbacks
const tracker = new CostTracker({
onUsage: (record) => {
console.log(`💰 ${record.provider}/${record.model}: $${record.totalCost}`);
// Send to your monitoring system, Slack webhook, etc.
},
});
📐 API Reference
CostTracker
| Method | Description |
|:---|:---|
| track(provider, model, usage, metadata?, latencyMs?) | Track with known token counts |
| trackResponse(provider, response, metadata?, latencyMs?) | Parse raw API response |
| trackResponse(response) | Auto-detect provider and parse |
| wrap(provider, fn, metadata?) | Wrap async call, auto-track |
| summary(filter?) | Get aggregated cost summary |
| records(filter?) | Get raw usage records |
| lastRecord() | Get the last tracked record |
| export('json' \| 'csv', filter?) | Export records |
| reset() | Clear all records |
| addPricing(entries) | Add custom pricing at runtime |
| getPrice(provider, model) | Look up model pricing |
QueryFilter
| Field | Type | Description |
|:---|:---|:---|
| provider | string | Filter by provider |
| model | string | Filter by model |
| from | Date | Records after this date |
| to | Date | Records before this date |
| limit | number | Max records to return |
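All supplied fields combine with AND semantics. Conceptually, the records are filtered like this (a sketch against a plain array; the `UsageRecord` shape here is illustrative, not the library's internal type):

```typescript
interface UsageRecord {
  provider: string;
  model: string;
  timestamp: Date;
  totalCost: number;
}

interface QueryFilter {
  provider?: string;
  model?: string;
  from?: Date;
  to?: Date;
  limit?: number;
}

// Apply every supplied field; unset fields match everything.
// limit keeps the most recent N records, matching records({ limit: 10 }).
function applyFilter(records: UsageRecord[], f: QueryFilter): UsageRecord[] {
  const out = records.filter(
    (r) =>
      (f.provider === undefined || r.provider === f.provider) &&
      (f.model === undefined || r.model === f.model) &&
      (f.from === undefined || r.timestamp.getTime() >= f.from.getTime()) &&
      (f.to === undefined || r.timestamp.getTime() <= f.to.getTime()),
  );
  return f.limit === undefined ? out : out.slice(-f.limit);
}
```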
🤝 Contributing
Contributions are welcome! Please open an issue or submit a pull request.
