LLM Cost Tracker
A professional TypeScript library for tracking and managing costs across multiple LLM providers (OpenAI, Anthropic, Google, etc.). Monitor your AI spending, set budgets, and receive alerts when thresholds are reached.
Features
- 🎯 Multi-Provider Support - Track costs across OpenAI, Anthropic, Google, and custom providers
- 💰 Budget Management - Set spending limits with customizable threshold alerts
- 📊 Detailed Analytics - Get comprehensive cost summaries by provider, model, and time period
- 🔔 Real-time Alerts - Receive callbacks when budget thresholds are reached or exceeded
- 🚀 REST API Server - Optional HTTP server for remote cost tracking
- 💾 Flexible Storage - In-memory storage included, easily extend with your own
- 🎨 Custom Pricing - Override default pricing for custom models or special rates
- 📦 Zero Dependencies - Lightweight, with no production dependencies
- 🔒 Type-Safe - Full TypeScript support with comprehensive type definitions
Installation
npm install llm-cost-tracker
Quick Start
Basic Usage
import {
CostTracker,
InMemoryStorage,
PriceRegistry,
BudgetManager,
} from "llm-cost-tracker";
// Initialize components
const storage = new InMemoryStorage();
const priceRegistry = new PriceRegistry();
const budgetManager = new BudgetManager(storage, {
limitUSD: 100.0,
thresholdPercent: 80,
});
// Create tracker
const tracker = new CostTracker(storage, priceRegistry, budgetManager);
// Track a cost
const event = await tracker.track({
provider: "openai",
model: "gpt-4",
inputTokens: 1000,
outputTokens: 500,
});
console.log(`Cost: $${event.totalCostUSD}`);
// Get summary
const summary = await tracker.getSummary();
console.log(`Total spent: $${summary.totalCostUSD}`);
console.log(`Total events: ${summary.eventCount}`);
With Budget Alerts
import { BudgetManager } from "llm-cost-tracker";
const budgetManager = new BudgetManager(storage, {
limitUSD: 100.0,
thresholdPercent: 80,
});
// Set up alert callbacks
budgetManager.onThreshold((status) => {
console.log(`⚠️ Warning: ${status.percentUsed}% of budget used!`);
console.log(`Spent: $${status.usedUSD} / $${status.limitUSD}`);
});
budgetManager.onExceeded((status) => {
console.log(`🚨 Budget exceeded! Spent: $${status.usedUSD}`);
// Take action: disable API calls, send notifications, etc.
});
Using the MCP Server
import { MCPServer } from "llm-cost-tracker/server";
const server = new MCPServer({
port: 3000,
host: "localhost",
costTracker: tracker,
budgetManager: budgetManager,
});
await server.start();
console.log("MCP Server running on http://localhost:3000");API Endpoints
Track Cost
POST /cost.track
Content-Type: application/json
{
"provider": "openai",
"model": "gpt-4",
"inputTokens": 1000,
"outputTokens": 500
}
Get Summary
GET /cost.summary
Get Budget Status
GET /budget.status
Set Budget
POST /budget.set
Content-Type: application/json
{
"limitUSD": 100.0,
"thresholdPercent": 80
}
Reset Data
POST /cost.reset
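A minimal client sketch using fetch (Node 18+ or the browser). The endpoint paths and request fields mirror the examples above; the response shapes are an assumption based on the types documented later in this README.

```typescript
// Report a call to the running server, then read the summary and budget status.
// Assumes the server from "Using the MCP Server" is listening on localhost:3000.
const BASE_URL = "http://localhost:3000";

async function reportUsage() {
  // Track a completed LLM call
  const trackRes = await fetch(`${BASE_URL}/cost.track`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      provider: "openai",
      model: "gpt-4",
      inputTokens: 1000,
      outputTokens: 500,
    }),
  });
  const event = await trackRes.json();
  console.log("Tracked event:", event);

  // Read the aggregated summary and current budget status
  const summary = await (await fetch(`${BASE_URL}/cost.summary`)).json();
  const budget = await (await fetch(`${BASE_URL}/budget.status`)).json();
  console.log("Total spent so far:", summary.totalCostUSD);
  console.log("Budget used:", budget.percentUsed, "%");
}

reportUsage().catch(console.error);
```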
Supported Providers & Models
OpenAI
- gpt-4 - $0.03/1K input, $0.06/1K output
- gpt-4-turbo - $0.01/1K input, $0.03/1K output
- gpt-3.5-turbo - $0.0005/1K input, $0.0015/1K output
Anthropic
- claude-3-opus - $0.015/1K input, $0.075/1K output
- claude-3-sonnet - $0.003/1K input, $0.015/1K output
- claude-3-haiku - $0.00025/1K input, $0.00125/1K output
Google
- gemini-pro - $0.00025/1K input, $0.0005/1K output
- gemini-pro-vision - $0.00025/1K input, $0.0005/1K output
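As a quick sanity check of how these per-1K rates turn into a charge, here is the arithmetic for the Quick Start request (1,000 input and 500 output tokens on gpt-4), reusing the tracker from Quick Start and simulateCost (documented under Cost Simulation below) so nothing is persisted. The per-thousand formula shown in the comments is an assumption consistent with the PriceRegistry API described later.

```typescript
// gpt-4: $0.03 per 1K input tokens, $0.06 per 1K output tokens
// input:  (1000 / 1000) * 0.03 = $0.03
// output: ( 500 / 1000) * 0.06 = $0.03
// total:                          $0.06
const estimate = tracker.simulateCost({
  provider: "openai",
  model: "gpt-4",
  inputTokens: 1000,
  outputTokens: 500,
});
console.log(estimate.totalCostUSD); // 0.06
```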
Advanced Usage
Custom Pricing
import { PriceRegistry } from "llm-cost-tracker";
const priceRegistry = new PriceRegistry();
// Add custom pricing for a model
priceRegistry.register("openai", "gpt-4-custom", {
inputPricePerThousand: 0.025,
outputPricePerThousand: 0.05,
});
// Use it
const event = await tracker.track({
provider: "openai",
model: "gpt-4-custom",
inputTokens: 1000,
outputTokens: 500,
});
Namespaces for Multi-Tenant Applications
// Track costs for different users/projects
await tracker.track({
provider: "openai",
model: "gpt-4",
inputTokens: 1000,
outputTokens: 500,
namespace: "user-123",
});
// Get summary for specific namespace
const userSummary = await tracker.getSummary("user-123");
Cost Simulation (No Persistence)
// Calculate cost without saving to storage
const simulatedEvent = tracker.simulateCost({
provider: "openai",
model: "gpt-4",
inputTokens: 1000,
outputTokens: 500,
});
console.log(`Estimated cost: $${simulatedEvent.totalCostUSD}`);
Custom Storage Provider
import { StorageProvider, CostEvent } from "llm-cost-tracker";
class DatabaseStorage implements StorageProvider {
async save(event: CostEvent): Promise<void> {
// Save to your database
}
async loadAll(namespace?: string): Promise<CostEvent[]> {
// Load events for the namespace from your database and return them
return [];
}
async reset(namespace?: string): Promise<void> {
// Clear your database
}
}
const storage = new DatabaseStorage();
const tracker = new CostTracker(storage, priceRegistry);
API Reference
CostTracker
track(request: TrackingRequest): Promise<CostEvent>
Track a new LLM API call and return the cost event.
Parameters:
- provider (string) - Provider name (e.g., 'openai', 'anthropic')
- model (string) - Model name (e.g., 'gpt-4', 'claude-3-opus')
- inputTokens (number) - Number of input tokens
- outputTokens (number) - Number of output tokens
- namespace (string, optional) - Namespace for multi-tenant tracking
Returns: CostEvent with calculated costs
simulateCost(request: TrackingRequest): CostEvent
Calculate cost without persisting to storage.
getSummary(namespace?: string): Promise<CostSummary>
Get aggregated cost summary with breakdowns by provider and model.
reset(namespace?: string): Promise<void>
Clear all tracking data.
BudgetManager
constructor(storage: StorageProvider, config: BudgetConfig)
Create a new budget manager.
Config Options:
- limitUSD (number) - Maximum spending limit in USD
- thresholdPercent (number) - Percentage threshold for warnings (0-100)
- resetInterval (string, optional) - 'daily', 'weekly', 'monthly', or 'manual'
- namespace (string, optional) - Namespace to track
onThreshold(callback: BudgetCallback): void
Set callback for threshold warnings.
onExceeded(callback: BudgetCallback): void
Set callback for limit exceeded.
checkBudget(): Promise<BudgetStatus>
Check current spending against budget and trigger callbacks.
getStatus(): Promise<BudgetStatus>
Get current budget status without triggering callbacks.
updateConfig(config: Partial<BudgetConfig>): void
Update budget configuration.
reset(): Promise<void>
Reset budget usage data.
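To illustrate the difference between checkBudget (evaluates spend and fires the callbacks registered above) and getStatus (read-only), here is a small sketch that reuses the budgetManager from Quick Start; the "daily" resetInterval value comes from the config options listed above.

```typescript
// Switch to a smaller daily budget window without recreating the manager
budgetManager.updateConfig({ limitUSD: 25, resetInterval: "daily" });

// After tracking some calls, evaluate the budget.
// checkBudget() triggers onThreshold/onExceeded callbacks when applicable...
const checked = await budgetManager.checkBudget();
if (checked.limitExceeded) {
  // e.g. stop issuing new API calls for the rest of the day
}

// ...while getStatus() only reads the current numbers.
const status = await budgetManager.getStatus();
console.log(
  `$${status.usedUSD} of $${status.limitUSD} used (${status.percentUsed}%),` +
    ` $${status.remainingUSD} remaining`
);
```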
PriceRegistry
register(provider: string, model: string, pricing: ModelPricing): void
Register custom pricing for a model.
getPricing(provider: string, model: string): ModelPricing
Get pricing for a specific provider and model.
calculateCost(tokens: number, pricePerThousand: number): number
Calculate cost for a given number of tokens.
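For example, assuming the per-thousand formula implied by the parameter name (cost = tokens / 1000 * pricePerThousand):

```typescript
import { PriceRegistry } from "llm-cost-tracker";

const registry = new PriceRegistry();

// 1,500 input tokens at $0.03 per 1K tokens:
// (1500 / 1000) * 0.03 = $0.045
const inputCost = registry.calculateCost(1500, 0.03);
console.log(inputCost); // 0.045
```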
Types
interface CostEvent {
id: string;
timestamp: Date;
provider: string;
model: string;
inputTokens: number;
outputTokens: number;
inputCostUSD: number;
outputCostUSD: number;
totalCostUSD: number;
namespace?: string;
}
interface CostSummary {
totalCostUSD: number;
totalInputTokens: number;
totalOutputTokens: number;
eventCount: number;
byProvider: Record<string, ProviderSummary>;
byModel: Record<string, ModelSummary>;
}
interface BudgetStatus {
limitUSD: number;
usedUSD: number;
percentUsed: number;
remainingUSD: number;
thresholdReached: boolean;
limitExceeded: boolean;
resetInterval: string;
nextResetDate?: Date;
}
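The byProvider and byModel maps in CostSummary make it straightforward to render a per-provider breakdown. A small sketch, reusing the tracker from Quick Start; note that ProviderSummary is not reproduced in this README, so the totalCostUSD field on it is assumed here for illustration.

```typescript
const summary = await tracker.getSummary();

console.log(`Overall: $${summary.totalCostUSD} across ${summary.eventCount} calls`);

// Per-provider breakdown (assumes ProviderSummary exposes totalCostUSD)
for (const [provider, providerSummary] of Object.entries(summary.byProvider)) {
  console.log(`${provider}: $${providerSummary.totalCostUSD}`);
}
```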
Examples
Check out the examples directory for complete working examples:
- library-usage.ts - Basic library usage
- budget-alerts.ts - Budget management with alerts
- custom-pricing.ts - Custom pricing configuration
- microservice.ts - MCP Server deployment
Testing
# Run all tests
npm test
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
Building
npm run build
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT © Dominique Houessou
Support
- 📧 Email: [email protected]
- 🐛 Issues: GitHub Issues
Changelog
1.0.0 (2024-11-20)
Initial release with:
- Multi-provider cost tracking (OpenAI, Anthropic, Google)
- Budget management with threshold alerts
- In-memory storage implementation
- REST API server (MCP Server)
- Custom pricing support
- Namespace support for multi-tenant applications
- Comprehensive test coverage (92%+)
- Full TypeScript support
