@tokentracker/ai-token-tracker
Lightweight SDK to track AI usage and cost in your SaaS product.
Track your AI usage with two lines of code: one at the top, one at the bottom.
Middleware adapters (one‑liner)
Wrap your SDK client once; all calls are automatically tracked.
OpenAI example:
import OpenAI from "openai";
import { trackOpenAIChat } from "@tokentracker/ai-token-tracker/client";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
// Wrap the OpenAI client with TokenTrackr’s middleware adapter
const client = trackOpenAIChat(openai);
// Send a simple prompt — TokenTrackr automatically tracks it
const response = await client.create({
model: "gpt-4o",
messages: [{ role: "user", content: "Hello" }],
});
console.log(response.choices[0].message.content);

Available adapters (same pattern):
- OpenAI: trackOpenAIChat(openai)
- Anthropic: trackAnthropicMessages(anthropic)
- Google Generative AI: trackGoogleGenerativeModel(model, { modelName })
- Mistral: trackMistralChat(mistral)
- Groq: trackGroqChat(groq)
- Azure OpenAI: trackAzureOpenAIChat(azureOpenAI)
- Cohere: trackCohereChat(cohere)
- AWS Bedrock: trackBedrockConverse(bedrock)
Images and audio adapters:
- OpenAI Images: trackOpenAIImages(openai).generate({ model, prompt })
- OpenAI Audio (TTS): trackOpenAIAudioSpeech(openai).create({ model, input })
- OpenAI Audio (Transcriptions): trackOpenAIAudioTranscriptions(openai).create({ model, file })
- OpenAI Audio (Translations): trackOpenAIAudioTranslations(openai).create({ model, file })
- Azure OpenAI Images: trackAzureOpenAIImages(azureOpenAI).generate({ model, prompt })
- Azure OpenAI Audio (TTS): trackAzureOpenAIAudioSpeech(azureOpenAI).create({ model, input })
- Azure OpenAI Audio (Transcriptions): trackAzureOpenAIAudioTranscriptions(azureOpenAI).create({ model, file })
- Azure OpenAI Audio (Translations): trackAzureOpenAIAudioTranslations(azureOpenAI).create({ model, file })
- Google Images: trackGoogleImages(model, { modelName }).generate({ contents })
- AWS Bedrock Images: trackBedrockImages(bedrock).generate({ modelId, messages })
- AWS Bedrock Audio (Transcriptions): trackBedrockAudioTranscriptions(bedrock).create({ modelId, messages })
- AWS Bedrock Audio (Translations): trackBedrockAudioTranslations(bedrock).create({ modelId, messages })
- AWS Bedrock Audio (TTS): trackBedrockAudioSpeech(bedrock).create({ modelId, messages })
One‑liner wrappers (wrap-and-forget)
Drop-in helpers that automatically track requests and errors for popular providers. Replace your client once; all calls are tracked.
OpenAI:
import OpenAI from 'openai';
import { trackOpenAIChat } from '@tokentracker/ai-token-tracker/client';
const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
const client = trackOpenAIChat(openai);
const res = await client.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Tell me a joke' }],
});

Anthropic:
import Anthropic from '@anthropic-ai/sdk';
import { trackAnthropicMessages } from '@tokentracker/ai-token-tracker/client';
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_KEY });
const client = trackAnthropicMessages(anthropic);
const res = await client.create({ model: 'claude-3-5-sonnet', messages: [{ role: 'user', content: 'Summarize this' }] });

Google Generative AI:
import { GoogleGenerativeAI } from '@google/generative-ai';
import { trackGoogleGenerativeModel } from '@tokentracker/ai-token-tracker/client';
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro' });
const wrapped = trackGoogleGenerativeModel(model, { modelName: 'gemini-1.5-pro' });
const res = await wrapped.generateContent({ contents: [{ role: 'user', parts: [{ text: 'Write a poem' }] }] });

Mistral / Groq / Azure OpenAI (OpenAI-style):
import { trackMistralChat, trackGroqChat, trackAzureOpenAIChat } from '@tokentracker/ai-token-tracker/client';
// const client = trackMistralChat(mistral) | trackGroqChat(groq) | trackAzureOpenAIChat(azureOpenAI);

Cohere:
import { trackCohereChat } from '@tokentracker/ai-token-tracker/client';
// const client = trackCohereChat(cohere);
// await client.chat({ model: 'command-r', messages: [{ role: 'user', content: '...' }] });

AWS Bedrock (converse):
import { trackBedrockConverse } from '@tokentracker/ai-token-tracker/client';
// const client = trackBedrockConverse(bedrock);
// await client.converse({ modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0', messages: [...] });

Images and audio examples
OpenAI Images:
import OpenAI from 'openai';
import { trackOpenAIImages } from '@tokentracker/ai-token-tracker/client';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const images = trackOpenAIImages(openai);
const res = await images.generate({ model: 'gpt-image-1', prompt: 'A cat in space' });

OpenAI Audio (TTS):
import { trackOpenAIAudioSpeech } from '@tokentracker/ai-token-tracker/client';
const audio = trackOpenAIAudioSpeech(openai);
const res = await audio.create({ model: 'gpt-4o-mini-tts', input: 'Hello' });

OpenAI Audio (Transcriptions):
import { trackOpenAIAudioTranscriptions } from '@tokentracker/ai-token-tracker/client';
const transcriptions = trackOpenAIAudioTranscriptions(openai);
const res = await transcriptions.create({ model: 'whisper-1', file });

Azure OpenAI Images:
import { trackAzureOpenAIImages } from '@tokentracker/ai-token-tracker/client';
const aimg = trackAzureOpenAIImages(azureOpenAI);
const res = await aimg.generate({ model: process.env.AZURE_OPENAI_IMAGE_DEPLOYMENT!, prompt: 'A cat in space' });

Google Images (via generateContent):
import { GoogleGenerativeAI } from '@google/generative-ai';
import { trackGoogleImages } from '@tokentracker/ai-token-tracker/client';
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-pro' });
const wrapped = trackGoogleImages(model, { modelName: 'gemini-1.5-pro' });
const res = await wrapped.generate({ contents: [{ role: 'user', parts: [{ text: 'Generate an image of a cat in space' }] }] });

AWS Bedrock Images:
import { trackBedrockImages } from '@tokentracker/ai-token-tracker/client';
const images = trackBedrockImages(bedrock);
const res = await images.generate({ modelId: process.env.BEDROCK_MODEL_ID!, messages: [{ role: 'user', content: 'Create an image of a cat in space' }] });

AWS Bedrock Audio (Transcriptions):
import { trackBedrockAudioTranscriptions } from '@tokentracker/ai-token-tracker/client';
const trans = trackBedrockAudioTranscriptions(bedrock);
const res = await trans.create({ modelId: process.env.BEDROCK_MODEL_ID!, messages: [{ role: 'user', content: 'Transcribe attached audio' }] });

Quickstart (copy/paste)
- Install
npm install @tokentracker/ai-token-tracker

- Configure environment (real values)
AI_TRACKER_ENDPOINT=https://tokentracker-7tu3.onrender.com/track
AI_TRACKER_API_KEY=<YOUR_SERVER_TOKEN>

If using dotenv, load it early in your app entry:
import 'dotenv/config';

- Add two lines around your AI call
import { beginTrack } from '@tokentracker/ai-token-tracker/client';
// TOP: REQUIRED — capture provider, model, endpoint, and your real prompt
const done = beginTrack({
provider: 'your-provider', // REQUIRED e.g. 'openai', 'anthropic', 'google', 'mistral', ...
model: 'your-model', // REQUIRED
endpoint: 'chat.completions', // REQUIRED
prompt, // REQUIRED string OR [{ role, content }]
});
// ... your existing AI API call ...
// Example (OpenAI-style):
const res = await client.chat.completions.create({ model: 'gpt-4o', messages });
// BOTTOM: REQUIRED — pass what you want tracked (placeholders from response)
await done({
http_status: resStatus /* e.g., transport status from your SDK */, // REQUIRED
input_tokens: res?.usage?.prompt_tokens, // REQUIRED (or provide total_tokens)
output_tokens: res?.usage?.completion_tokens, // REQUIRED (or provide total_tokens)
total_tokens: res?.usage?.total_tokens, // REQUIRED if you didn't set both input/output
response: res, // REQUIRED
// Extras you can also set:
// retry_count, response_size_bytes, latency_first_token_ms,
// temperature, max_tokens, error_type, error_message_snippet, etc.
});

That’s it. The SDK posts JSON to AI_TRACKER_ENDPOINT with your data. Provide as much as you have; missing fields are allowed. The SDK stamps timestamps and latency automatically and infers success from http_status when provided.
Note: If you do not configure an endpoint, the SDK defaults to https://tokentracker-7tu3.onrender.com/track.
Recommended fields
- Top: provider, model, endpoint (recommended; stored as-is)
- Prompt: prompt (string or messages array), optional
- Bottom: http_status (optional), input_tokens/output_tokens and/or total_tokens (optional), response (optional)
- Extras (all optional): retry_count, response_size_bytes, latency_first_token_ms, temperature, max_tokens, error_type, error_message_snippet, etc.
What the SDK adds automatically (no other inference):
- timestamp_start (when beginTrack runs)
- timestamp_end (when done runs)
- latency_ms (end - start)
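To make the begin/done pattern concrete, here is a minimal sketch of how such a pair could work internally: a closure captures the start timestamp, and the returned done function stamps the end, computes latency, and infers success from http_status. This is an illustrative sketch, not the package's actual source; sketchBeginTrack is a hypothetical name.

```typescript
// Hypothetical sketch of the begin/done closure pattern described above.
type DoneFields = { http_status?: number; total_tokens?: number; response?: unknown };

function sketchBeginTrack(meta: { provider: string; model: string; endpoint: string }) {
  const timestamp_start = Date.now(); // stamped automatically at the top
  return function done(fields: DoneFields) {
    const timestamp_end = Date.now(); // stamped automatically at the bottom
    const latency_ms = timestamp_end - timestamp_start;
    // success inferred from http_status only when it is provided
    const success =
      fields.http_status !== undefined
        ? fields.http_status >= 200 && fields.http_status < 300
        : undefined;
    return { ...meta, ...fields, timestamp_start, timestamp_end, latency_ms, success };
  };
}

const done = sketchBeginTrack({ provider: 'openai', model: 'gpt-4o', endpoint: 'chat.completions' });
const event = done({ http_status: 200, total_tokens: 42 });
console.log(event.success, event.latency_ms >= 0);
```

The closure is what lets the SDK compute latency_ms without you passing timestamps yourself.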
Works with any provider
beginTrack({ provider: 'openai', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'anthropic', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'google', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'mistral', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'groq', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'cohere', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'azure-openai', model: '<model>', endpoint: '<endpoint>', prompt })
beginTrack({ provider: 'aws-bedrock', model: '<model>', endpoint: '<endpoint>', prompt })

Minimal examples
- String prompt (OpenAI-style placeholders)
const done = beginTrack({ provider: 'openai', prompt: 'Write a haiku about the ocean' });
const res = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Write a haiku about the ocean' }],
});
await done({
http_status: resStatus, // REQUIRED
input_tokens: res?.usage?.prompt_tokens,
output_tokens: res?.usage?.completion_tokens,
total_tokens: res?.usage?.total_tokens, // REQUIRED if you didn't set both input/output
response: res, // REQUIRED
});

- Generic fetch (any provider)
const done = beginTrack({ provider: 'any-provider', model: 'your-model', endpoint: '/v1/whatever', prompt });
const resp = await fetch('https://provider.example.com/v1/whatever', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ model: 'your-model', prompt }),
});
const http_status = resp.status;
const json = await resp.json();
await done({
http_status, // REQUIRED
// usage fields: REQUIRED (set input+output, or set total_tokens)
input_tokens: json?.usage?.input_tokens,
output_tokens: json?.usage?.output_tokens,
total_tokens: json?.usage?.total_tokens,
response: json, // REQUIRED
});

- Messages prompt (Anthropic-style placeholders)
const messages = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Summarize the following text: ...' },
];
const done = beginTrack({ provider: 'anthropic', model: 'claude-3-5-sonnet', endpoint: 'messages.create', prompt: messages });
const res = await anthropic.messages.create({ model: 'claude-3-5-sonnet', messages });
await done({
http_status: resStatus, // REQUIRED
input_tokens: res?.usage?.input_tokens,
output_tokens: res?.usage?.output_tokens,
response: res, // REQUIRED
});

Payload fields (overview)
You must explicitly provide every field you want tracked. The event supports:
- Identifiers and timing: request_id?, timestamp_start, timestamp_end?, latency_ms?
- Provider info: provider, model?, endpoint?
- Token usage: input_tokens?, output_tokens?, total_tokens?
- HTTP/result: http_status?, success?, retry_count?
- Errors: error_type?, error_message_snippet?
- Sizes/latency: response_size_bytes?, latency_first_token_ms?
- Content: prompt? (string or messages), response? (any), temperature?, max_tokens?
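The field list above can be read as a TypeScript shape. The sketch below is illustrative only, built from the names in this list; the package's actual exported types may differ.

```typescript
// Illustrative event shape assembled from the field list above.
type Role = 'system' | 'user' | 'assistant';

interface TrackedEvent {
  // Identifiers and timing
  request_id?: string;
  timestamp_start: number;
  timestamp_end?: number;
  latency_ms?: number;
  // Provider info
  provider: string;
  model?: string;
  endpoint?: string;
  // Token usage
  input_tokens?: number;
  output_tokens?: number;
  total_tokens?: number;
  // HTTP/result
  http_status?: number;
  success?: boolean;
  retry_count?: number;
  // Errors
  error_type?: string;
  error_message_snippet?: string;
  // Sizes/latency
  response_size_bytes?: number;
  latency_first_token_ms?: number;
  // Content
  prompt?: string | { role: Role; content: string }[];
  response?: unknown;
  temperature?: number;
  max_tokens?: number;
}

const example: TrackedEvent = {
  provider: 'openai',
  model: 'gpt-4o',
  endpoint: 'chat.completions',
  timestamp_start: Date.now(),
  input_tokens: 12,
  output_tokens: 30,
  total_tokens: 42,
};
console.log(example.total_tokens);
```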
Optional: manual configuration
import { configureTracker } from '@tokentracker/ai-token-tracker/client';
configureTracker({ endpoint: 'https://tokentracker-7tu3.onrender.com/track', apiKey: '<YOUR_API_KEY>' });

Requirements
- Node >= 18.17 (uses global fetch)
Module usage (ESM and CommonJS)
Use the client entry for convenience. Both ESM and CommonJS are supported.
- ESM (Node ESM / bundlers):
import { beginTrack, configureTracker } from '@tokentracker/ai-token-tracker/client';

- CommonJS (require):
const { beginTrack, configureTracker } = require('@tokentracker/ai-token-tracker/client');

You can also import from the root package if you prefer:
- ESM:
import { beginTrack, configureTracker } from '@tokentracker/ai-token-tracker';

- CommonJS:
const { beginTrack, configureTracker } = require('@tokentracker/ai-token-tracker');

Security note
If you need to redact sensitive content, scrub it before passing prompt/response into beginTrack/done.
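One way to do that scrubbing, sketched with hypothetical patterns (email addresses and bearer tokens); the redact helper and its regexes are illustrative only, so adapt them to whatever is sensitive in your data.

```typescript
// Hypothetical redaction helper: masks email addresses and bearer tokens
// before text is handed to beginTrack/done. Patterns are illustrative.
function redact(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[redacted-email]')
    .replace(/Bearer\s+[\w.-]+/g, 'Bearer [redacted]');
}

const safePrompt = redact('Contact alice@example.com, auth: Bearer abc123.token');
console.log(safePrompt);
```

Pass the scrubbed string (or scrubbed messages array) as prompt, and scrub the response object the same way before passing it to done.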
Server configuration (env vars)
Set these on your Render service (server is not published to npm):
- SUPABASE_URL: Your Supabase project URL
- SUPABASE_SERVICE_ROLE_KEY: Supabase service role key
- SUPABASE_API_KEYS_TABLE: Table storing API keys (default: api_keys)
- SUPABASE_API_KEYS_TOKEN_COLUMN: Column holding the key/token (default: key)
- SUPABASE_API_KEYS_USER_COLUMN: Column with the user id (default: user_id)
- SUPABASE_API_KEYS_REVOKED_COLUMN: Column indicating revocation (default: revoked, server enforces revoked=false)
- SUPABASE_AI_EVENTS_TABLE: Destination table for events (default: ai_events)
- MAX_BODY_BYTES: Max request body size in bytes (default: 2097152)
- PORT: Server port (default: 8080)
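For illustration, the variables above could look like this in an env file (all values are placeholders; supply your own project's settings):

```shell
# Placeholder values only; use your own Supabase project settings
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=<service-role-key>
SUPABASE_API_KEYS_TABLE=api_keys
SUPABASE_AI_EVENTS_TABLE=ai_events
MAX_BODY_BYTES=2097152
PORT=8080
```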
License
- SDK (src/ and the published npm package @tokentracker/ai-token-tracker): MIT. See LICENSE.
- Server (server/): Proprietary. See server/LICENSE. The server code is not open-source and may not be copied, modified, or redistributed without a commercial license.
For commercial use of the server as part of the ai-token-tracker service, contact the copyright holder for licensing terms.
