# @llmetrics/sdk
Official JavaScript / TypeScript SDK for LLMetrics — lightweight LLM cost and performance tracking.
## Installation

```bash
npm install @llmetrics/sdk
# or
yarn add @llmetrics/sdk
# or
pnpm add @llmetrics/sdk
```

## Quick start
```ts
import { llmetrics } from '@llmetrics/sdk';

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!,
});

// After any LLM call — fire and forget, batched automatically
// (`openai` is your client; see the Examples section below for full setup)
const response = await openai.chat.completions.create({ ... });

llmetrics.track({
  feature: 'lesson-generation',
  provider: 'openai',
  model: 'gpt-4o-mini',
  inputTokens: response.usage.prompt_tokens,
  outputTokens: response.usage.completion_tokens,
});
```

## Init options
| Option | Type | Default | Description |
|---|---|---|---|
| apiKey | string | — | Your LLMetrics API key (from the API Keys page in your dashboard) |
| flushIntervalMs | number | 1500 | How often the event queue flushes, in milliseconds |
| maxQueueSize | number | 50 | Max events to buffer before forcing a flush |
| timeoutMs | number | 2000 | Request timeout for each flush, in milliseconds |
| debug | boolean | false | Log flush errors to the console |
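For reference, a fully spelled-out `init` call using the defaults from the table above might look like this:

```ts
import { llmetrics } from '@llmetrics/sdk';

llmetrics.init({
  apiKey: process.env.LLMETRICS_API_KEY!,
  flushIntervalMs: 1500, // flush the queue every 1.5 s
  maxQueueSize: 50,      // ...or as soon as 50 events are buffered
  timeoutMs: 2000,       // abort a flush request after 2 s
  debug: false,          // set to true to log flush errors to the console
});
```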
## Tracking events

### `llmetrics.track(event)` — fire and forget
Adds the event to an internal queue that flushes automatically. Never throws. Ideal for production use.
```ts
llmetrics.track({
  feature: 'chat',            // your feature name — groups events in the dashboard
  provider: 'openai',         // 'openai' | 'anthropic'
  model: 'gpt-4o-mini',
  inputTokens: 512,
  outputTokens: 128,
  userId: 'user_abc123',      // optional — track per-user costs
  meta: { promptVersion: 2 }, // optional — any extra data
});
```

### `llmetrics.trackAsync(event)` — awaitable
Sends immediately without queuing. Throws on failure. Useful in serverless functions that may not stay alive long enough for the queue to flush.
```ts
await llmetrics.trackAsync({
  feature: 'summarize',
  provider: 'anthropic',
  model: 'claude-haiku-4-5',
  inputTokens: response.usage.input_tokens,
  outputTokens: response.usage.output_tokens,
});
```
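Because `trackAsync` throws on failure, you may want to guard it so a tracking error never fails the request itself. A minimal sketch:

```ts
try {
  await llmetrics.trackAsync({
    feature: 'summarize',
    provider: 'anthropic',
    model: 'claude-haiku-4-5',
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
  });
} catch (err) {
  // Tracking is best-effort: log and move on rather than surfacing the error.
  console.warn('llmetrics: trackAsync failed', err);
}
```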
### `llmetrics.flush()` — manual flush

Forces the queue to send immediately. Call this before your process exits.
```ts
await llmetrics.flush();
```
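For example, in a long-running Node.js process you might flush from a shutdown handler (a sketch; adapt the signal handling to your runtime):

```ts
process.on('SIGTERM', async () => {
  await llmetrics.flush(); // drain any queued events before exiting
  process.exit(0);
});
```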
## Event fields

| Field | Type | Required | Description |
|---|---|---|---|
| feature | string | ✓ | Logical feature name (e.g. "chat", "summarize") |
| provider | string | ✓ | LLM provider: "openai" or "anthropic" |
| model | string | ✓ | Model ID (e.g. "gpt-4o-mini", "claude-haiku-4-5") |
| inputTokens | number | ✓ | Prompt token count |
| outputTokens | number | ✓ | Completion token count |
| userId | string | — | Your app's user identifier |
| ts | number | — | Unix timestamp in ms (defaults to Date.now()) |
| meta | object | — | Any extra metadata to store with the event |
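Taken together, the fields map to roughly the following TypeScript shape (a sketch; the SDK's actual exported type name may differ):

```ts
interface LLMetricsEvent {
  feature: string;                  // logical feature name, e.g. "chat"
  provider: 'openai' | 'anthropic'; // LLM provider
  model: string;                    // model ID, e.g. "gpt-4o-mini"
  inputTokens: number;              // prompt token count
  outputTokens: number;             // completion token count
  userId?: string;                  // your app's user identifier
  ts?: number;                      // Unix timestamp in ms, defaults to Date.now()
  meta?: Record<string, unknown>;   // any extra metadata
}
```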
## Examples

### OpenAI
```ts
import OpenAI from 'openai';
import { llmetrics } from '@llmetrics/sdk';

const openai = new OpenAI();

// `prompt` and `session` come from your application code
const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: prompt }],
});

llmetrics.track({
  feature: 'chat',
  provider: 'openai',
  model: response.model,
  inputTokens: response.usage.prompt_tokens,
  outputTokens: response.usage.completion_tokens,
  userId: session.userId,
});
```

### Anthropic
```ts
import Anthropic from '@anthropic-ai/sdk';
import { llmetrics } from '@llmetrics/sdk';

const anthropic = new Anthropic();

// `prompt` comes from your application code
const response = await anthropic.messages.create({
  model: 'claude-haiku-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: prompt }],
});

llmetrics.track({
  feature: 'summarize',
  provider: 'anthropic',
  model: response.model,
  inputTokens: response.usage.input_tokens,
  outputTokens: response.usage.output_tokens,
});
```

## License
MIT
