
@pippsza/usage-tracker

v1.1.1 · Published · 633 downloads

AI token usage tracking SDK with automatic cost calculation. Collects data into a central MongoDB for UsageHub dashboard.

Readme

@pippsza/usage-tracker

AI usage tracking SDK with automatic cost calculation for LLMs, image/video/music generation, transcription, and embeddings.

Collects data into a central MongoDB database for the usage dashboard.

Supported billing models

| Type | Use case | Examples |
|------|----------|----------|
| per_token | LLM, embedding, image gen (Gemini/Nano Banana) | GPT-4.1, Claude, text-embedding-3-small |
| per_minute | Transcription | Whisper, Scribe v2 |
| per_character | TTS | ElevenLabs Multilingual v2 |
| per_unit | Credits, generations, images | Kie.ai (Veo, Sora, Suno, GPT-Image) |

Installation

pnpm add @pippsza/usage-tracker

Peer dependency: mongoose >= 8.0.0 (install separately if not already in your project).


Quick start

1. Add env variable

USAGE_DATABASE_URI=mongodb://user:pass@host:27017/api_tokens_usage

The URI must point to the same database as the core app (shared usage DB).

2. Create initialization file

// src/lib/tracked-ai.ts
import { createUsageTracker, createTrackedAI } from '@pippsza/usage-tracker'

export const usageTracker = createUsageTracker({
  projectId: 'my-project',
  environment: process.env.NODE_ENV as 'production' | 'development',
  project: {
    name: 'My Project',
  },
})

export const ai = createTrackedAI(usageTracker)
process.on('beforeExit', () => usageTracker.shutdown())

3. Wrap AI calls

See usage examples below for each call type.


Model naming convention

Models are stored in DB without provider prefix. The provider is a separate field.

How it works

The SDK auto-parses model names passed to wrapper functions:

| You pass | Stored model | Stored provider | Stored apiProvider |
|----------|--------------|-----------------|--------------------|
| 'openai/gpt-4.1' | gpt-4.1 | openai | openrouter |
| 'anthropic/claude-sonnet-4' | claude-sonnet-4 | anthropic | openrouter |
| 'google/gemini-2.5-flash' | gemini-2.5-flash | google | openrouter |
| 'gpt-4.1' (with provider: 'openai') | gpt-4.1 | openai | openai |
| 'scribe_v2' (with provider: 'elevenlabs') | scribe_v2 | elevenlabs | elevenlabs |

Rule: If model contains / — it's split into provider + clean name, and apiProvider defaults to 'openrouter'. If no / — provider comes from ctx.provider, and apiProvider defaults to the same provider.

You can always override apiProvider explicitly in the tracking context.
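The parsing rule above can be sketched as a small standalone function. This is an illustration of the documented rule, not the SDK's actual internals:

```typescript
// Sketch of the documented model-name parsing rule (illustrative, not the SDK's code).
function parseModel(model: string, ctxProvider?: string, ctxApiProvider?: string) {
  const slash = model.indexOf('/')
  if (slash !== -1) {
    // 'openai/gpt-4.1' -> provider='openai', model='gpt-4.1'; apiProvider defaults to 'openrouter'
    return {
      model: model.slice(slash + 1),
      provider: model.slice(0, slash),
      apiProvider: ctxApiProvider ?? 'openrouter',
    }
  }
  // No prefix: provider comes from the tracking context; apiProvider defaults to that provider
  return { model, provider: ctxProvider, apiProvider: ctxApiProvider ?? ctxProvider }
}

console.log(parseModel('openai/gpt-4.1'))
// { model: 'gpt-4.1', provider: 'openai', apiProvider: 'openrouter' }
console.log(parseModel('scribe_v2', 'elevenlabs'))
// { model: 'scribe_v2', provider: 'elevenlabs', apiProvider: 'elevenlabs' }
```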

Fields explained

  • provider — who created the model: openai, anthropic, google, deepseek, elevenlabs, kie
  • apiProvider — which API gateway was used: same as provider for direct calls, openrouter for OpenRouter calls
  • model — clean model name without prefix: gpt-4.1, claude-sonnet-4, gemini-2.5-flash

Pricing lookup

Pricing in modelPricing collection also uses clean model names (without prefix). The SDK pricing resolver has backward compatibility: if a model with prefix exists in DB, it will be found too.


Usage: LLM (generateObject / generateText)

Via OpenRouter

import { generateObject } from 'ai'
import { openrouter } from '@/lib/openrouter'
import { ai } from '@/lib/tracked-ai'

const result = await ai.generateObject(
  () => generateObject({ model: openrouter.chat('openai/gpt-4.1'), schema, prompt }),
  'openai/gpt-4.1',  // auto-parsed: model='gpt-4.1', provider='openai', apiProvider='openrouter'
  {
    userId: session.user.id,
    operationType: 'generation',
    feature: 'question-gen',
    user: { email: session.user.email, name: session.user.name },
  },
)

Via direct OpenAI API

import { generateObject } from 'ai'
import { openai } from '@ai-sdk/openai'
import { ai } from '@/lib/tracked-ai'

const result = await ai.generateObject(
  () => generateObject({ model: openai('gpt-4.1'), schema, prompt }),
  'gpt-4.1',  // no prefix — provider taken from ctx
  {
    userId: session.user.id,
    operationType: 'generation',
    provider: 'openai',  // required when model has no prefix
    user: { email: session.user.email, name: session.user.name },
  },
)

Usage: streamText

import { streamText } from 'ai'
import { ai } from '@/lib/tracked-ai'

const startTime = new Date()

const result = streamText({
  model: openrouter.chat('openai/gpt-4.1'),
  messages,
  onFinish: ai.onStreamFinish('openai/gpt-4.1', {
    userId: session.user.id,
    operationType: 'chat',
    user: { email: session.user.email, name: session.user.name },
  }, startTime),
})

Usage: Transcription (per-minute)

import { ai } from '@/lib/tracked-ai'

const result = await ai.transcribe(
  () => elevenlabs.transcribe({ file, model: 'scribe_v2' }),
  'scribe_v2',  // no prefix — provider from ctx
  durationSeconds,
  {
    userId: session.user.id,
    operationType: 'transcription',
    provider: 'elevenlabs',
    mediaType: 'audio',
  },
)

If the duration is only known from the API response (e.g. Vercel AI SDK experimental_transcribe), use usageTracker.record() directly:

import { experimental_transcribe as transcribe } from 'ai'
import { usageTracker } from '@/lib/tracked-ai'

const startTime = new Date()
const transcript = await transcribe({
  model: elevenlabs.transcription('scribe_v2'),
  audio: buffer,
})

usageTracker.record({
  userId: session.user.id,
  provider: 'elevenlabs',
  model: 'scribe_v2',
  unitType: 'minute',
  mediaType: 'audio',
  durationSeconds: transcript.durationInSeconds ?? 0,
  operationType: 'transcription',
  latencyMs: Date.now() - startTime.getTime(),
  status: 'success',
  requestedAt: startTime,
  completedAt: new Date(),
})

Usage: Embedding

import { embed } from 'ai'
import { ai } from '@/lib/tracked-ai'

const result = await ai.embed(
  () => embed({ model: openai.embedding('text-embedding-3-small'), value: text }),
  'text-embedding-3-small',
  {
    userId: session.user.id,
    operationType: 'embedding',
    provider: 'openai',
  },
)

Usage: Image generation (Nano Banana / Gemini)

Nano Banana (Gemini Image) uses per-token billing — same as LLM. Use generateObject wrapper:

import { ai } from '@/lib/tracked-ai'
import { google } from '@ai-sdk/google'

// generateImage stands in for your app's image-generation call
const result = await ai.generateObject(
  () => generateImage({ model: google('gemini-2.5-flash-image'), prompt }),
  'gemini-2.5-flash-image',
  {
    userId: session.user.id,
    operationType: 'image-generation',
    mediaType: 'image',
    provider: 'google',
    user: { email: session.user.email },
  },
)

Usage: Video / Music / Image via Kie.ai (per-unit)

Kie.ai is a unified API for video (Veo, Sora, Runway), image (GPT-Image, Flux), and music (Suno) generation. It uses credit-based billing.

import { ai } from '@/lib/tracked-ai'

// Video generation via Kie.ai
const result = await ai.generateMedia(
  () => kieClient.generateVideo({ model: 'veo-3.1-fast', prompt, duration: 8 }),
  'veo-3.1-fast',
  1,  // 1 generation = 1 unit
  {
    userId: session.user.id,
    operationType: 'video-generation',
    mediaType: 'video',
    provider: 'kie',
    unitLabel: 'generation',
  },
  { durationSeconds: 8, resolution: '1080p', format: 'mp4' },  // outputMetadata
)

// Music generation via Kie.ai
const result = await ai.generateMedia(
  () => kieClient.generateMusic({ model: 'suno-v4', prompt, duration: 120 }),
  'suno-v4',
  1,
  {
    userId: session.user.id,
    operationType: 'music-generation',
    mediaType: 'music',
    provider: 'kie',
    unitLabel: 'generation',
  },
  { durationSeconds: 120, format: 'mp3' },
)

Using with OpenRouter

OpenRouter is a single proxy to 100+ LLM models with one API key.

1. Install

pnpm add @ai-sdk/openai

Uses @ai-sdk/openai with a custom baseURL, not a separate OpenRouter SDK.

2. Env variable

OPENROUTER_API_KEY=sk-or-v1-...

3. Initialize provider

// src/lib/openrouter.ts
import { createOpenAI } from '@ai-sdk/openai'

export const openrouter = createOpenAI({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: 'https://openrouter.ai/api/v1',
})

4. Use in AI calls

import { generateObject, streamText } from 'ai'
import { openrouter } from '@/lib/openrouter'
import { ai } from '@/lib/tracked-ai'

// IMPORTANT: use .chat() — OpenRouter only supports Chat Completions API, not Responses API
const result = await ai.generateObject(
  () => generateObject({
    model: openrouter.chat('openai/gpt-4.1'),  // .chat() required!
    schema, prompt,
  }),
  'openai/gpt-4.1',  // SDK auto-parses: model='gpt-4.1', provider='openai', apiProvider='openrouter'
  {
    userId: session.user.id,
    operationType: 'generation',
  },
)

5. How OpenRouter returns data

OpenRouter API response format (GET /api/v1/models):

{
  "data": [
    {
      "id": "openai/gpt-4.1",
      "name": "GPT-4.1",
      "pricing": {
        "prompt": "0.000002",
        "completion": "0.000008",
        "request": "0",
        "image": "0"
      },
      "context_length": 1047576,
      "top_provider": { "context_length": 1047576, "max_completion_tokens": 32768, "is_moderated": true },
      "architecture": { "modality": "text+image->text", "tokenizer": "GPT", "instruct_type": null }
    }
  ]
}

Key points:

  • id is in provider/model format: openai/gpt-4.1, anthropic/claude-sonnet-4
  • pricing.prompt and pricing.completion are USD per 1 token (not per 1M). Multiply by 1,000,000 for our format
  • The architecture.modality field indicates capabilities: text+image->text = supports vision

The SDK strips the provider prefix when storing: openai/gpt-4.1 → model=gpt-4.1, provider=openai.
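Since OpenRouter reports prices per single token as strings, converting to the per-million format used in modelPricing is a single multiplication. A tiny illustrative helper (not part of the SDK):

```typescript
// OpenRouter returns USD per single token as a string; modelPricing stores USD per 1M tokens.
function toPerMillion(perTokenUsd: string): number {
  return parseFloat(perTokenUsd) * 1_000_000
}

console.log(toPerMillion('0.000002'))  // ~2: $2 per 1M input tokens for openai/gpt-4.1
console.log(toPerMillion('0.000008'))  // ~8: $8 per 1M output tokens
```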

6. Key differences from direct providers

| Aspect | OpenRouter | Direct SDK (OpenAI) |
|--------|------------|---------------------|
| Package | @ai-sdk/openai + custom baseURL | @ai-sdk/openai |
| Method | .chat(modelId) | Direct openai(modelId) |
| Model names passed to SDK | With prefix: openai/gpt-4.1 | Without prefix: gpt-4.1 |
| Stored model name | gpt-4.1 (auto-stripped) | gpt-4.1 (as-is) |
| apiProvider | openrouter (auto-detected) | Same as provider |
| API | Chat Completions only | Chat Completions + Responses |
| Single key | Access to 100+ models | One provider only |


Dynamic model loading

Load available models from the usage DB instead of hardcoding:

import { getAvailableModels } from '@pippsza/usage-tracker'

const models = await getAvailableModels()

// Filter by capabilities
const visionModels = models.filter(m => m.supportsVision)
const reasoningModels = models.filter(m => m.supportsReasoning)
const cheapModels = models
  .filter(m => m.inputPricePerMillionTokens != null && m.inputPricePerMillionTokens < 1)
  .sort((a, b) => (a.inputPricePerMillionTokens ?? 0) - (b.inputPricePerMillionTokens ?? 0))

// Filter by media type
const videoModels = models.filter(m => m.mediaType === 'video')
const musicModels = models.filter(m => m.mediaType === 'music')
const imageModels = models.filter(m => m.mediaType === 'image')

Models are populated via Sync from OpenRouter (for LLMs) or manual entry (for Kie.ai, transcription, TTS models) in the core app dashboard.


What gets tracked

| Field | Source |
|-------|--------|
| Tokens (input, output, cached, reasoning) | AI SDK usage object |
| Units (credits, generations) | units param in generateMedia |
| Duration (seconds) | durationSeconds param in transcribe |
| Cost (USD) | modelPricing collection (fallback to hardcoded) |
| Provider | Auto-parsed from model name or ctx.provider |
| API Provider | Auto-detected (openrouter if model has prefix) or ctx.apiProvider |
| Media type | mediaType in context |
| Output metadata (resolution, format, duration) | outputMetadata param |
| Latency | Automatic measurement |
| Status (success/error) | Try-catch wrapper |
| Project | projectId from config |
| User (email, name, role) | TrackingContext.user |


AI SDK compatibility

Supports both Vercel AI SDK v5 and v6 field names:

| Field | v5 (old) | v6 (current) |
|-------|----------|--------------|
| Input tokens | usage.promptTokens | usage.inputTokens |
| Output tokens | usage.completionTokens | usage.outputTokens |
| Total tokens | usage.totalTokens | usage.totalTokens |

The wrapper reads v6 names first, falling back to v5 names for backward compatibility.

Important: If you see $0.00 cost in the dashboard but token counts are correct, check that inputTokens and outputTokens are non-zero in the raw events. A mismatch between SDK version and field names causes inputTokens: 0, outputTokens: 0 while totalTokens is populated — resulting in zero cost calculation.
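The v6-first fallback described above can be sketched as follows. This is an illustration of the documented behavior, not the SDK's actual code:

```typescript
// Usage object that may carry v6 (inputTokens/outputTokens) or v5
// (promptTokens/completionTokens) field names from the Vercel AI SDK.
type AnyUsage = {
  inputTokens?: number
  outputTokens?: number
  promptTokens?: number
  completionTokens?: number
  totalTokens?: number
}

// Read v6 names first, falling back to v5 names for backward compatibility.
function extractTokens(usage: AnyUsage) {
  return {
    inputTokens: usage.inputTokens ?? usage.promptTokens ?? 0,
    outputTokens: usage.outputTokens ?? usage.completionTokens ?? 0,
    totalTokens: usage.totalTokens ?? 0,
  }
}

// A v5-shaped usage object still yields non-zero counts, avoiding the $0.00-cost symptom:
console.log(extractTokens({ promptTokens: 120, completionTokens: 30, totalTokens: 150 }))
// { inputTokens: 120, outputTokens: 30, totalTokens: 150 }
```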


TrackingContext — all fields

{
  userId: string           // Required
  operationType: string    // Required: "chat", "generation", "video-generation", etc.
  feature?: string         // Feature: "question-gen", "search"
  endpoint?: string        // Endpoint: "/api/chat"
  entityType?: string      // Business entity: "test", "document"
  entityId?: string        // Entity ID
  traceId?: string         // For grouping calls (auto-generated)
  promptSummary?: string   // Prompt description (~500 chars)
  mediaType?: MediaType    // "text", "image", "video", "audio", "music"
  provider?: Provider      // "openai", "anthropic", "google", "deepseek", "elevenlabs", "kie"
  apiProvider?: ApiProvider // "openrouter", or same as provider for direct calls
  outputMetadata?: {       // Metadata about generated media
    durationSeconds?: number
    resolution?: string
    format?: string
    fileSize?: number
  }
  user?: {                 // User reference data
    email?: string
    name?: string
    role?: string          // "student", "teacher", "admin"
    avatarUrl?: string
    meta?: Record<string, unknown>
  }
}

Supported providers

type Provider = 'openai' | 'anthropic' | 'google' | 'deepseek' | 'elevenlabs' | 'kie' | 'custom'
type ApiProvider = Provider | 'openrouter'

| Provider | Use case |
|----------|----------|
| openai | OpenAI models (GPT, DALL-E, Whisper) |
| anthropic | Anthropic models (Claude) |
| google | Google AI models (Gemini) |
| deepseek | DeepSeek models |
| elevenlabs | Transcription (Scribe), TTS |
| kie | Media generation proxy (video, image, music) |
| custom | Any other provider |

| API Provider | Description |
|--------------|-------------|
| openrouter | OpenRouter gateway — auto-detected when model has provider/ prefix |
| Any Provider | Direct API call to that provider |


How it works under the hood

ai.generateObject(fn, 'openai/gpt-4.1', ctx)
    |
    +-- parseModel('openai/gpt-4.1')
    |     -> model: 'gpt-4.1'
    |     -> provider: 'openai'
    |     -> apiProvider: 'openrouter'
    |
    +-- fn() -> original AI call
    +-- tracker.record(event)
    |     +-- maybeSyncUser()
    |
    +-- flush() (every 5s or buffer full)
          +-- loadPricingFromDb() -> prices from DB
          +-- calculateCost() -> per_token / per_minute / per_character / per_unit
          +-- insertMany() -> batch write to tokenUsageEvents

  • Buffer: 50 events or 5 seconds (whichever comes first)
  • Buffer overflow protection: trimmed to 10,000 events if DB is consistently unavailable
  • Cost is calculated at flush time (one pricing load per batch)
  • If pricing DB is unavailable -> fallback pricing (hardcoded)
  • If flush fails -> events are returned to buffer
  • TTL: raw events are deleted after 90 days

Configuration

createUsageTracker({
  projectId: string                    // Unique ID (kebab-case)
  environment: 'production' | 'staging' | 'development'
  buffer?: {
    maxSize?: number                   // Default: 50
    flushIntervalMs?: number           // Default: 5000
  }
  project?: {
    name: string                       // Display name for dashboard
    description?: string
    url?: string
    techStack?: string
    team?: string
    contactEmail?: string
  }
})

Pricing — how cost is calculated

The SDK does not hardcode prices. On each flush it:

  1. Loads current prices from modelPricing collection in MongoDB
  2. If model found in DB -> uses that price
  3. If model not found -> falls back to hardcoded pricing

Pricing lookup with prefix stripping

The pricing resolver tries:

  1. Exact match: gpt-4.1 in DB
  2. Prefix strip: if model has openai/gpt-4.1 format, strips to gpt-4.1 and retries

This means pricing works regardless of whether the DB has gpt-4.1 or openai/gpt-4.1.
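The two-step lookup can be sketched like this, assuming a simple in-memory price map (illustrative, not the SDK's resolver):

```typescript
// Two-step pricing lookup: exact match first, then retry with the provider prefix stripped.
function resolvePricing<T>(model: string, pricing: Map<string, T>): T | undefined {
  const exact = pricing.get(model)
  if (exact !== undefined) return exact
  const slash = model.indexOf('/')
  if (slash !== -1) {
    // 'openai/gpt-4.1' -> 'gpt-4.1'
    return pricing.get(model.slice(slash + 1))
  }
  return undefined
}

const prices = new Map([['gpt-4.1', { inputPricePerMillionTokens: 2 }]])
console.log(resolvePricing('gpt-4.1', prices))         // exact match
console.log(resolvePricing('openai/gpt-4.1', prices))  // found via prefix strip
```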

Pricing types

| Type | Calculation | Fields in modelPricing |
|------|-------------|------------------------|
| per_token | tokens / 1M * price | inputPricePerMillionTokens, outputPricePerMillionTokens |
| per_minute | seconds / 60 * pricePerMinute | pricePerMinute |
| per_character | chars / 1M * pricePerMillionCharacters | pricePerMillionCharacters |
| per_unit | units * pricePerUnit | pricePerUnit, unitLabel |
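The four formulas in the table map directly onto small calculator functions. A sketch of the documented math, not the SDK's exported calculators:

```typescript
// per_token: tokens / 1M * price (input and output priced separately)
const tokenCost = (inputTokens: number, outputTokens: number,
                   inPerMillion: number, outPerMillion: number) =>
  (inputTokens / 1_000_000) * inPerMillion + (outputTokens / 1_000_000) * outPerMillion

// per_minute: seconds / 60 * pricePerMinute
const minuteCost = (seconds: number, pricePerMinute: number) =>
  (seconds / 60) * pricePerMinute

// per_character: chars / 1M * pricePerMillionCharacters
const characterCost = (chars: number, pricePerMillionChars: number) =>
  (chars / 1_000_000) * pricePerMillionChars

// per_unit: units * pricePerUnit
const unitCost = (units: number, pricePerUnit: number) => units * pricePerUnit

// 120k input + 30k output tokens on a $2/$8-per-1M model:
console.log(tokenCost(120_000, 30_000, 2, 8))  // ~0.48 USD
```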

Fallback pricing (when DB is unavailable)

LLMs: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, o3, o3-mini, o4-mini

Embedding: text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002

Transcription: whisper-1, gpt-4o-transcribe, gpt-4o-mini-transcribe, scribe_v2

Media (Kie.ai): veo-3.1-fast, veo-3.1-quality, sora-2-standard, sora-2-pro, gpt-image-1, suno-v4

For any other models (Claude, Gemini LLMs, DeepSeek, Llama, etc.), pricing must be synced via the core app dashboard or entered manually; otherwise cost will be calculated as $0.00.


Database schema

tokenUsageEvents (events collection)

| Field | Type | Description |
|-------|------|-------------|
| model | string | Clean model name: gpt-4.1, claude-sonnet-4 |
| provider | string | Model provider: openai, anthropic, google |
| apiProvider | string | API gateway: openrouter or same as provider |
| unitType | string | token, minute, character, unit |
| mediaType | string | text, image, video, audio, music |
| inputTokens | number | Input token count |
| outputTokens | number | Output token count |
| estimatedCostUsd | number | Calculated cost |
| userId | string | User who made the request |
| projectId | string | Source project |
| status | string | success, error, timeout, rate_limited |

modelPricing (pricing collection)

| Field | Type | Description |
|-------|------|-------------|
| model | string | Clean model name (no prefix): gpt-4.1 |
| provider | string | Model provider: openai |
| pricingType | string | per_token, per_minute, per_character, per_unit |
| inputPricePerMillionTokens | number | Per-token input price |
| outputPricePerMillionTokens | number | Per-token output price |
| pricePerMinute | number | Per-minute price |
| pricePerMillionCharacters | number | Per-character price |
| pricePerUnit | number | Per-unit price |


Checklist

  • [ ] @pippsza/usage-tracker installed via pnpm
  • [ ] mongoose in dependencies
  • [ ] USAGE_DATABASE_URI in .env (shared usage DB)
  • [ ] src/lib/tracked-ai.ts created with projectId and project metadata
  • [ ] AI calls wrapped via ai.generateObject() / ai.onStreamFinish() / ai.transcribe() / ai.embed() / ai.generateMedia()
  • [ ] userId passed in every call
  • [ ] provider set in context when model has no prefix (direct API calls)
  • [ ] All models used in the project have pricing in modelPricing (check via getAvailableModels() or the dashboard)
  • [ ] Graceful shutdown: process.on('beforeExit', () => usageTracker.shutdown())
  • [ ] Verified in the usage dashboard

If using OpenRouter (additionally):

  • [ ] @ai-sdk/openai in dependencies
  • [ ] OPENROUTER_API_KEY in .env
  • [ ] Provider initialized with baseURL: 'https://openrouter.ai/api/v1'
  • [ ] All calls via .chat(): openrouter.chat('openai/gpt-4.1')
  • [ ] Model names with provider prefix passed to SDK: 'openai/gpt-4.1' (auto-parsed)
  • [ ] Prices synced via dashboard -> Models & Pricing -> Sync from OpenRouter

If using Kie.ai (additionally):

  • [ ] Kie.ai API key configured
  • [ ] Media calls wrapped via ai.generateMedia()
  • [ ] mediaType set in context ('video', 'image', 'music')
  • [ ] provider: 'kie' set in context
  • [ ] Model pricing added to the dashboard (manually or via future sync)

Exports reference

Functions

import {
  createUsageTracker,    // Create a tracker instance
  createTrackedAI,       // Create AI call wrappers (generateObject, streamText, transcribe, embed, generateMedia)
  getAvailableModels,    // Load active models from DB (standalone, no tracker needed)
  calculateCost,         // Calculate cost for any billing type
  calculateTokenCost,    // Calculate per-token cost
  calculateMinuteCost,   // Calculate per-minute cost
  calculateCharacterCost,// Calculate per-character cost
  calculateUnitCost,     // Calculate per-unit cost
  loadPricingFromDb,     // Load pricing map from DB
  getUsageConnection,    // Get the mongoose connection to the usage DB
  getTokenUsageEventModel, // Mongoose model: tokenUsageEvents
  getModelPricingModel,    // Mongoose model: modelPricing
  getProjectModel,         // Mongoose model: projects
  getUserModel,            // Mongoose model: users
  getUsageSummaryModel,    // Mongoose model: usageSummaries
  getProviderCostModel,    // Mongoose model: providerCosts
} from '@pippsza/usage-tracker'

Types

import type {
  TrackerConfig,    // Config for createUsageTracker()
  UsageEvent,       // Raw event written to DB
  TrackingContext,   // Context passed to ai.generateObject(), etc.
  AvailableModel,   // Model returned by getAvailableModels()
  PricingType,      // 'per_token' | 'per_minute' | 'per_character' | 'per_unit'
  UnitType,         // 'token' | 'minute' | 'character' | 'unit'
  Provider,         // 'openai' | 'anthropic' | 'google' | 'deepseek' | 'elevenlabs' | 'kie' | 'custom'
  ApiProvider,      // Provider | 'openrouter'
  MediaType,        // 'text' | 'image' | 'video' | 'audio' | 'music'
  OutputMetadata,   // { durationSeconds?, resolution?, format?, fileSize? }
  UsageTracker,     // Tracker instance type
} from '@pippsza/usage-tracker'

Mongoose schemas

For building custom queries or extending the dashboard:

import {
  tokenUsageEventSchema,  // Usage events (tokens, cost, latency, user, etc.)
  modelPricingSchema,      // Model pricing records
  usageSummarySchema,      // Aggregated usage summaries
  providerCostSchema,      // Provider-level cost aggregation
  projectSchema,           // Registered projects
  userSchema,              // Synced users
} from '@pippsza/usage-tracker'

Troubleshooting

| Problem | Solution |
|---------|----------|
| Project not appearing in dashboard | Check USAGE_DATABASE_URI, projectId, [UsageTracker] logs |
| Cost $0.00 | Model name must match modelPricing. Verify inputTokens/outputTokens are non-zero (see AI SDK compatibility) |
| Events not writing | Check MongoDB user write permissions, ensure tracker.shutdown() on exit |
| Tokens correct but cost zero | AI SDK version mismatch — SDK reads inputTokens/outputTokens (v6). Older versions use promptTokens/completionTokens |
| Media generation cost $0.00 | Ensure model is in modelPricing with pricingType: 'per_unit' and pricePerUnit set. Check fallback pricing covers the model |
| [UsageTracker] No provider specified warning | Set provider in TrackingContext or use provider/model format |
| Duplicate models in dashboard | Ensure all models use clean names (no prefix). Run DB migration to strip prefixes |