
@llmtracer/sdk

See where your AI budget goes. Lightweight LLM cost tracking SDK for OpenAI.

Wrap your OpenAI client in two lines and get automatic tracking of every API call -- tokens, latency, cost, and model usage -- with zero changes to your application code.

Install

npm install @llmtracer/sdk

Quickstart

import { LLMTracer } from "@llmtracer/sdk";
import OpenAI from "openai";

const tracer = new LLMTracer({
  apiKey: process.env.LLMTRACER_KEY,
});

const openai = new OpenAI();

// 2 lines -- that's it
tracer.instrumentOpenAI(openai, {
  tags: { feature: "customer-support-bot", env: "production" },
});

// Every OpenAI call is now automatically tracked
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

// In serverless (Lambda, Cloud Functions), flush before returning
await tracer.flush();

Tagging Guide

Tags let you slice costs by any dimension in the dashboard. Global tags (set via the instrumentOpenAI options) apply to all calls. Per-call tags (set via the llmtracer property on a request) override the global tags for that specific call.
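
For instance, a global tag and a per-call override can be combined like this (a minimal sketch; the tag values are purely illustrative):

tracer.instrumentOpenAI(openai, {
  tags: { env: "production", feature: "default" },  // global tags, attached to every call
});

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hi" }],
  // the per-call tag overrides the global `feature` for this request only
  llmtracer: { tags: { feature: "onboarding" } },
});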

Track cost by feature

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...],
  llmtracer: { tags: { feature: "chat" } }
});

Track cost by user (for B2B apps)

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...],
  llmtracer: { tags: { feature: "chat", user_id: req.user.id } }
});

Track cost by customer/tenant

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...],
  llmtracer: { tags: { customer: req.tenant.name, feature: "search" } }
});

Track cost by conversation

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [...],
  llmtracer: { tags: { conversation_id: sessionId, feature: "chat" } }
});

Track environment (global tag)

const tracer = new LLMTracer({
  apiKey: "lt_...",
});

tracer.instrumentOpenAI(openai, {
  tags: { env: process.env.NODE_ENV }  // applies to all calls
});

Tags appear in the dashboard's Breakdown page and Top Tags card. Use them to answer questions like "which customer costs the most?" or "which feature should I optimize?"

Serverless Usage

In environments like AWS Lambda or Google Cloud Functions, call flush() before your function returns to ensure all events are sent:

export async function handler(event) {
  const response = await openai.chat.completions.create({ ... });
  await tracer.flush();
  return response;
}

Agentic Workflow Tracking

Group related LLM calls into traces with named phases:

await tracer.trace("user-request-123", async (t) => {
  await t.phase("planning", async () => {
    await openai.chat.completions.create({ ... });
  });

  await t.phase("execution", async () => {
    await openai.chat.completions.create({ ... });
  });
});

Streaming Support

Streaming calls are instrumented automatically. Token counts are captured from the final chunk:

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
});

for await (const chunk of stream) {
  // use chunk as normal
}

Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| apiKey | string | required | Your LLM Tracer API key (starts with lt_) |
| endpoint | string | Production URL | Ingestion endpoint URL |
| maxBatchSize | number | 50 | Max events per batch before auto-flush |
| flushIntervalMs | number | 10000 | Auto-flush interval in milliseconds |
| maxQueueSize | number | 10000 | Max events in queue before dropping oldest |
| maxRetries | number | 3 | Max retry attempts for failed flushes |
| retryBaseMs | number | 1000 | Base delay (ms) for exponential backoff |
| sampleRate | number | 1.0 | Sampling rate (0.0-1.0); 1.0 captures everything |
| capturePrompt | boolean | false | Whether to capture full prompt content |
| debug | boolean | false | Enable debug logging to console |
| onFlush | function | null | Callback after each flush with stats |
| onError | function | null | Callback on transport errors |
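
For example, a tracer tuned for a lower-volume development setup might look like this (a sketch only; the specific values are illustrative, not recommendations):

const tracer = new LLMTracer({
  apiKey: process.env.LLMTRACER_KEY,  // required; keys start with lt_
  maxBatchSize: 20,                   // flush after 20 queued events instead of 50
  flushIntervalMs: 5000,              // auto-flush every 5 seconds
  sampleRate: 0.25,                   // capture roughly 1 in 4 calls
  capturePrompt: true,                // also store full prompt content
  debug: true,                        // log SDK activity to the console
});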

API Reference

new LLMTracer(config)

Create a new tracer instance. See Configuration for options.

tracer.instrumentOpenAI(client, options?)

Instrument an OpenAI client instance. All subsequent chat.completions.create calls (streaming and non-streaming) will be tracked automatically.

  • client -- an OpenAI client instance
  • options.tags -- key-value pairs attached to every event (e.g. { env: "production" })

tracer.flush(): Promise<void>

Flush all buffered events to the backend. Call this in serverless environments before the function returns.

tracer.trace(traceId, fn): Promise<void>

Track an agentic workflow. All LLM calls within the callback are grouped under the given traceId. Use t.phase(name, fn) inside the callback to label phases.

tracer.shutdown(): Promise<void>

Flush remaining events and stop the auto-flush timer. Call this on graceful shutdown.
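
For example, in a long-running Node.js service you might call it from a signal handler (the signal wiring below is illustrative, not part of the SDK):

process.on("SIGTERM", async () => {
  await tracer.shutdown();  // flush remaining events and stop the auto-flush timer
  process.exit(0);
});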

Reliability

The SDK is designed to never interfere with your application:

  • Never throws -- all internal errors are swallowed silently (enable debug: true for visibility; see the sketch after this list)
  • Batching -- events are queued and sent in configurable batches
  • Retry with backoff -- failed flushes are retried with exponential backoff and jitter
  • Circuit breaker -- after 5 consecutive failures, stops attempting for 60 seconds
  • Queue overflow -- drops oldest events when the queue exceeds maxQueueSize
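
A minimal sketch of wiring up the visibility hooks from the Configuration table. The exact arguments passed to onFlush and onError are not documented above, so the handlers here only assume they receive a value to log:

const tracer = new LLMTracer({
  apiKey: process.env.LLMTRACER_KEY,
  debug: true,                                                         // surface internal errors in the console
  onFlush: (stats) => console.log("llmtracer flush", stats),           // invoked after each flush
  onError: (err) => console.error("llmtracer transport error", err),   // invoked on transport errors
});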

License

MIT