

@mode-7/tracelm

TypeScript SDK for TraceLM - LLM observability and logging.

Installation

npm install @mode-7/tracelm

Quick Start

import { TraceLM } from "@mode-7/tracelm";

const tracelm = new TraceLM({
  apiKey: process.env.TRACELM_API_KEY!,
  applicationId: process.env.TRACELM_APP_ID!,
});

// Simple logging
await tracelm.info("User signed up");
await tracelm.error("Something went wrong");

// LLM observability
await tracelm.agent("Chat completion", {
  model: "gpt-4",
  provider: "openai",
  messages: [
    { role: "user", content: "Hello" },
    { role: "assistant", content: "Hi!" },
  ],
});

Features

  • Simple logging - debug, info, warn, error, fatal methods
  • LLM observability - Track model usage, tokens, costs, and conversations
  • Automatic enrichment - Token estimation, cost calculation, span extraction
  • Context helpers - withUser, withTrace, withSession, withMetadata
  • Batching - Optional event batching for high-volume applications
  • TypeScript - Fully typed with inline documentation

Configuration

const tracelm = new TraceLM({
  // Required
  apiKey: "your-api-key",
  applicationId: "your-app-id",

  // Optional
  baseUrl: "https://api.tracelm.com", // Custom API URL
  environment: "production", // Environment name
  timeout: 10000, // Request timeout (ms)
  throwOnError: false, // Whether to throw on API errors
  batching: false, // Whether to batch events before sending (see Batching below)
  batchInterval: 1000, // Batch flush interval (ms)
  batchSize: 100, // Max batch size
});

Logging Methods

Simple Logs

await tracelm.debug("Detailed debug info");
await tracelm.info("User action completed");
await tracelm.warn("Rate limit approaching");
await tracelm.error("Operation failed");
await tracelm.fatal("Critical system failure");

With Options

await tracelm.info("User signed up", {
  user: { id: "user_123", email: "[email protected]" },
  metadata: { plan: "pro" },
  trace_id: "signup-flow-abc",
});

Error Logging

try {
  await riskyOperation();
} catch (err) {
  await tracelm.error(err); // Accepts Error objects
}

LLM Events

Basic Usage

await tracelm.agent("Chat completion", {
  model: "gpt-4",
  provider: "openai",
  messages: [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "Hello" },
    { role: "assistant", content: "Hi there!" },
  ],
});

With Full Metrics

const startTime = Date.now();
const response = await openai.chat.completions.create({...});

await tracelm.agent("Chat completion", {
  model: response.model,
  provider: "openai",
  input_tokens: response.usage?.prompt_tokens,
  output_tokens: response.usage?.completion_tokens,
  latency_ms: Date.now() - startTime,
  messages: messages,
  output: response.choices[0].message.content,
});

Tool Calls (Spans Auto-Extracted)

await tracelm.agent("Tool-assisted response", {
  model: "gpt-4",
  provider: "openai",
  messages: [
    { role: "user", content: "What's the weather in London?" },
    {
      role: "assistant",
      content: null,
      tool_calls: [
        {
          id: "call_1",
          type: "function",
          function: { name: "get_weather", arguments: '{"city":"London"}' },
        },
      ],
    },
    { role: "tool", content: '{"temp":18}', tool_call_id: "call_1" },
    { role: "assistant", content: "It's 18°C in London." },
  ],
});

Context Helpers

Chain context methods to attach default values to all events:

const logger = tracelm
  .withUser({ id: "user_123" })
  .withTrace("request-abc")
  .withSession("session-xyz")
  .withMetadata({ version: "1.2.3" });

// All events include the attached context
await logger.info("User action");
await logger.agent("LLM call", { model: "gpt-4", provider: "openai" });
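
In practice you might build one of these scoped loggers per request or job. A minimal sketch, reusing the tracelm instance from above (handleRequest and its arguments are hypothetical):

// Hypothetical request handler: every event emitted inside it shares
// the same user and trace context without repeating it on each call.
async function handleRequest(userId: string, requestId: string) {
  const log = tracelm.withUser({ id: userId }).withTrace(requestId);

  await log.info("Request received");
  // ... handle the request ...
  await log.info("Request completed");
}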

Batching

For high-volume applications, enable batching to reduce API calls:

const tracelm = new TraceLM({
  apiKey: "...",
  applicationId: "...",
  batching: true,
  batchSize: 50,
  batchInterval: 2000,
});

// Events are batched automatically
tracelm.info("Event 1");
tracelm.info("Event 2");

// Flush before shutdown
await tracelm.flush();
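
In a long-running service you may also want to flush before the process exits. A minimal sketch, assuming a Node.js runtime (the signal handling itself is not part of the SDK):

// Flush any buffered events on shutdown so nothing is lost.
process.on("SIGTERM", async () => {
  await tracelm.flush();
  process.exit(0);
});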

TypeScript Types

All types are exported for use in your application:

import type {
  TraceLMEvent,
  TraceLMLLM,
  LLMMessage,
  TraceLMUser,
  LogLevel,
} from "@mode-7/tracelm";
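
For example, you can type a conversation before handing it to agent(). A sketch that assumes LLMMessage carries the role/content shape shown in the examples above:

import type { LLMMessage } from "@mode-7/tracelm";

// Typing the conversation up front catches malformed messages at
// compile time instead of at logging time.
const conversation: LLMMessage[] = [
  { role: "user", content: "Summarize this ticket" },
  { role: "assistant", content: "The user reports a login failure." },
];

await tracelm.agent("Ticket summary", {
  model: "gpt-4",
  provider: "openai",
  messages: conversation,
});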

Automatic Backend Computation

TraceLM automatically computes these fields server-side if not provided:

  • Token counts - Estimated via tiktoken from messages/output
  • Cost - Calculated from provider pricing tables
  • Previews - Extracted from message content
  • Spans - Auto-extracted from messages with tool calls
  • Security scans - PII detection, injection detection
  • Bot detection - From request context

This means you can send minimal payloads and let TraceLM handle the rest:

// Minimal - TraceLM computes tokens, cost, previews
await tracelm.agent("Chat", {
  model: "gpt-4",
  provider: "openai",
  messages: [...],
});

// Full - Use your own values
await tracelm.agent("Chat", {
  model: "gpt-4",
  provider: "openai",
  input_tokens: 150,
  output_tokens: 89,
  cost: 0.0045,
  latency_ms: 1200,
  messages: [...],
});

License

MIT