@watchlog/ai-tracer

v1.1.1

`@watchlog/ai-tracer` is a lightweight SDK for tracing AI interactions (such as GPT-4 requests) by sending spans to the Watchlog Agent running on your server.

@watchlog/ai-tracer

A lightweight Node.js tracer for AI workloads, designed to capture and forward span data for monitoring and observability with Watchlog.

Features

  • Automatic trace & span management with unique IDs
  • Disk-backed queue with TTL to prevent data loss
  • Batch HTTP delivery with retry and exponential backoff
  • Kubernetes-aware endpoint detection
  • Sensitive field sanitization and output truncation
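
To make the sanitization and truncation feature concrete, here is a hypothetical sketch of what that step might do. The `[REDACTED]` marker, the example field names, and the 2000-character limit are assumptions for illustration, not the library's actual internals:

```javascript
// Hypothetical sketch of span-data sanitization, not the library's real code.
// `sensitiveFields` mirrors the config option documented in the API section.
const DEFAULT_SENSITIVE_FIELDS = ['apiKey', 'authorization']; // assumed defaults
const MAX_OUTPUT_LENGTH = 2000; // assumed truncation limit

function sanitizeSpanData(data, sensitiveFields = DEFAULT_SENSITIVE_FIELDS) {
  const clean = {};
  for (const [key, value] of Object.entries(data)) {
    if (sensitiveFields.includes(key)) {
      // Strip sensitive values entirely
      clean[key] = '[REDACTED]';
    } else if (typeof value === 'string' && value.length > MAX_OUTPUT_LENGTH) {
      // Truncate oversized outputs (e.g. long LLM completions)
      clean[key] = value.slice(0, MAX_OUTPUT_LENGTH) + '…';
    } else {
      clean[key] = value;
    }
  }
  return clean;
}
```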

Installation

npm install @watchlog/ai-tracer

or

yarn add @watchlog/ai-tracer

Usage

// README example — OpenAI call + tracing (non-blocking, no sleep)
const WatchlogTracer = require('@watchlog/ai-tracer');

// 1) Init tracer (exit hooks installed automatically: beforeExit / SIGINT / SIGTERM)
const tracer = new WatchlogTracer({
  app: 'your-app-name',                 // 🆔 required
  batchSize: 200,                       // 🔄 spans per HTTP batch
  flushOnSpanCount: 200,                // 🧺 enqueue to disk after N spans
  autoFlushInterval: 1500,              // ⏲ background flush interval (ms)
  maxQueueSize: 100000,                 // 📥 max queued spans on disk
  queueItemTTL: 10 * 60 * 1000,         // ⌛ TTL for queued spans (ms)
  // autoInstallExitHooks: true,        // ✅ default (flushes on exit)
});

// 2) Helper: Wrap any async work in a span
async function traceAsync(name, metadata, fn) {
  const spanId = tracer.startSpan(name, metadata);
  try {
    const result = await fn();
    tracer.endSpan(spanId, { output: 'ok' });
    return result;
  } catch (e) {
    tracer.endSpan(spanId, { output: String(e?.message || e) });
    throw e;
  } finally {
    tracer.send(); // non-blocking: write to disk + background flush
  }
}

// 3) Call OpenAI and capture input/output/tokens in trace
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function callOpenAI(prompt, { parentId } = {}) {
  const llmSpan = tracer.childSpan(parentId, 'openai.chat.completions', {
    provider: 'openai',
    model: 'gpt-4o',
  });

  try {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: prompt }],
      }),
    });

    const json = await res.json();
    const output = json?.choices?.[0]?.message?.content ?? '';
    const tokens = json?.usage?.total_tokens ?? 0;

    tracer.endSpan(llmSpan, {
      input: prompt,
      output,
      tokens,
      model: 'gpt-4o',
      provider: 'openai',
      cost: 0, // optional: include if you track cost
    });

    return output;
  } catch (e) {
    tracer.endSpan(llmSpan, { input: prompt, output: String(e?.message || e) });
    throw e;
  } finally {
    tracer.send(); // non-blocking
  }
}

// 4) Example flow (root span + child span for LLM)
async function main() {
  tracer.startTrace(); // optional but recommended (groups spans)

  const root = tracer.startSpan('handle-request', { feature: 'ai-summary' });

  // Validate input (fast op, no sleep)
  await traceAsync('validate-input', { parentId: root }, async () => {
    // ... your validation logic
  });

  // Call LLM and capture trace
  const summary = await callOpenAI('Summarize: Hello world...', { parentId: root });

  // Close root
  tracer.endSpan(root, { output: 'done' });
  tracer.send(); // non-blocking

  console.log('LLM summary:', summary);
}

main().catch(err => {
  console.error('App error:', err);
});

API

new WatchlogTracer(config)

  • config.app (string, required) — Your application name.
  • config.agentURL (string) — URL of the Watchlog agent (default: auto-detected per environment or from WATCHLOG_AGENT_URL env var).
  • config.batchSize (number) — Number of spans per HTTP batch (default: 50).
  • config.autoFlushInterval (number) — Milliseconds between automatic queue flushes (default: 1000).
  • config.maxQueueSize (number) — Maximum spans stored on disk before rotation (default: 10000).
  • config.queueItemTTL (number) — Time‑to‑live for queued spans in ms (default: 600000).
  • config.maxRetries (number) — HTTP retry attempts (default: 3).
  • config.requestTimeout (number) — Axios request timeout in ms (default: 5000).
  • config.sensitiveFields (string[]) — Field keys to strip from trace data.
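
Putting the options above together, a fully spelled-out configuration might look like the following. The values shown are the documented defaults, and the `sensitiveFields` keys are illustrative:

```javascript
const WatchlogTracer = require('@watchlog/ai-tracer');

const tracer = new WatchlogTracer({
  app: 'my-service',                  // required
  agentURL: 'http://127.0.0.1:3774',  // usually omitted and auto-detected
  batchSize: 50,                      // spans per HTTP batch
  autoFlushInterval: 1000,            // ms between automatic flushes
  maxQueueSize: 10000,                // max spans on disk before rotation
  queueItemTTL: 600000,               // 10 minutes, in ms
  maxRetries: 3,                      // HTTP retry attempts
  requestTimeout: 5000,               // request timeout in ms
  sensitiveFields: ['apiKey', 'authorization'], // example keys to strip
});
```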

Tracing Methods

  • startTrace() → traceId
    Begins a new trace. Returns the generated traceId.

  • startSpan(name, metadata) → spanId
    Creates a span under the current traceId.

  • childSpan(parentSpanId, name, metadata) → spanId
    Alias for startSpan with a parentId.

  • endSpan(spanId, data)
    Marks a span as complete, recording timestamps, duration, tokens, cost, etc.

  • send()
    Enqueues all pending spans to disk immediately.

Agent URL Configuration

The agent URL is determined in the following priority order:

  1. Explicit config parameter: agentURL in WatchlogTracer initialization
  2. Environment variable: WATCHLOG_AGENT_URL
  3. Auto-detection:
    • Kubernetes: if running in K8s (ServiceAccount tokens, cgroup info, or DNS lookup), auto-switches to http://watchlog-node-agent.monitoring.svc.cluster.local:3774
    • Local: defaults to http://127.0.0.1:3774
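
The priority order above can be sketched as a small resolver function. This is a hypothetical illustration, not the library's internal code, and `isKubernetes()` stands in for the real detection logic (ServiceAccount tokens, cgroup info, DNS lookup):

```javascript
// Hypothetical sketch of the documented agent-URL resolution order.
const K8S_AGENT_URL =
  'http://watchlog-node-agent.monitoring.svc.cluster.local:3774';
const LOCAL_AGENT_URL = 'http://127.0.0.1:3774';

function resolveAgentURL(config = {}, env = process.env, isKubernetes = () => false) {
  if (config.agentURL) return config.agentURL;              // 1. explicit config
  if (env.WATCHLOG_AGENT_URL) return env.WATCHLOG_AGENT_URL; // 2. environment variable
  return isKubernetes() ? K8S_AGENT_URL : LOCAL_AGENT_URL;   // 3. auto-detection
}
```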

Examples

// Option 1: Pass agentURL directly
const tracer = new WatchlogTracer({
  app: 'myapp',
  agentURL: 'http://my-custom-agent:3774'
});

// Option 2: Use environment variable
// export WATCHLOG_AGENT_URL=http://my-custom-agent:3774
// or
// process.env.WATCHLOG_AGENT_URL = 'http://my-custom-agent:3774';
const tracer = new WatchlogTracer({
  app: 'myapp'
});

// Option 3: Auto-detection (default behavior)
const tracer = new WatchlogTracer({
  app: 'myapp'
});

Running Tests

Use the provided test.js script in the project root:

node test.js

Contributing

PRs and issues welcome — please read our contributing guidelines.

License

MIT © Watchlog Monitoring