llmtap

v0.1.12

DevTools for AI Agents - See every LLM call, trace agent workflows, track costs


Most LLM observability tools require cloud accounts, proxy configurations, or framework-specific callbacks. LLMTap takes a different approach: it runs entirely on your machine, instruments any LLM client with a single function call, and gives you a real-time dashboard at localhost.

No sign-ups, no accounts, and no API keys for the tool itself. Your prompts and responses never leave your machine.


Quick Start

1. Start LLMTap

npx llmtap

The collector starts on http://localhost:4781 and opens the dashboard.

2. Instrument your code

import OpenAI from "openai";
import { wrap } from "@llmtap/sdk";

const client = wrap(new OpenAI());

3. Use your client as normal

const res = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello, world" }],
});
// Traced automatically -- tokens, cost, latency, full request/response

Open http://localhost:4781. Traces appear in real time.


Features

Transparent instrumentation -- wrap() returns an ES Proxy. Your client behaves identically. No code changes beyond the wrap call. Streaming, tool calls, and multi-turn conversations all work.

Real-time dashboard -- Traces stream to the browser via SSE. See every LLM call as it happens with token counts, costs, latency, and full message content.

Cost tracking -- Built-in pricing for 50+ models across all major providers. Input and output costs calculated per-call. Override pricing for custom or fine-tuned models at runtime.
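
The per-call arithmetic behind this is simple; here is a back-of-envelope sketch with hypothetical rates (not LLMTap's built-in pricing table), where prices are quoted per million tokens.

```typescript
// Sketch of per-call cost accounting. The rates below are illustrative
// placeholders, not real provider pricing.
interface Pricing { inputPerM: number; outputPerM: number; }

const rates: Record<string, Pricing> = {
  "gpt-4o":      { inputPerM: 2.5,  outputPerM: 10.0 }, // example figures
  "gpt-4o-mini": { inputPerM: 0.15, outputPerM: 0.6 },  // example figures
};

function callCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = rates[model];
  if (!p) throw new Error(`no pricing for ${model}`);
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}

console.log(callCost("gpt-4o", 1_000, 500)); // 0.0075
```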

Trace grouping -- Group multiple LLM calls into a single trace with startTrace(). See total cost and token usage for multi-step agent pipelines.

import { wrap, startTrace } from "@llmtap/sdk";

await startTrace("research-agent", async () => {
  const plan  = await client.chat.completions.create({ model: "gpt-4o", messages: [...] });
  const draft = await client.chat.completions.create({ model: "gpt-4o-mini", messages: [...] });
});
// Both calls grouped under "research-agent" in the dashboard

Streaming support -- Full support for streaming responses across all providers. Token counts and costs are captured after the stream completes. Your application sees the exact same stream.
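
The underlying pattern (sketched here with a toy stream, not LLMTap's internals) is to pass each chunk through unchanged while observing it, and finalize accounting only once the stream ends:

```typescript
// Sketch: observe an async stream without altering it. Totals (tokens,
// cost) can only be finalized in onDone, after the stream completes.
async function* observe<T>(
  stream: AsyncIterable<T>,
  onChunk: (c: T) => void,
  onDone: () => void,
): AsyncGenerator<T> {
  for await (const chunk of stream) {
    onChunk(chunk); // record for accounting
    yield chunk;    // the consumer sees the exact same chunk
  }
  onDone();         // stream finished; totals are now known
}

// Toy stream standing in for a streaming completion
async function* toyStream() { yield "Hel"; yield "lo"; }

let seen = 0;
let done = false;
const out: string[] = [];
for await (const c of observe(toyStream(), () => seen++, () => (done = true))) {
  out.push(c);
}
console.log(out.join(""), seen, done); // "Hello" 2 true
```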

OpenTelemetry export -- Export traces in OTLP format following GenAI Semantic Conventions. Forward to Datadog, Grafana, Jaeger, or any OTLP-compatible backend.

OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 npx llmtap

Privacy by default -- Everything runs locally. No data is sent anywhere unless you explicitly configure OTLP forwarding. Content capture can be disabled entirely.
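
Using the documented environment variable, content capture can be switched off without code changes (`agent.js` here is a placeholder for your own entry point):

```shell
# Traces keep token counts, costs, and latency, but drop
# prompt/response bodies.
LLMTAP_CAPTURE_CONTENT=false node agent.js
```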


Supported Providers

LLMTap works with any provider that uses the OpenAI, Anthropic, or Google Gemini SDK format.

| Provider | SDK | Detection |
|----------|-----|-----------|
| OpenAI | openai | Automatic |
| Anthropic | @anthropic-ai/sdk | Automatic |
| Google Gemini | @google/generative-ai | Automatic |
| DeepSeek | openai (compatible) | Auto via base URL |
| Groq | openai (compatible) | Auto via base URL |
| Together | openai (compatible) | Auto via base URL |
| Fireworks | openai (compatible) | Auto via base URL |
| OpenRouter | openai (compatible) | Auto via base URL |
| xAI (Grok) | openai (compatible) | Auto via base URL |
| Ollama | openai (compatible) | Auto via base URL |
| Vercel AI SDK | ai | Via wrapVercelAI() |
| Any OpenAI-compatible | openai | Manual or auto |

// OpenAI
const openai = wrap(new OpenAI());

// Anthropic
import Anthropic from "@anthropic-ai/sdk";
const claude = wrap(new Anthropic());

// Google Gemini
import { GoogleGenerativeAI } from "@google/generative-ai";
const gemini = wrap(new GoogleGenerativeAI(process.env.GOOGLE_API_KEY));

// DeepSeek, Groq, or any OpenAI-compatible provider
const deepseek = wrap(new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY,
}));

API Reference

wrap(client, options?)

Wraps an LLM client to trace all API calls. Returns a proxy -- the client works identically.

const client = wrap(new OpenAI());
const client = wrap(new OpenAI(), { provider: "deepseek", tags: { env: "staging" } });

| Option | Type | Description |
|--------|------|-------------|
| provider | string | Override auto-detected provider name |
| tags | Record<string, string> | Custom tags attached to every span |

startTrace(name, fn, options?)

Groups multiple LLM calls under a single trace.

const result = await startTrace("my-pipeline", async () => {
  const step1 = await client.chat.completions.create({ ... });
  const step2 = await client.chat.completions.create({ ... });
  return step2;
}, { sessionId: "user-123", tags: { workflow: "summarize" } });

| Option | Type | Description |
|--------|------|-------------|
| sessionId | string | Group traces into a session |
| tags | Record<string, string> | Custom tags on the trace |

init(config)

Configure the SDK globally. All options can also be set via environment variables.

import { init } from "@llmtap/sdk";

init({
  collectorUrl: "http://localhost:4781",
  captureContent: true,
  enabled: true,
  debug: false,
  sessionId: "my-session",
  onError: (err, ctx) => console.warn("LLMTap:", err.message),
});

| Config | Env Var | Default | Description |
|--------|---------|---------|-------------|
| collectorUrl | LLMTAP_COLLECTOR_URL | http://localhost:4781 | Collector endpoint |
| captureContent | LLMTAP_CAPTURE_CONTENT | true | Capture message content |
| enabled | LLMTAP_ENABLED | true | Enable/disable tracing |
| debug | LLMTAP_DEBUG | false | Debug logging |
| sessionId | LLMTAP_SESSION_ID | -- | Session grouping |

wrapVercelAI(ai)

Wraps the Vercel AI SDK for framework-level tracing across any underlying provider.

shutdown()

Flushes all buffered spans and shuts down the SDK. Call before process exit in serverless environments.

import { shutdown } from "@llmtap/sdk";
await shutdown();

Architecture

Your Application
  |
  |  wrap(client) -- ES Proxy intercepts LLM calls
  |
  v
@llmtap/sdk ───────────> @llmtap/collector ───────────> @llmtap/dashboard
  Proxy-based               Fastify + SQLite               React + Vite
  instrumentation            REST API + SSE                 Real-time UI
  |                          |
  |  Batched HTTP POST       |  SSE push on new spans       Connects via SSE
  |  to /v1/spans            |  GET /v1/traces, /v1/stats   and REST API
  v                          v
                      ┌─────────────┐
                      │  SQLite DB  │       Optional: OTLP export to
                      │  (WAL mode) │ ───>  Datadog, Grafana, Jaeger
                      └─────────────┘

| Package | Description |
|---------|-------------|
| llmtap | CLI entry point -- npx llmtap starts collector + dashboard |
| @llmtap/sdk | ES Proxy-based instrumentation for LLM clients |
| @llmtap/collector | Fastify server, SQLite storage, SSE, REST API |
| @llmtap/dashboard | React + Vite + Tailwind SPA with real-time updates |
| @llmtap/shared | Types, constants, pricing data, OTLP converter |


CLI

npx llmtap                     # Start collector + dashboard
npx llmtap --demo              # Start with sample data
npx llmtap --port 8080         # Custom port
npx llmtap --retention 7d      # Auto-delete old data
npx llmtap --host 0.0.0.0      # Expose to network
npx llmtap status              # Check stored spans and DB location
npx llmtap doctor              # Diagnose setup and empty-state issues
npx llmtap backup              # Create a portable SQLite backup
npx llmtap export -f json      # Export traces as JSON
npx llmtap import traces.json  # Re-import exported traces
npx llmtap restore backup.db   # Restore from a backup (collector must be stopped)

Troubleshooting

If the dashboard starts but stays empty, run:

npx llmtap doctor

That checks:

  • collector health
  • local database path and permissions
  • whether the current project has @llmtap/sdk installed
  • whether traces have actually been captured yet

For local data portability:

npx llmtap backup
npx llmtap import llmtap-export.json
npx llmtap restore llmtap-backup.db

Comparison

| | LLMTap | LangSmith | Helicone | Langfuse |
|---|---|---|---|---|
| Setup | npx + 2 lines | SDK + cloud account | Proxy + cloud | SDK + self-host or cloud |
| Data location | Your machine | Their cloud | Their cloud | Your infra or theirs |
| Pricing | Free, no limits | $39/seat/mo | $79/mo | Free (self-host) |
| Instrumentation | wrap(client) | Framework-specific | Proxy gateway | SDK callbacks |

LLMTap is a developer tool -- fast to start, private by default, zero friction. Use it during development and prototyping. When you need production infrastructure, export your traces via OTLP to the platform of your choice.


Development

git clone https://github.com/DivyaanshuXD/LLMTap.git
cd llmtap
pnpm install
pnpm build        # Build all packages
pnpm test         # Run all tests (Vitest)

TypeScript monorepo with pnpm workspaces and Turborepo. Packages build with tsup, the dashboard builds with Vite.

Contributing

Contributions are welcome.

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Make your changes and add tests
  4. Run pnpm build && pnpm test to verify
  5. Open a pull request

Please open an issue first for large changes so we can discuss the approach.

License

MIT