vottur

v0.3.0

Automatic JSONL logging for LLM APIs. Drop-in wrappers for OpenAI SDK, OpenRouter SDK, or raw fetch calls.

Vottur

Zero dependencies.

Automatic JSONL logging for LLM APIs. Wrap your OpenAI or OpenRouter client, and every request gets logged to a file. No code changes needed.

Works with OpenAI, OpenRouter, Azure, Ollama, or anything OpenAI-compatible. "Vottur" means "witness" in Icelandic.

How it works

Vottur wraps your SDK client and intercepts every API call. When you call chat.completions.create(), it:

  1. Captures the request (model, messages, parameters)
  2. Passes it through to the real SDK
  3. Captures the response (content, tokens, latency)
  4. Logs everything to a JSONL file (fire-and-forget, never awaited)

Your code stays exactly the same. The response object is unchanged. Vottur just watches and logs.
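The interception pattern above can be sketched with a JavaScript Proxy. This is an illustrative sketch, not Vottur's actual implementation; wrapWithLogging and LogEntry are hypothetical names:

```typescript
// Illustrative sketch of the wrap-and-log pattern (not Vottur's real code).
type LogEntry = { method: string; latency_ms: number };

function wrapWithLogging<T extends object>(target: T, log: (e: LogEntry) => void): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== 'function') return value;
      return async (...args: unknown[]) => {
        const start = Date.now();
        // Pass the call through unchanged to the real method.
        const result = await (value as (...a: unknown[]) => unknown).apply(obj, args);
        // Log after the fact; the caller never waits on this.
        log({ method: String(prop), latency_ms: Date.now() - start });
        return result;
      };
    },
  });
}
```

Note this one-level sketch does not recurse into nested objects like chat.completions, which a real wrapper would need to handle.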

Quick Start

npm install vottur openai

import { createClient } from 'vottur';

const client = createClient({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Every request is logged to .vottur/logs.jsonl.

Three Usage Modes

1. OpenAI SDK Mode (Default)

Drop-in replacement for the OpenAI SDK:

import { createClient } from 'vottur';

const client = createClient({
  apiKey: process.env.OPENAI_API_KEY,
});

// Same API as OpenAI SDK
const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});

Works with any OpenAI-compatible endpoint:

// OpenRouter via OpenAI SDK
const client = createClient({
  apiKey: process.env.OPENROUTER_API_KEY,
  baseURL: 'https://openrouter.ai/api/v1',
});

// Azure OpenAI
const client = createClient({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: 'https://your-resource.openai.azure.com/openai/deployments/gpt-5.2',
});

// Local (Ollama, LM Studio)
const client = createClient({
  apiKey: 'ollama',
  baseURL: 'http://localhost:11434/v1',
});

2. Native OpenRouter SDK Mode

For the @openrouter/sdk with native features:

npm install vottur @openrouter/sdk

import { createClient } from 'vottur/openrouter';

const client = createClient({
  apiKey: process.env.OPENROUTER_API_KEY,
  siteUrl: 'https://myapp.com',
  siteName: 'My App',
});

const response = await client.chat.send({
  model: 'openai/gpt-5.2',
  messages: [{ role: 'user', content: 'Hello!' }],
});

3. Fetch Mode (Universal)

Direct fetch wrapper for any LLM API:

import { createVottur } from 'vottur/fetch';

const vottur = createVottur({
  logPath: '.vottur/logs.jsonl',
  sessionId: 'my-session',
});

const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

Or use the default instance:

import { vottur } from 'vottur/fetch';

// Uses default config
const response = await vottur('https://api.openai.com/v1/chat/completions', init);
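Conceptually, a fetch-mode wrapper is just a higher-order function around fetch. A minimal sketch of the idea (withLogging and the stub types are hypothetical; the real createVottur also handles JSONL writes, streaming capture, and trace IDs):

```typescript
// Hypothetical sketch of a logging fetch wrapper; not createVottur itself.
type MinimalResponse = { status: number; text(): Promise<string> };
type Fetch = (url: string, init?: { method?: string; body?: string }) => Promise<MinimalResponse>;
type FetchLog = { url: string; status: number; latency_ms: number };

function withLogging(fetchImpl: Fetch, log: (e: FetchLog) => void): Fetch {
  return async (url, init) => {
    const start = Date.now();
    const res = await fetchImpl(url, init); // request passes through unchanged
    log({ url, status: res.status, latency_ms: Date.now() - start }); // fire-and-forget
    return res; // response returned unchanged
  };
}
```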

Fetch mode supports streaming:

const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true,
  }),
});

// Stream is automatically logged when consumed
const reader = response.body?.getReader();
// ... read chunks

Fetch mode also exposes trace_id for agent hierarchy tracking:

const response = await vottur('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: { ... },
  body: JSON.stringify({
    model: 'gpt-5.2',
    messages: [...],
    _name: 'orchestrator',
  }),
});

// Get trace_id to pass to child agents
const traceId = (response as any).trace_id;

// Child request links back to parent
await vottur('...', {
  body: JSON.stringify({
    ...
    _spawnedBy: traceId,
  }),
});

Streaming

Works identically to the underlying SDK:

const stream = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Logs are written after the stream completes, with accumulated content and token usage.
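The accumulation step can be sketched as follows (illustrative only; accumulate is a hypothetical helper, not Vottur's internal code):

```typescript
// Accumulate streamed delta content into the final logged text (sketch).
type Chunk = { choices: { delta?: { content?: string } }[] };

function accumulate(chunks: Chunk[]): string {
  let content = '';
  for (const chunk of chunks) {
    // Each chunk carries a partial delta; missing content counts as empty.
    content += chunk.choices[0]?.delta?.content ?? '';
  }
  return content;
}
```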

Tool Calls

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Weather in Tokyo?' }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    },
  }],
});

Configuration

createClient({
  // OpenAI SDK options
  apiKey: string,
  baseURL?: string,

  // Vottur options
  logPath?: string,         // Default: .vottur/logs.jsonl
  sessionId?: string,       // Group related requests
  disabled?: boolean,       // Disable logging entirely
  onLog?: (entry) => void,  // Callback for each log entry
  logRawData?: boolean,     // Include raw request/response
  logRawChunks?: boolean,   // Include raw streaming chunks
});

Session Management

A unique session ID (sess_<UUID>) is generated once when you call createClient(). All requests made with that client share the same session, which is perfect for grouping an entire agentic workflow.

// Each createClient() call = new session
const client = createClient({ apiKey: '...' });
// All requests below share the same session
await client.chat.completions.create({ ... });  // sess_abc...
await client.chat.completions.create({ ... });  // sess_abc... (same)
await client.chat.completions.create({ ... });  // sess_abc... (same)

For multi-agent systems, pass the same client to all agents in a single run. They'll all share the same session, making it easy to trace the entire workflow:

async function runWorkflow(task: string) {
  const client = createClient({ apiKey: '...' });  // One session for the whole run

  // All agents share the same client = same session
  const planner = new PlannerAgent(client);
  const executor = new ExecutorAgent(client);
  const reviewer = new ReviewerAgent(client);

  const plan = await planner.plan(task);      // sess_abc...
  const result = await executor.run(plan);    // sess_abc... (same session)
  await reviewer.review(result);              // sess_abc... (same session)
}

await runWorkflow("task 1");  // sess_abc... (all agents)
await runWorkflow("task 2");  // sess_def... (new run = new session)

To manually control sessions:

client._vottur.newSession();  // Start a new session mid-run
client._vottur.setSessionId('custom-id');  // Use your own ID
client._vottur.getSessionId();  // Get current session ID
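Because every log line carries a session_id, grouping a run's requests back together is a few lines of JSONL parsing. groupBySession below is a hypothetical analysis helper, not part of Vottur:

```typescript
// Hypothetical helper: group JSONL log lines by session_id.
type Entry = { session_id: string; usage?: { total_tokens: number } };

function groupBySession(lines: string[]): Map<string, Entry[]> {
  const groups = new Map<string, Entry[]>();
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank lines
    const entry = JSON.parse(line) as Entry;
    const list = groups.get(entry.session_id) ?? [];
    list.push(entry);
    groups.set(entry.session_id, list);
  }
  return groups;
}
```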

Utility Methods

client._vottur.getSessionId();     // Current session ID
client._vottur.newSession();       // Start new session, returns new ID
client._vottur.setSessionId(id);   // Set custom session ID
client._vottur.flush();            // Flush pending writes to disk
client._vottur.close();            // Close and flush
client._vottur.getLogPath();       // Get log file path
client._vottur.getLastLogEntry();  // Get most recent log entry

Request Labeling

Use _name to label requests for easier log analysis. This helps identify which part of your system made each call:

// Label requests to identify their purpose
await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'planning-step',  // Shows up in logs as "name": "planning-step"
});

await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'code-review',
});

await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [...],
  _name: 'final-summary',
});

The _name field is stripped before the request is sent to the API; it is only used for logging.
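The stripping step amounts to pulling the underscore-prefixed fields out of the body before it goes out. A rough sketch of the idea (stripVotturFields is a hypothetical name, not Vottur's internal code):

```typescript
// Hypothetical sketch: split Vottur-only fields out of the request body.
function stripVotturFields(body: Record<string, unknown>) {
  const { _name, _spawnedBy, ...apiBody } = body;
  // apiBody is what actually goes to the API; the labels go into the log entry.
  return {
    apiBody,
    name: _name as string | undefined,
    spawnedBy: _spawnedBy as string | undefined,
  };
}
```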

Agent Hierarchy Tracking

For multi-agent systems, track parent-child relationships with _spawnedBy. Vottur exposes trace_id on every response, available immediately when the response/stream is returned (no need to wait for completion):

// Root orchestrator - no parent
const orchestratorResponse = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Plan the task' }],
  _name: 'orchestrator',
});

// Vottur exposes trace_id on the response!
const orchestratorTraceId = (orchestratorResponse as any).trace_id;

// Child agent - spawned by orchestrator
const workerResponse = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Execute subtask' }],
  _name: 'worker',
  _spawnedBy: orchestratorTraceId,  // Links to parent
});

// Get worker's trace_id for further children
const workerTraceId = (workerResponse as any).trace_id;

// Grandchild agent - spawned by worker
await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'Write code' }],
  _name: 'code-writer',
  _spawnedBy: workerTraceId,  // Links to worker
});

This creates a hierarchy in your logs where spawned_by matches trace_id:

{"trace_id": "tr_abc...", "name": "orchestrator", "spawned_by": null}
{"trace_id": "tr_def...", "name": "worker", "spawned_by": "tr_abc..."}
{"trace_id": "tr_ghi...", "name": "code-writer", "spawned_by": "tr_def..."}

This works at any depth of nesting: each agent simply passes its response.trace_id to its children. Combined with session_id, you get a complete picture: session_id groups the entire workflow, while spawned_by shows the call hierarchy within it.
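Reconstructing the call tree from logged entries takes only a few lines; buildTree here is a hypothetical analysis helper, not part of Vottur:

```typescript
// Hypothetical helper: rebuild the agent hierarchy from log entries.
type Node = { trace_id: string; name?: string; spawned_by?: string | null; children: Node[] };

function buildTree(entries: { trace_id: string; name?: string; spawned_by?: string | null }[]): Node[] {
  const byId = new Map<string, Node>();
  for (const e of entries) byId.set(e.trace_id, { ...e, children: [] });
  const roots: Node[] = [];
  for (const node of byId.values()) {
    // Attach each node under its parent; entries without a parent are roots.
    const parent = node.spawned_by ? byId.get(node.spawned_by) : undefined;
    (parent ? parent.children : roots).push(node);
  }
  return roots;
}
```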

Size Warnings

Vottur automatically detects large content and adds warnings to log entries:

  • Message > 50KB: "Message 0 content is large (52KB)"
  • Total input > 100KB: "Total input is large (128KB)"
  • Output > 50KB: "Output content is large (64KB)"

Warnings appear in the warnings array in log entries. This helps identify requests that might be consuming excessive tokens or hitting context limits.
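The thresholds translate into a simple byte-size check. The sketch below re-creates the logic from the thresholds listed above (sizeWarnings is hypothetical; the real field names and rounding may differ):

```typescript
// Hypothetical re-creation of the size-warning thresholds.
function sizeWarnings(messages: { content: string }[], output: string): string[] {
  const bytes = (s: string) => new TextEncoder().encode(s).length;
  const kb = (n: number) => `${Math.round(n / 1024)}KB`;
  const warnings: string[] = [];
  let total = 0;
  messages.forEach((m, i) => {
    const n = bytes(m.content);
    total += n;
    if (n > 50 * 1024) warnings.push(`Message ${i} content is large (${kb(n)})`);
  });
  if (total > 100 * 1024) warnings.push(`Total input is large (${kb(total)})`);
  const out = bytes(output);
  if (out > 50 * 1024) warnings.push(`Output content is large (${kb(out)})`);
  return warnings;
}
```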

Log Format

Each line in the JSONL file:

{
  "trace_id": "tr_a1b2c3d4-...",
  "session_id": "sess_e5f6g7h8-...",
  "timestamp": "2025-12-16T23:15:17.858Z",
  "latency_ms": 677,
  "model": "gpt-5.2",
  "name": "planning-step",
  "spawned_by": "tr_parent-...",
  "input": {
    "messages": [{ "role": "user", "content": "Hello!" }],
    "tools": [...],
    "temperature": 0.7
  },
  "output": {
    "content": "Hi there!",
    "tool_calls": [...],
    "finish_reason": "stop"
  },
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 8,
    "total_tokens": 22,
    "cost": 0.0001
  },
  "streaming": false,
  "warnings": ["Message 0 content is large (52KB)"]
}

Note: name, spawned_by, and warnings are optional fields that appear only when set.

Provider-specific fields (like reasoning_details, cost) are preserved.
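Since each line is standalone JSON, ad-hoc analysis needs nothing beyond JSON.parse per line. For example, summing token usage across a log file (totalTokens is a hypothetical helper, assuming the usage shape shown above):

```typescript
// Hypothetical helper: sum total_tokens across a JSONL log string.
function totalTokens(jsonl: string): number {
  return jsonl
    .split('\n')
    .filter(line => line.trim())
    .reduce((sum, line) => sum + (JSON.parse(line).usage?.total_tokens ?? 0), 0);
}
```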

Guarantees

Vottur is a transparent proxy. It wraps SDK/fetch calls without modifying anything:

Your Code                    Vottur                           API
    │                          │                                │
    │  request                 │                                │
    │ ────────────────────────►│                                │
    │                          │  request (unchanged)           │
    │                          │───────────────────────────────►│
    │                          │                                │
    │                          │◄───────────────────────────────│
    │                          │  response                      │
    │                          │                                │
    │                          │  [logs to JSONL in background] │
    │                          │                                │
    │◄─────────────────────────│                                │
    │  response (unchanged)    │                                │

  • Requests pass through unchanged to the underlying SDK/fetch
  • Responses return unchanged to your code
  • Logging is fire-and-forget (never awaited, zero latency impact)
  • All provider-specific fields are preserved
  • Streaming chunks flow through unchanged, captured for logging
  • Errors are logged and re-thrown unchanged

CLI

npx vottur init      # Set up .vottur/ in your project
npx vottur analyze   # Show logs location