@watchlog/ai-tracer (v1.1.1)
`watchlog-ai-tracer` is a lightweight SDK for tracing AI interactions (like GPT-4 requests) by sending spans to the Watchlog Agent installed on your local server.
A lightweight Node.js tracer for AI workloads, designed to capture and forward span data for monitoring and observability with Watchlog.
Features
- Automatic trace & span management with unique IDs
- Disk-backed queue with TTL to prevent data loss
- Batch HTTP delivery with retry and exponential backoff
- Kubernetes-aware endpoint detection
- Sensitive field sanitization and output truncation
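The batch delivery with retry and exponential backoff listed above can be sketched roughly as follows. This is illustrative only, not the SDK's actual internals; `postBatch` and `sendWithRetry` are hypothetical names standing in for the HTTP call to the Watchlog agent and its retry wrapper.

```javascript
// Illustrative sketch of batch delivery with retry and exponential
// backoff (not the SDK's real implementation). `postBatch` stands in
// for the HTTP call that ships a batch of spans to the agent.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendWithRetry(postBatch, spans, { maxRetries = 3, baseDelay = 100 } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await postBatch(spans);           // success: stop retrying
    } catch (err) {
      if (attempt === maxRetries) throw err;   // retries exhausted: surface the error
      await sleep(baseDelay * 2 ** attempt);   // backoff: 100ms, 200ms, 400ms, ...
    }
  }
}
```

On repeated failure the delay doubles each attempt, so transient agent outages are retried without hammering the endpoint.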
Installation
npm install @watchlog/ai-tracer
# or
yarn add @watchlog/ai-tracer
Usage
// README example — OpenAI call + tracing (non-blocking, no sleep)
const WatchlogTracer = require('@watchlog/ai-tracer');

// 1) Init tracer (exit hooks installed automatically: beforeExit / SIGINT / SIGTERM)
const tracer = new WatchlogTracer({
  app: 'your-app-name',         // 🆔 required
  batchSize: 200,               // 🔄 spans per HTTP batch
  flushOnSpanCount: 200,        // 🧺 enqueue to disk after N spans
  autoFlushInterval: 1500,      // ⏲ background flush interval (ms)
  maxQueueSize: 100000,         // 📥 max queued spans on disk
  queueItemTTL: 10 * 60 * 1000, // ⌛ TTL for queued spans (ms)
  // autoInstallExitHooks: true, // ✅ default (flushes on exit)
});

// 2) Helper: wrap any async work in a span
async function traceAsync(name, metadata, fn) {
  const spanId = tracer.startSpan(name, metadata);
  try {
    const result = await fn();
    tracer.endSpan(spanId, { output: 'ok' });
    return result;
  } catch (e) {
    tracer.endSpan(spanId, { output: String(e?.message || e) });
    throw e;
  } finally {
    tracer.send(); // non-blocking: write to disk + background flush
  }
}

// 3) Call OpenAI and capture input/output/tokens in the trace
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function callOpenAI(prompt, { parentId } = {}) {
  const llmSpan = tracer.childSpan(parentId, 'openai.chat.completions', {
    provider: 'openai',
    model: 'gpt-4o',
  });
  try {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: prompt }],
      }),
    });
    const json = await res.json();
    const output = json?.choices?.[0]?.message?.content ?? '';
    const tokens = json?.usage?.total_tokens ?? 0;
    tracer.endSpan(llmSpan, {
      input: prompt,
      output,
      tokens,
      model: 'gpt-4o',
      provider: 'openai',
      cost: 0, // optional: set this if you track cost
    });
    return output;
  } catch (e) {
    tracer.endSpan(llmSpan, { input: prompt, output: String(e?.message || e) });
    throw e;
  } finally {
    tracer.send(); // non-blocking
  }
}

// 4) Example flow (root span + child span for the LLM call)
async function main() {
  tracer.startTrace(); // optional but recommended (groups spans)
  const root = tracer.startSpan('handle-request', { feature: 'ai-summary' });

  // Validate input (fast op, no sleep)
  await traceAsync('validate-input', { parentId: root }, async () => {
    // ... your validation logic
  });

  // Call the LLM and capture the trace
  const summary = await callOpenAI('Summarize: Hello world...', { parentId: root });

  // Close the root span
  tracer.endSpan(root, { output: 'done' });
  tracer.send(); // non-blocking
  console.log('LLM summary:', summary);
}

main().catch((err) => {
  console.error('App error:', err);
});
API
new WatchlogTracer(config)
- config.app (string, required) — Your application name.
- config.agentURL (string) — URL of the Watchlog agent (default: auto-detected per environment or from the WATCHLOG_AGENT_URL env var).
- config.batchSize (number) — Number of spans per HTTP batch (default: 50).
- config.autoFlushInterval (number) — Milliseconds between automatic queue flushes (default: 1000).
- config.maxQueueSize (number) — Maximum spans stored on disk before rotation (default: 10000).
- config.queueItemTTL (number) — Time-to-live for queued spans in ms (default: 600000).
- config.maxRetries (number) — HTTP retry attempts (default: 3).
- config.requestTimeout (number) — Axios request timeout in ms (default: 5000).
- config.sensitiveFields (string[]) — Field keys to strip from trace data.
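As an illustration of what sensitiveFields stripping might look like, here is a minimal sketch. The `sanitize` helper and the `'[REDACTED]'` placeholder are assumptions for this example; the SDK's actual sanitization logic may differ.

```javascript
// Illustrative only: one way sensitive-field stripping could work.
// Keys listed in `sensitiveFields` are replaced with a placeholder
// before the span data leaves the process.
function sanitize(data, sensitiveFields) {
  const out = {};
  for (const [key, value] of Object.entries(data)) {
    out[key] = sensitiveFields.includes(key) ? '[REDACTED]' : value;
  }
  return out;
}
```

For example, `sanitize({ prompt: 'hi', apiKey: 'sk-...' }, ['apiKey'])` keeps the prompt but masks the key.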
Tracing Methods
startTrace() → traceId
Begins a new trace. Returns the generated traceId.

startSpan(name, metadata) → spanId
Creates a span under the current traceId.

childSpan(parentSpanId, name, metadata) → spanId
Alias for startSpan with a parentId.

endSpan(spanId, data)
Marks a span as complete, recording timestamps, duration, tokens, cost, etc.

send()
Enqueues all pending spans to disk immediately.
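Spans enqueued to disk are subject to queueItemTTL. As an illustration of that pruning behavior (not the SDK's actual code), assume each queued item carries an `enqueuedAt` timestamp in milliseconds; `pruneExpired` is a hypothetical helper name.

```javascript
// Illustrative sketch of TTL-based pruning for the disk-backed queue.
// Items older than `ttlMs` are dropped so stale spans are never delivered.
function pruneExpired(queue, ttlMs, now = Date.now()) {
  return queue.filter((item) => now - item.enqueuedAt <= ttlMs);
}
```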
Agent URL Configuration
The agent URL is determined in the following priority order:
- Explicit config parameter: agentURL in WatchlogTracer initialization
- Environment variable: WATCHLOG_AGENT_URL
- Auto-detection:
  - Kubernetes: if running in K8s (detected via ServiceAccount tokens, cgroup info, or DNS lookup), auto-switches to http://watchlog-node-agent.monitoring.svc.cluster.local:3774
  - Local: defaults to http://127.0.0.1:3774
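The priority order above can be sketched as a small resolver. This is illustrative only; `resolveAgentURL` is a hypothetical name, and the `isKubernetes` flag stands in for the SDK's actual K8s detection (ServiceAccount tokens, cgroup info, DNS lookup).

```javascript
// Illustrative resolver for the documented priority order:
// explicit config > env var > Kubernetes auto-detection > local default.
function resolveAgentURL(config = {}, env = process.env, isKubernetes = false) {
  if (config.agentURL) return config.agentURL;               // 1) explicit config
  if (env.WATCHLOG_AGENT_URL) return env.WATCHLOG_AGENT_URL; // 2) environment variable
  if (isKubernetes) {                                        // 3) K8s auto-detection
    return 'http://watchlog-node-agent.monitoring.svc.cluster.local:3774';
  }
  return 'http://127.0.0.1:3774';                            // 4) local default
}
```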
Examples
// Option 1: Pass agentURL directly
const tracer = new WatchlogTracer({
  app: 'myapp',
  agentURL: 'http://my-custom-agent:3774'
});

// Option 2: Use environment variable
// export WATCHLOG_AGENT_URL=http://my-custom-agent:3774
// or
// process.env.WATCHLOG_AGENT_URL = 'http://my-custom-agent:3774';
const tracer = new WatchlogTracer({
  app: 'myapp'
});

// Option 3: Auto-detection (default behavior)
const tracer = new WatchlogTracer({
  app: 'myapp'
});
Running Tests
Use the provided test.js script in the repository root:
node test.js
Contributing
PRs and issues welcome — please read our contributing guidelines.
License
MIT © Watchlog Monitoring
