# @haro/aitracer

TypeScript SDK for AITracer - AI/LLM monitoring and observability platform.
## Installation

```bash
npm install @haro/aitracer
# or
yarn add @haro/aitracer
# or
pnpm add @haro/aitracer
```

## Quick Start
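The snippets in this README read the AITracer key from an environment variable. The variable name below is taken from the Quick Start example; export it before running:

```shell
# Set your AITracer API key (variable name as used in the Quick Start snippet)
export AITRACER_API_KEY="your-aitracer-key"
```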
```typescript
import { AITracer, wrapOpenAI } from "@haro/aitracer";
import OpenAI from "openai";

// Initialize AITracer
const tracer = new AITracer({
  apiKey: process.env.AITRACER_API_KEY,
  projectId: "your-project-id", // optional
});

// Wrap your OpenAI client
const openai = wrapOpenAI(new OpenAI(), tracer);

// All API calls are now automatically logged
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```

## Supported Providers
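Each wrapper takes a provider client plus the tracer and returns a drop-in client. If you use several providers, a single tracer instance can presumably be shared — a sketch based on the wrapper signatures in this README, not a documented guarantee:

```typescript
import { AITracer, wrapOpenAI, wrapAnthropic } from "@haro/aitracer";
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

// One AITracer instance shared across two providers (assumption:
// nothing in this README suggests a tracer is provider-specific)
const tracer = new AITracer({ apiKey: process.env.AITRACER_API_KEY });

const openai = wrapOpenAI(new OpenAI(), tracer);
const anthropic = wrapAnthropic(new Anthropic(), tracer);
```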
### OpenAI
```typescript
import { AITracer, wrapOpenAI } from "@haro/aitracer";
import OpenAI from "openai";

const tracer = new AITracer({ apiKey: "your-aitracer-key" });
const openai = wrapOpenAI(new OpenAI(), tracer);

// Streaming is also supported
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

### Anthropic
```typescript
import { AITracer, wrapAnthropic } from "@haro/aitracer";
import Anthropic from "@anthropic-ai/sdk";

const tracer = new AITracer({ apiKey: "your-aitracer-key" });
const anthropic = wrapAnthropic(new Anthropic(), tracer);

const response = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello!" }],
});
```

### Google Gemini
```typescript
import { AITracer, wrapGemini } from "@haro/aitracer";
import { GoogleGenerativeAI } from "@google/generative-ai";

const tracer = new AITracer({ apiKey: "your-aitracer-key" });
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = wrapGemini(
  genAI.getGenerativeModel({ model: "gemini-1.5-flash" }),
  tracer
);

const result = await model.generateContent("Hello!");
console.log(result.response.text());
```

## Manual Logging
You can also log requests manually without using wrappers:
```typescript
import { AITracer } from "@haro/aitracer";

const tracer = new AITracer({ apiKey: "your-aitracer-key" });

// Log a single request
await tracer.log({
  model: "gpt-4o",
  provider: "openai",
  inputData: { messages: [{ role: "user", content: "Hello!" }] },
  outputData: { content: "Hi there!" },
  inputTokens: 10,
  outputTokens: 5,
  latencyMs: 500,
  status: "success",
});

// Log multiple requests in a batch
await tracer.logBatch([
  {
    model: "gpt-4o",
    provider: "openai",
    inputData: { messages: [{ role: "user", content: "Hello!" }] },
    outputData: { content: "Hi!" },
    inputTokens: 10,
    outputTokens: 3,
    latencyMs: 400,
  },
  {
    model: "claude-3-5-sonnet",
    provider: "anthropic",
    inputData: { messages: [{ role: "user", content: "Hi!" }] },
    outputData: { content: "Hello!" },
    inputTokens: 8,
    outputTokens: 4,
    latencyMs: 350,
  },
]);
```

## Configuration
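Per the option comments below, only `apiKey` is required; everything else has a default. A minimal setup looks like:

```typescript
import { AITracer } from "@haro/aitracer";

// Minimal configuration - just the API key; all other options use defaults
const tracer = new AITracer({
  apiKey: process.env.AITRACER_API_KEY,
});
```

The full option set: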
```typescript
const tracer = new AITracer({
  // Required
  apiKey: "your-aitracer-key",

  // Optional
  projectId: "your-project-id", // Default project for all logs
  baseUrl: "https://api.aitracer.io", // Custom API URL
  debug: false, // Enable debug logging
  timeout: 30000, // Request timeout in ms
  asyncMode: true, // Use async queue (non-blocking)
  flushInterval: 5000, // Queue flush interval in ms
  maxQueueSize: 100, // Max queue size before auto-flush
});
```

## Async Mode
By default, logs are queued and sent asynchronously to avoid blocking your application:
```typescript
const tracer = new AITracer({
  apiKey: "your-aitracer-key",
  asyncMode: true, // default
  flushInterval: 5000, // flush every 5 seconds
  maxQueueSize: 100, // or when queue reaches 100 logs
});

// Logs are queued (non-blocking)
tracer.log({ ... });

// Manually flush the queue
await tracer.flush();

// Shutdown and flush remaining logs
await tracer.shutdown();
```

For synchronous logging, set `asyncMode: false`:
```typescript
const tracer = new AITracer({
  apiKey: "your-aitracer-key",
  asyncMode: false,
});

// Logs are sent immediately (blocking)
await tracer.log({ ... });
```

## Tracing
Add trace context to correlate related requests:
```typescript
const traceId = crypto.randomUUID();

await tracer.log({
  model: "gpt-4o",
  provider: "openai",
  inputData: { messages: [{ role: "user", content: "Hello!" }] },
  outputData: { content: "Hi!" },
  traceId,
  spanId: crypto.randomUUID(),
  sessionId: "user-session-123",
  userId: "user-456",
});
```

## Custom Tags and Metadata
Add custom tags and metadata for filtering and analysis:
```typescript
await tracer.log({
  model: "gpt-4o",
  provider: "openai",
  inputData: { messages: [{ role: "user", content: "Hello!" }] },
  outputData: { content: "Hi!" },
  tags: {
    environment: "production",
    feature: "chatbot",
  },
  metadata: {
    customerId: "cust-123",
    requestSource: "mobile-app",
  },
});
```

## Error Handling
```typescript
import { AITracer, AITracerError } from "@haro/aitracer";

const tracer = new AITracer({ apiKey: "your-aitracer-key" });

try {
  await tracer.log({ ... });
} catch (error) {
  if (error instanceof AITracerError) {
    console.error(`AITracer error: ${error.message}`);
    console.error(`Status code: ${error.statusCode}`);
  }
}
```

## TypeScript Support
This SDK is written in TypeScript and includes full type definitions:
```typescript
import type { LogData, LogResponse, AITracerConfig } from "@haro/aitracer";

const config: AITracerConfig = {
  apiKey: "your-key",
  debug: true,
};

const logData: LogData = {
  model: "gpt-4o",
  provider: "openai",
  inputData: { messages: [] },
};
```

## Requirements
- Node.js 18.0.0 or later
- TypeScript 5.0 or later (for TypeScript users)
## License
MIT
