# AriadneTrace Core

A lightweight TypeScript library to monitor and debug LLM API calls in real time. It automatically intercepts calls to OpenAI, Anthropic, Google Gemini, and other providers with zero changes to your existing code.
## Features

- **One-line setup**: start monitoring in seconds
- **Auto-detection**: automatically detects OpenAI, Anthropic, Gemini, and other LLM providers
- **Complete monitoring**: track latency, tokens, costs, errors, and rate limits
- **Privacy-first**: automatic masking of API keys and sensitive content
- **Low overhead**: asynchronous interception with buffered, batched delivery
- **Framework agnostic**: works with Express, Next.js, Fastify, or any Node.js application
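Conceptually, the privacy masking above replaces the values of configured sensitive fields before a payload ever leaves your process. The sketch below illustrates that idea only; the library's real algorithm, the `'***'` replacement token, and the field names are assumptions, not its actual implementation:

```typescript
// Conceptual sketch of sensitive-field masking (illustrative only;
// not the library's real code).
function maskFields(
  payload: Record<string, unknown>,
  sensitiveFields: string[]
): Record<string, unknown> {
  const masked: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    // Replace the value of any configured sensitive field.
    masked[key] = sensitiveFields.includes(key) ? '***' : value;
  }
  return masked;
}

console.log(maskFields({ user: 'ann', password: 'hunter2' }, ['password', 'secret']));
// { user: 'ann', password: '***' }
```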
## Installation

```shell
npm install @ariadnetrace/core
```

## Quick Start
```typescript
import { createMonitor } from '@ariadnetrace/core';

// Initialize and intercept all LLM calls
const monitor = createMonitor('your-api-key', 'https://api.ariadnetrace.io');
monitor.interceptAll();

// That's it! All OpenAI, Anthropic, and Gemini calls are now monitored
```

## Usage with Existing Code
The library works transparently with your existing LLM client code:
```typescript
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { createMonitor } from '@ariadnetrace/core';

// Set up the monitor once
const monitor = createMonitor('your-api-key', 'https://api.ariadnetrace.io');
monitor.interceptAll();

// Your existing code works without any changes
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// These calls are automatically monitored
const completion = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

const message = await anthropic.messages.create({
  model: 'claude-3-sonnet-20240229',
  max_tokens: 1000,
  messages: [{ role: 'user', content: 'Hello!' }]
});
```

## What's Monitored
| Category | Metrics |
|----------|---------|
| Performance | Latency, success rate, throughput, retry patterns |
| Usage | Input/output tokens, cost estimation, rate limit headers |
| Debugging | Request/response payloads, HTTP headers, error details, request IDs |
| Analytics | Provider comparison, usage patterns, cost analysis |
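As a rough illustration of what the cost-estimation metric computes, the sketch below multiplies token counts by per-1K-token prices. The price table and function are placeholders for illustration only, not the library's actual pricing data or API:

```typescript
// Placeholder per-1K-token prices -- NOT real provider pricing.
const PRICE_PER_1K_TOKENS: Record<string, { input: number; output: number }> = {
  'gpt-4': { input: 0.03, output: 0.06 },
};

// Estimated cost = input tokens * input price + output tokens * output price.
function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const price = PRICE_PER_1K_TOKENS[model];
  if (!price) return 0; // unknown model: no estimate
  return (inputTokens / 1000) * price.input + (outputTokens / 1000) * price.output;
}

// e.g. 500 input + 200 output tokens on 'gpt-4' is roughly $0.027
console.log(estimateCostUSD('gpt-4', 500, 200).toFixed(3)); // "0.027"
```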
## Configuration
```typescript
import { LLMMonitor } from '@ariadnetrace/core';

const monitor = new LLMMonitor({
  apiKey: 'your-api-key',
  endpoint: 'https://api.ariadnetrace.io',

  // Privacy settings
  maskSensitiveData: true,
  sensitiveFields: ['password', 'secret'],

  // Performance settings
  bufferSize: 100,
  flushInterval: 5000, // ms

  // Provider-specific settings
  providers: {
    openai: { enabled: true, rateLimitTracking: true },
    anthropic: { enabled: true, rateLimitTracking: true },
    gemini: { enabled: true, rateLimitTracking: true }
  }
});

monitor.interceptAll();
```

## Event Listeners
Subscribe to real-time events for debugging or custom logging:
```typescript
monitor.on('request_start', (event) => {
  console.log('LLM Request:', event.data.metadata.provider, event.data.metadata.model);
});

monitor.on('request_complete', (event) => {
  console.log('LLM Response:', event.data.metadata.duration, 'ms');
});

monitor.on('request_error', (event) => {
  console.error('LLM Error:', event.error.message);
});
```

## Statistics API
Access real-time statistics programmatically:
```typescript
const stats = monitor.getStats();

console.log({
  totalRequests: stats.totalRequests,
  successRate: stats.successRate,
  averageLatency: stats.averageLatency,
  totalTokensUsed: stats.totalTokensUsed,
  estimatedCost: stats.estimatedCost,
  providerBreakdown: stats.providerBreakdown
});

// Get recent calls for debugging
const recentCalls = monitor.getRecentCalls(10);

// Export all data
const exportedData = monitor.exportData();
```

## Framework Integration
### Express.js
```typescript
import express from 'express';
import OpenAI from 'openai';
import { createMonitor } from '@ariadnetrace/core';

const app = express();
app.use(express.json());

const monitor = createMonitor('your-api-key', 'https://api.ariadnetrace.io');
monitor.interceptAll();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// All LLM calls in your routes are now monitored
app.post('/api/chat', async (req, res) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: req.body.messages
  });
  res.json(response);
});
```

### Next.js
```typescript
// app/api/chat/route.ts
import { createMonitor } from '@ariadnetrace/core';
import OpenAI from 'openai';

const monitor = createMonitor('your-api-key', 'https://api.ariadnetrace.io');
monitor.interceptAll();

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages
  });
  return Response.json(response);
}
```

## Self-Hosted Backend
AriadneTrace Core can send logs to your own backend. The library expects these endpoints:
- `POST /api/v1/ingest` - single log entry
- `POST /api/v1/ingest/batch` - batch of log entries

Authentication uses an `Authorization: Bearer <api-key>` or `X-API-Key: <api-key>` header.
See AriadneTrace Backend for a complete self-hosted solution.
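A minimal receiver for these two endpoints might look like the sketch below, built on Node's built-in `http` module. The payload shapes (one JSON object per entry, a JSON array for `/batch`) and the 204 responses are assumptions for illustration, not the AriadneTrace Backend's actual contract:

```typescript
import http from 'node:http';

// Accept either authentication header the library may send.
function extractApiKey(
  headers: Record<string, string | string[] | undefined>
): string | null {
  const auth = headers['authorization'];
  if (typeof auth === 'string' && auth.startsWith('Bearer ')) {
    return auth.slice('Bearer '.length);
  }
  const key = headers['x-api-key'];
  return typeof key === 'string' ? key : null;
}

const server = http.createServer((req, res) => {
  if (extractApiKey(req.headers) === null) {
    res.writeHead(401).end();
    return;
  }
  if (req.method === 'POST' &&
      (req.url === '/api/v1/ingest' || req.url === '/api/v1/ingest/batch')) {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      // Assumed payload: one JSON object, or a JSON array for /batch.
      const entries =
        req.url === '/api/v1/ingest/batch' ? JSON.parse(body) : [JSON.parse(body)];
      console.log(`stored ${entries.length} log entries`);
      res.writeHead(204).end();
    });
  } else {
    res.writeHead(404).end();
  }
});

// server.listen(4000);
```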
## API Reference
### `createMonitor(apiKey?, endpoint?)`
Factory function for quick setup.
| Parameter | Type | Description |
|-----------|------|-------------|
| `apiKey` | `string` | API key for authentication |
| `endpoint` | `string` | Backend endpoint URL |

Returns: an `LLMMonitor` instance.
### `LLMMonitor`
Main class for monitoring LLM calls.
#### Methods
| Method | Description |
|--------|-------------|
| `interceptAll()` | Start intercepting all LLM calls |
| `stop()` | Stop monitoring and flush pending data |
| `getStats()` | Get current statistics |
| `getRecentCalls(limit)` | Get recent call data |
| `exportData()` | Export all collected data |
| `on(event, callback)` | Subscribe to events |
| `off(event, callback?)` | Unsubscribe from events |
| `configure(config)` | Update configuration |
| `flush()` | Force-flush buffered data |
#### Events
| Event | Description |
|-------|-------------|
| `request_start` | Fired when an LLM request starts |
| `request_complete` | Fired when an LLM request completes |
| `request_error` | Fired when an LLM request fails |
## Requirements
- Node.js >= 18
- TypeScript >= 5.0 (for TypeScript users)
## Supported Providers
- OpenAI (GPT-3.5, GPT-4, etc.)
- Anthropic (Claude 2, Claude 3, etc.)
- Google Gemini
- Any provider using standard HTTP/fetch
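The last bullet hints at how zero-code interception can work in general: one common technique is wrapping the global `fetch` so every HTTP-based SDK call passes through a recording layer. The sketch below shows that general technique only; it is not AriadneTrace's actual implementation:

```typescript
// Illustrative fetch wrapper: records URL and latency for every call.
// This demonstrates the general technique, not the library's real code.
type FetchLike = typeof fetch;

function wrapFetch(
  original: FetchLike,
  onCall: (url: string, durationMs: number) => void
): FetchLike {
  return async (input, init) => {
    const url =
      typeof input === 'string' ? input :
      input instanceof URL ? input.href :
      input.url; // Request object
    const start = Date.now();
    try {
      return await original(input, init);
    } finally {
      // Report even when the request throws.
      onCall(url, Date.now() - start);
    }
  };
}

// Installing it globally would make every fetch-based SDK observable:
// globalThis.fetch = wrapFetch(globalThis.fetch, (url, ms) => console.log(url, ms));
```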
## Contributing
Contributions are welcome! Please read our Contributing Guide for details.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
MIT License - see the LICENSE file for details.
