@with-orbit/sdk v0.1.5
# Orbit SDK

Track, monitor, and optimize your AI spend across OpenAI, Anthropic, and other LLM providers.
## Installation

```bash
npm install @with-orbit/sdk
# or
yarn add @with-orbit/sdk
# or
pnpm add @with-orbit/sdk
```

## Quick Start

### 1. Get your API key
Sign up at Orbit and create an API key.
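Rather than hard-coding the key (as the short examples below do for brevity), you will usually load it from the environment. A minimal sketch; the variable name `ORBIT_API_KEY` is illustrative, not something the SDK requires:

```typescript
// Illustrative helper: read the key from the environment and fail fast
// if it is missing, instead of embedding it in source code.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set`);
  return value;
}

// const orbit = new Orbit({ apiKey: requireEnv('ORBIT_API_KEY') });
```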
2. Initialize the SDK
import { Orbit } from '@with-orbit/sdk';
const orbit = new Orbit({
apiKey: 'orb_live_xxxxxxxxxxxxxxxxxxxxxxxx',
defaultFeature: 'my-app', // Optional: default feature for all events
});3. Track your LLM calls
Option A: Automatic tracking (Recommended)
Wrap your OpenAI or Anthropic client for automatic tracking:
```typescript
import OpenAI from 'openai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'chat-assistant', // Attribute all calls to this feature
});

// All API calls are now automatically tracked!
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});
```

Works with Anthropic too:
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const anthropic = orbit.wrapAnthropic(new Anthropic(), {
  feature: 'document-analysis',
});

const message = await anthropic.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Analyze this document...' }],
});
```

Works with Google Gemini (the new `@google/genai` SDK):
```typescript
import { GoogleGenAI } from '@google/genai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const ai = orbit.wrapGoogle(new GoogleGenAI({ apiKey: 'your-gemini-key' }), {
  feature: 'chat',
});

const response = await ai.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Hello, how are you?',
});
```

And with the legacy `@google/generative-ai` SDK:
```typescript
import { GoogleGenerativeAI } from '@google/generative-ai';
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

const genAI = orbit.wrapGoogleLegacy(new GoogleGenerativeAI('your-gemini-key'), {
  feature: 'chat',
});

const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
const result = await model.generateContent('Hello, how are you?');
```

#### Option B: Manual tracking
For other providers or custom implementations:
```typescript
import { Orbit } from '@with-orbit/sdk';

const orbit = new Orbit({ apiKey: 'orb_live_xxx' });

// Track a successful request
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 150,
  output_tokens: 50,
  latency_ms: 1234,
  feature: 'summarization',
  environment: 'production',
});

// Track an error
await orbit.trackError('gpt-4o', 'rate_limit_exceeded', 'Rate limit exceeded', {
  feature: 'chat-assistant',
  input_tokens: 150,
});
```

## Configuration
```typescript
const orbit = new Orbit({
  // Required
  apiKey: 'orb_live_xxx',

  // Optional
  baseUrl: 'https://app.withorbit.io/api/v1', // Custom API endpoint
  defaultFeature: 'my-app', // Default feature name
  defaultEnvironment: 'production', // 'production' | 'staging' | 'development'
  debug: false, // Enable debug logging

  // Batching (for high-volume applications)
  batchEvents: true, // Batch events before sending
  batchSize: 10, // Max events per batch
  batchInterval: 5000, // Max ms before a batch is sent

  // Reliability
  retry: true, // Retry failed requests
  maxRetries: 3, // Max retry attempts
});
```

## Event Properties
| Property | Type | Required | Description |
|----------|------|----------|-------------|
| `model` | string | Yes | Model name (e.g. `'gpt-4o'`, `'claude-3-opus'`) |
| `input_tokens` | number | Yes | Number of input tokens |
| `output_tokens` | number | Yes | Number of output tokens |
| `provider` | string | No | Provider name (auto-detected if not provided) |
| `latency_ms` | number | No | Request latency in milliseconds |
| `feature` | string | No | Feature name for attribution |
| `environment` | string | No | Environment (`'production'`, `'staging'`, `'development'`) |
| `status` | string | No | Request status (`'success'`, `'error'`, `'timeout'`) |
| `error_type` | string | No | Error type if `status` is `'error'` |
| `error_message` | string | No | Error message if `status` is `'error'` |
| `user_id` | string | No | Your application's user ID |
| `session_id` | string | No | Session ID for grouping requests |
| `request_id` | string | No | Unique request ID for tracing |
| `task_id` | string | No | Task ID for grouping related LLM calls in agentic workflows |
| `customer_id` | string | No | Customer ID for billing attribution |
| `metadata` | object | No | Additional key-value metadata |
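Putting the optional fields together, a fully-attributed event payload might look like the sketch below. The field names come from the table above; the values are illustrative. An object of this shape is what you would pass to `orbit.track()`:

```typescript
// Illustrative event payload using the optional attribution fields.
const event = {
  model: 'gpt-4o',
  input_tokens: 320,
  output_tokens: 90,
  latency_ms: 870,
  status: 'success',
  feature: 'chat-assistant',
  environment: 'production',
  user_id: 'user_42',
  session_id: 'sess_0a1b',
  request_id: 'req_9f3c',
  metadata: { plan: 'pro', region: 'eu-west-1' },
};
```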
## Feature Attribution

Feature attribution is at the core of Orbit: it lets you see exactly which parts of your application are consuming AI resources:
```typescript
// Track different features
await orbit.track({
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
  feature: 'chat-assistant', // <-- Attribute to chat feature
});

await orbit.track({
  model: 'gpt-4o',
  input_tokens: 500,
  output_tokens: 200,
  feature: 'document-analysis', // <-- Attribute to doc analysis
});
```

Then in the Orbit dashboard, you'll see:
- Cost breakdown by feature
- Request volume by feature
- Error rates by feature
- And more!
## Agentic Task Tracking
Track multi-step agentic workflows by grouping related LLM calls under a task:
```typescript
// All calls with the same task_id are grouped together
const openai = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'ai-agent',
  task_id: 'task_abc123', // Group all LLM calls for this task
  customer_id: 'cust_xyz789', // Attribute costs to this customer
});

// Step 1: Plan
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Plan how to analyze this data...' }],
});

// Step 2: Execute
await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Now execute the analysis...' }],
});

// Both calls are tracked under task_abc123
```

In the Orbit dashboard, you can then see:
- All LLM calls grouped by task
- Total cost per task
- Customer-level cost attribution
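If you track events manually (Option B above), you can thread the same `task_id` through each step yourself. A sketch, using only fields from the event table; the id format and step names are illustrative:

```typescript
import { randomUUID } from 'node:crypto';

// One task id per workflow run; attaching it to every step's event lets
// the dashboard group the steps into a single task.
const taskId = `task_${randomUUID()}`;

const steps = ['plan', 'execute'].map((step) => ({
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
  feature: 'ai-agent',
  task_id: taskId,            // same id groups the steps into one task
  customer_id: 'cust_xyz789', // billing attribution, as in the table above
  metadata: { step },
}));

// e.g. for (const e of steps) await orbit.track(e);
```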
## Environments
Track usage across different environments:
```typescript
const orbit = new Orbit({
  apiKey: 'orb_live_xxx',
  defaultEnvironment: process.env.NODE_ENV === 'production' ? 'production' : 'development',
});
```

## Graceful Shutdown
For serverless or short-lived processes, flush events before exit:
```typescript
// Before your process exits
await orbit.shutdown();
```

## TypeScript Support
Full TypeScript support with exported types:
```typescript
import { Orbit, OrbitEvent, OrbitConfig } from '@with-orbit/sdk';

const config: OrbitConfig = {
  apiKey: 'orb_live_xxx',
};

const event: OrbitEvent = {
  model: 'gpt-4o',
  input_tokens: 100,
  output_tokens: 50,
};
```

## License
MIT
