# @bearlumen/node-sdk

v0.2.0
Official Node.js SDK for Bear Lumen -- AI cost intelligence for your application.
Track AI provider usage automatically. Know what your AI features cost.
## Installation

```bash
npm install @bearlumen/node-sdk
# or
yarn add @bearlumen/node-sdk
```

## Quick Start
```ts
import OpenAI from 'openai';
import { BearLumen } from '@bearlumen/node-sdk';

const openai = new OpenAI();
const bear = new BearLumen({
  apiKey: process.env.BEAR_LUMEN_API_KEY!,
});

// Track an OpenAI response
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

const result = bear.track(response);
console.log(result.model); // 'gpt-4o'
console.log(result.inputTokens); // 12

// Flush before process exit
await bear.shutdown();
```

## Features
- Auto-detection: Wraps responses from OpenAI, Anthropic, AWS Bedrock, Google Gemini, Mistral, Ollama
- Streaming support: Transparent wrapping -- chunks pass through unchanged
- Background batching: Events queued in memory, flushed in batches (Segment-style)
- Non-LLM tracking: TTS (characters), image generation (count), GPU compute (seconds)
- Cost queries: `bear.costs.byModel()`, `bear.costs.byProvider()`, `bear.costs.byFeature()`
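Feature and per-user cost attribution hang off the optional hints accepted by every `track` call (see TrackOptions in the API reference). A minimal sketch — the `'chat'` label, user id, and metadata values here are illustrative, not fixed names:

```ts
import OpenAI from 'openai';
import { BearLumen } from '@bearlumen/node-sdk';

const openai = new OpenAI();
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// `feature` feeds bear.costs.byFeature(); `userId` enables
// per-user cost tracking; `metadata` is free-form key-value pairs.
bear.track(response, {
  feature: 'chat',
  userId: 'user_123',
  metadata: { plan: 'pro' },
});
```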
## Usage Examples

### OpenAI (non-streaming)
```ts
import OpenAI from 'openai';
import { BearLumen } from '@bearlumen/node-sdk';

const openai = new OpenAI();
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
});

const result = bear.track(response);
// result.model -> 'gpt-4o'
// result.provider -> 'openai'
// result.inputTokens -> 12
// result.outputTokens -> 85
```

### AWS Bedrock (streaming)
```ts
import { BedrockRuntimeClient, ConverseStreamCommand } from '@aws-sdk/client-bedrock-runtime';
import { BearLumen } from '@bearlumen/node-sdk';

const bedrock = new BedrockRuntimeClient({ region: 'us-east-1' });
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const command = new ConverseStreamCommand({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  messages: [{ role: 'user', content: [{ text: 'Hello' }] }],
});
const response = await bedrock.send(command);

// Wrap the stream -- chunks pass through unchanged
const stream = bear.track(response.stream!, {
  model: 'anthropic.claude-3-haiku',
});

for await (const chunk of stream) {
  // Process chunks as usual
  if (chunk.contentBlockDelta?.delta?.text) {
    process.stdout.write(chunk.contentBlockDelta.delta.text);
  }
}

// Usage reported after stream completes
const result = await stream.result;
console.log(result.inputTokens, result.outputTokens);
```

### Anthropic (non-streaming)
```ts
import Anthropic from '@anthropic-ai/sdk';
import { BearLumen } from '@bearlumen/node-sdk';

const anthropic = new Anthropic();
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello' }],
});

const result = bear.track(response);
// result.model -> 'claude-3-5-sonnet-20241022'
// result.provider -> 'anthropic'
```

### Google Gemini
```ts
import { GoogleGenerativeAI } from '@google/generative-ai';
import { BearLumen } from '@bearlumen/node-sdk';

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });
const response = await model.generateContent('Hello');

const result = bear.track(response.response, { model: 'gemini-1.5-flash' });
// result.provider -> 'gemini'
```

### Non-LLM: Text-to-Speech
```ts
import { BearLumen, Provider } from '@bearlumen/node-sdk';

const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

// After calling your TTS provider
bear.track(null, {
  model: 'eleven_multilingual_v2',
  provider: Provider.ELEVENLABS,
  feature: 'narration',
  units: { characters: 1500 },
});
```

### Non-LLM: Image Generation
```ts
import { BearLumen, Provider } from '@bearlumen/node-sdk';

const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

bear.track(null, {
  model: 'dall-e-3',
  provider: Provider.OPENAI,
  feature: 'image-generation',
  units: { generations: 1 },
});
```

## Supported Providers
| Provider | Constant | Detection | Streaming |
|----------|----------|-----------|-----------|
| OpenAI | `Provider.OPENAI` | Auto | Yes |
| Anthropic | `Provider.ANTHROPIC` | Auto | Yes |
| AWS Bedrock | `Provider.BEDROCK` | Auto (model hint required) | Yes |
| Google Gemini | `Provider.GEMINI` | Auto | Yes |
| Mistral (native SDK) | `Provider.MISTRAL` | Auto | Yes |
| Ollama (native API) | `Provider.OLLAMA` | Auto | - |
| Together AI | `Provider.TOGETHER` | Auto (OpenAI-compatible) | Yes |
| Groq | `Provider.GROQ` | Auto (OpenAI-compatible) | Yes |
| Fireworks AI | `Provider.FIREWORKS` | Auto (OpenAI-compatible) | Yes |
| OpenRouter | `Provider.OPENROUTER` | Auto (OpenAI-compatible) | Yes |
| ElevenLabs | `Provider.ELEVENLABS` | Manual (hints) | - |
| Deepgram | `Provider.DEEPGRAM` | Manual (hints) | - |
| Stability AI | `Provider.STABILITY` | Manual (hints) | - |
| Replicate | `Provider.REPLICATE` | Manual (hints) | - |
| MiniMax | `Provider.MINIMAX` | Manual (hints) | - |
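For the OpenAI-compatible providers above, the regular OpenAI client pointed at the provider's endpoint should work unchanged. A sketch using Groq — the base URL and model name are Groq's at the time of writing, and the explicit `Provider.GROQ` hint is shown for clarity even where auto-detection would suffice:

```ts
import OpenAI from 'openai';
import { BearLumen, Provider } from '@bearlumen/node-sdk';

// Groq exposes an OpenAI-compatible API, so the standard OpenAI client works.
const groq = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: 'https://api.groq.com/openai/v1',
});
const bear = new BearLumen({ apiKey: process.env.BEAR_LUMEN_API_KEY! });

const response = await groq.chat.completions.create({
  model: 'llama-3.1-8b-instant',
  messages: [{ role: 'user', content: 'Hello' }],
});

// The response carries an OpenAI-shaped usage block; the provider
// hint attributes the event to Groq.
bear.track(response, { provider: Provider.GROQ });
```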
## Cost Queries

```ts
// Cost breakdown by model
const byModel = await bear.costs.byModel({
  startDate: '2025-01-01',
  endDate: '2025-01-31',
});

for (const item of byModel.items) {
  console.log(`${item.attributionValue}: $${item.totalCost} (${item.percentOfTotal}%)`);
}

// Cost breakdown by provider
const byProvider = await bear.costs.byProvider({
  startDate: '2025-01-01',
  endDate: '2025-01-31',
});

// Cost breakdown by feature
const byFeature = await bear.costs.byFeature({
  startDate: '2025-01-01',
  endDate: '2025-01-31',
});
```

## Configuration
```ts
const bear = new BearLumen({
  apiKey: 'bl_live_...',                // Required
  baseUrl: 'https://api.bearlumen.com', // Optional (default)
  maxBatchSize: 20,                     // Optional: events per batch (default: 20)
  flushIntervalMs: 10000,               // Optional: flush interval in ms (default: 10000)
  onError: (error) => {                 // Optional: error handler for background flush failures
    console.error('Bear Lumen flush error:', error);
  },
});
```

## Graceful Shutdown
```ts
// In serverless functions or before process exit
await bear.shutdown();

// Or for manual flush without stopping the timer
await bear.flush();
```

## API Reference
### `bear.track(response, options?)`

Track an AI provider response. Auto-detects the provider and extracts usage metadata.
Returns a `TrackResult` with `{ eventId, model, provider, inputTokens, outputTokens }`.

### `bear.track(asyncIterable, options?)`

Wrap a streaming response. Returns a `TrackedStream` that passes chunks through unchanged.
Access usage data after consumption via `await stream.result`.

### `bear.track(null, options)` (manual)

Track non-LLM services (TTS, image generation, etc.). Requires `model` in options.
Returns a `TrackResult` with `inputTokens: 0, outputTokens: 0`.

### `bear.costs.byModel(params)`

Get the cost breakdown grouped by model. Params: `{ startDate, endDate }`.

### `bear.costs.byProvider(params)`

Get the cost breakdown grouped by provider. Params: `{ startDate, endDate }`.

### `bear.costs.byFeature(params)`

Get the cost breakdown grouped by feature. Params: `{ startDate, endDate }`.

### `bear.flush()`

Manually flush all queued events to the API.

### `bear.shutdown()`

Flush all events and stop the background flush timer. Call before process exit.
### TrackOptions

```ts
interface TrackOptions {
  model?: string;                     // Model identifier (required for Bedrock, optional for others)
  provider?: ProviderId;              // Provider name override (use Provider.* constants)
  feature?: string;                   // Feature label for attribution (e.g., 'chat', 'narration')
  userId?: string;                    // End-user identifier for per-user cost tracking
  units?: Record<string, number>;     // Non-token units (characters, generations, etc.)
  metadata?: Record<string, unknown>; // Custom metadata key-value pairs
}
```

## Error Handling
```ts
import { BearLumen, BearLumenApiError } from '@bearlumen/node-sdk';

const bear = new BearLumen({
  apiKey: process.env.BEAR_LUMEN_API_KEY!,
  onError: (error) => {
    // Handle background flush errors
    if (error instanceof BearLumenApiError) {
      switch (error.code) {
        case 'authentication_error':
          console.error('Invalid API key');
          break;
        case 'rate_limit_exceeded':
          console.error(`Rate limited, retry after ${error.retryAfter}s`);
          break;
        case 'network_error':
          console.error('Network error -- events will be retried');
          break;
        default:
          console.error(`API error: ${error.message}`);
      }
    }
  },
});

// Cost queries throw directly (not background)
try {
  const costs = await bear.costs.byModel({
    startDate: '2025-01-01',
    endDate: '2025-01-31',
  });
} catch (error) {
  if (error instanceof BearLumenApiError) {
    console.error(`Error ${error.code}: ${error.message}`);
  }
}
```

## Error Codes
| Code | Description |
|------|-------------|
| `authentication_error` | Invalid or missing API key |
| `rate_limit_exceeded` | Too many requests (check `retryAfter`) |
| `network_error` | Connection failed or timed out |
| `server_error` | Server-side error (5xx) |
| `invalid_request` | Bad request parameters (4xx) |
| `not_found` | Resource not found (404) |
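When a cost query fails with `rate_limit_exceeded`, the `retryAfter` hint can drive a retry loop. The helpers below are our own sketch, not part of the SDK; `withRetry` would wrap a call such as `bear.costs.byModel(...)`, with the caller deciding which errors are retryable:

```typescript
// Backoff that honors a server-provided retry-after hint (in seconds),
// falling back to capped exponential backoff. Not part of the SDK.
function backoffMs(attempt: number, retryAfterSeconds?: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  return Math.min(1000 * 2 ** attempt, 30_000); // 1s, 2s, 4s, ... capped at 30s
}

// Generic retry wrapper: `retryAfterOf` returns the retry-after hint in
// seconds (or undefined) for retryable errors, and false for fatal ones.
async function withRetry<T>(
  fn: () => Promise<T>,
  retryAfterOf: (err: unknown) => number | undefined | false,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const hint = retryAfterOf(err);
      if (hint === false || attempt + 1 >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt, hint)));
    }
  }
}
```

With this SDK, `retryAfterOf` could check `error instanceof BearLumenApiError && error.code === 'rate_limit_exceeded'` and return `error.retryAfter`, returning `false` for codes like `authentication_error`.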
## TypeScript Support

This SDK is written in TypeScript and includes full type definitions.

```ts
import {
  BearLumen,
  Provider,
  BearLumenConfig,
  TrackOptions,
  TrackResult,
  CostQueryParams,
  CostBreakdownResponse,
  CostBreakdownItem,
  BearLumenApiError,
} from '@bearlumen/node-sdk';
```

## Requirements
- Node.js >= 18.0.0
## Documentation

Full documentation is available at [docs.bearlumen.com](https://docs.bearlumen.com).

## License

MIT
