@openmonetize/sdk
Official TypeScript/JavaScript SDK for OpenMonetize - AI usage tracking and consumption-based billing.
Installation
npm install @openmonetize/sdk
# or
yarn add @openmonetize/sdk
# or
pnpm add @openmonetize/sdk
Quick Start
import { OpenMonetize } from "@openmonetize/sdk";
// Initialize the client
const client = new OpenMonetize({
apiKey: process.env.OPENMONETIZE_API_KEY!,
});
// Track token usage (automatically batched)
client.trackTokenUsage({
user_id: "law-firm-a",
customer_id: "legalai-corp",
feature_id: "legal-research",
provider: "OPENAI",
model: "gpt-4",
input_tokens: 1000,
output_tokens: 500,
});
// Flush events before process exit (optional, events are auto-flushed)
await client.flush();
Features
- ✅ Type-Safe - Full TypeScript support with comprehensive type definitions
- ✅ AI Proxy Support - Zero-code integration via the getProxyConfig() helper
- ✅ Easy Integration - Drop-in helpers for OpenAI, Anthropic, Google Gemini, and more
- ✅ Real-Time Tracking - Track AI usage as it happens
- ✅ No Redundant IDs - API key determines customer identity automatically
- ✅ Credit Management - Check balances, purchase credits, view transactions
- ✅ Entitlements - Gate features based on credit balance
- ✅ Analytics - Get usage insights and cost breakdowns
- ✅ Batch Processing - Efficient bulk event tracking
- ✅ Error Handling - Proper error types and retry logic
🚀 AI Proxy (Easiest Integration)
The fastest way to add billing—just change your baseURL!
Use the SDK helper to configure your AI client for automatic billing:
import Anthropic from "@anthropic-ai/sdk";
import { getProxyConfig } from "@openmonetize/sdk";
// Cloud SaaS: Uses https://proxy.openmonetize.io by default
const proxyConfig = getProxyConfig({
openmonetizeApiKey: process.env.OPENMONETIZE_API_KEY!,
customerId: "cust_your-company",
userId: "user_current-user",
featureId: "ai-chat",
});
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY!,
baseURL: proxyConfig.baseURL,
defaultHeaders: proxyConfig.headers,
});
// Use Anthropic normally - billing is automatic!
const message = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 1024,
messages: [{ role: "user", content: "Hello!" }],
});
Default Proxy URLs:
| Environment | URL |
|-------------|-----|
| Cloud SaaS (default) | https://proxy.openmonetize.io/v1 |
| Local Development | http://localhost:8082/v1 |
| Self-Hosted | Your custom URL |
For local development, pass proxyBaseUrl: 'http://localhost:8082/v1' to getProxyConfig(), as in the sketch below.
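A minimal sketch of a local setup, reusing the options from the cloud example above and adding the proxyBaseUrl override mentioned in the note:
import { getProxyConfig } from "@openmonetize/sdk";
// Local development: point getProxyConfig at the locally running proxy.
// All options besides proxyBaseUrl mirror the cloud example above.
const localProxyConfig = getProxyConfig({
openmonetizeApiKey: process.env.OPENMONETIZE_API_KEY!,
customerId: "cust_your-company",
userId: "user_current-user",
featureId: "ai-chat",
proxyBaseUrl: "http://localhost:8082/v1",
});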
Supported Providers:
| Provider | Proxy Endpoint |
|----------|----------------|
| OpenAI | POST /v1/chat/completions |
| Anthropic | POST /v1/messages |
| Gemini | POST /v1beta/models/:model:generateContent |
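The OpenAI endpoint above works the same way as the Anthropic example: pass the proxy's baseURL and headers to the official OpenAI SDK. This is a sketch; the SDK also exposes a createBilledOpenAIConfig helper for this (see the API reference below).
import OpenAI from "openai";
import { getProxyConfig } from "@openmonetize/sdk";
const proxyConfig = getProxyConfig({
openmonetizeApiKey: process.env.OPENMONETIZE_API_KEY!,
customerId: "cust_your-company",
userId: "user_current-user",
featureId: "ai-chat",
});
// Route OpenAI traffic through the proxy so usage is billed automatically
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!,
baseURL: proxyConfig.baseURL,
defaultHeaders: proxyConfig.headers,
});
const completion = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Hello!" }],
});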
Testing with Sandbox
The easiest way to verify your integration is using the OpenMonetize Sandbox. It visualizes the flow of data and helps you debug issues in real-time.
Local Development
If you are running the platform locally:
- Go to http://localhost:3002/sandbox.
- Use the Integration Code tab to copy the snippet for your specific use case.
- Run your code and watch the Live Logs in the sandbox to see the event travel through the system.
Cloud Sandbox
If you are using the managed OpenMonetize cloud:
- Log in to the Console.
- Navigate to the Sandbox tab.
- Use your production API key to test events against the live system.
Usage Examples
1. OpenAI Integration
import OpenAI from "openai";
import { OpenMonetize, withOpenAITracking } from "@openmonetize/sdk";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const monitor = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
// Wrap OpenAI calls with automatic tracking
const response = await withOpenAITracking(
monitor,
() =>
openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Write a legal brief" }],
}),
{
customerId: "legalai-corp",
userId: "law-firm-a",
featureId: "legal-research",
},
);
// Usage is automatically tracked!
console.log(response.choices[0].message.content);
2. Anthropic Integration
import Anthropic from "@anthropic-ai/sdk";
import { OpenMonetize, withAnthropicTracking } from "@openmonetize/sdk";
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const monitor = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
const response = await withAnthropicTracking(
monitor,
() =>
anthropic.messages.create({
model: "claude-3-sonnet-20240229",
max_tokens: 1024,
messages: [{ role: "user", content: "Analyze this contract" }],
}),
{
customerId: "legalai-corp",
userId: "law-firm-a",
featureId: "contract-analysis",
},
);
3. Google (Gemini) Integration
import { GoogleGenerativeAI } from "@google/generative-ai";
import { OpenMonetize, withGoogleTracking } from "@openmonetize/sdk";
const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-pro" });
const monitor = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
const response = await withGoogleTracking(
monitor,
() => model.generateContent("Write a story about a magic backpack."),
{
customerId: "legalai-corp",
userId: "law-firm-a",
featureId: "story-generation",
model: "gemini-pro",
},
);
console.log(response.response.text());
4. Image Generation Tracking
// Track usage for image generation models like DALL-E 3 or Midjourney
client.trackImageGeneration({
user_id: "law-firm-a",
customer_id: "legalai-corp",
feature_id: "marketing-assets",
provider: "OPENAI",
model: "dall-e-3",
image_count: 1,
image_size: "1024x1024",
quality: "hd",
});
5. Custom Event Tracking
// Track any custom unit for outcome-based metering
client.trackCustomEvent({
user_id: "law-firm-a",
customer_id: "legalai-corp",
feature_id: "video-transcription",
unit_type: "minutes",
quantity: 15.5,
metadata: {
file_id: "vid_123",
language: "en-US",
},
});
6. Check Credit Balance Before API Call
// Check if user has enough credits
const balance = await client.getCreditBalance('user-123');
if (balance.available < 100) {
console.log('Insufficient credits!');
console.log(`Current balance: ${balance.available} credits`);
console.log(`Reserved: ${balance.reserved} credits`);
return;
}
// Proceed with AI call
await openai.chat.completions.create({...});
7. Real-Time Entitlement Check
// Check if user can perform an action BEFORE executing it
const entitlement = await client.checkEntitlement({
user_id: 'user-123',
feature_id: 'legal-research',
action: {
type: 'token_usage',
provider: 'openai',
model: 'gpt-4',
estimated_input_tokens: 1000,
estimated_output_tokens: 500
}
});
if (!entitlement.allowed) {
console.log(`Access denied: ${entitlement.reason}`);
console.log(`Current balance: ${entitlement.current_balance} credits`);
console.log(`Estimated cost: ${entitlement.estimated_cost_credits} credits`);
// Show upgrade options
entitlement.actions.forEach(action => {
console.log(`${action.label}: ${action.url}`);
});
return;
}
// User has sufficient credits, proceed
await openai.chat.completions.create({...});
7b. Video Generation Entitlement Check
// Check if user can generate a video BEFORE executing it
const entitlement = await client.checkEntitlement({
userId: "user-123",
featureId: "video-generation",
action: {
type: "video_generation",
provider: "GOOGLE",
model: "veo-3.1-generate-video",
estimatedDurationSeconds: 8,
},
});
if (!entitlement.allowed) {
console.log(`Access denied: ${entitlement.reason}`);
console.log(`Estimated cost: ${entitlement.estimatedCostCredits} credits`);
return;
}
// User has sufficient credits, proceed with video generation
await generateVideo({ model: "veo-3.1-generate-video", duration: 8 });
7c. Image Generation Entitlement Check
// Check if user can generate images BEFORE executing it
const entitlement = await client.checkEntitlement({
userId: 'user-123',
featureId: 'image-generation',
action: {
type: 'image_generation',
provider: 'OPENAI',
model: 'dall-e-3',
estimatedCount: 4,
imageSize: '1024x1024',
quality: 'hd',
}
});
if (!entitlement.allowed) {
console.log(`Access denied: ${entitlement.reason}`);
return;
}
// Proceed with image generation
await openai.images.generate({ model: 'dall-e-3', n: 4, ... });
8. Batch Event Tracking
Note: All tracking methods (trackTokenUsage, trackImageGeneration, etc.) are automatically batched by default to optimize performance. You only need to use BatchTracker if you want manual control over the batching process.
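If your process is short-lived (a CLI run or a serverless handler), the background batch may not have been sent by the time the process exits. A small sketch using the flush() call from the Quick Start:
import { OpenMonetize } from "@openmonetize/sdk";
const client = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
// Events are queued locally and sent in batches, so flush before a
// short-lived process returns to avoid dropping the last batch
async function handleRequest() {
client.trackTokenUsage({
user_id: "law-firm-a",
customer_id: "legalai-corp",
feature_id: "legal-research",
provider: "OPENAI",
model: "gpt-4",
input_tokens: 1000,
output_tokens: 500,
});
await client.flush(); // deliver any queued events before exiting
}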
9. Manual Batching
import { BatchTracker } from "@openmonetize/sdk";
const tracker = new BatchTracker(client);
// Add multiple events
tracker.add({
customerId: "legalai-corp",
userId: "law-firm-a",
featureId: "document-review",
provider: "OPENAI",
model: "gpt-4",
inputTokens: 1000,
outputTokens: 500,
});
tracker.add({
customerId: "legalai-corp",
userId: "law-firm-b",
featureId: "contract-analysis",
provider: "ANTHROPIC",
model: "claude-3-sonnet",
inputTokens: 2000,
outputTokens: 1000,
});
// Flush all events at once
await tracker.flush();
console.log(`Tracked ${tracker.pending} events`);
10. Purchase Credits
// Top up user's credit balance
const result = await client.purchaseCredits({
user_id: "user-123",
amount: 10000,
purchase_price: 99.99,
expires_at: "2025-12-31T23:59:59Z", // Optional
});
console.log(`Transaction ID: ${result.transaction_id}`);
console.log(`New balance: ${result.new_balance} credits`);
11. Get Usage Analytics
const analytics = await client.getUsageAnalytics({
user_id: "user-123", // Optional: filter by user
start_date: "2025-01-01T00:00:00Z",
end_date: "2025-01-31T23:59:59Z",
});
console.log(`Total credits used: ${analytics.total_credits}`);
console.log(`Total cost: $${analytics.total_cost_usd}`);
// By provider
Object.entries(analytics.by_provider).forEach(([provider, stats]) => {
console.log(`${provider}: ${stats.credits} credits (${stats.percentage}%)`);
});
// By model
Object.entries(analytics.by_model).forEach(([model, stats]) => {
console.log(`${model}: $${stats.cost_usd}`);
});
12. Calculate Cost Before Execution
// Preview cost before making the API call
const cost = await client.calculateCost({
provider: 'OPENAI',
model: 'gpt-4',
input_tokens: 1000,
output_tokens: 500
});
console.log(`This will cost ${cost.credits} credits`);
console.log(`Provider cost: $${cost.provider_cost_usd}`);
console.log(`Your cost: $${cost.cost_breakdown.total_cost_usd}`);
console.log(`Margin: ${cost.margin_percent}%`);
// Ask user for confirmation
if (await confirmWithUser(cost.credits)) {
await openai.chat.completions.create({...});
}
13. Get Model Pricing
// Get all available model pricing
const pricing = await client.getPricing();
pricing.data.forEach((item) => {
console.log(`${item.provider} - ${item.model}:`);
if (item.pricing.input_token) {
console.log(
` Input: $${item.pricing.input_token.costPerUnit} per ${item.pricing.input_token.unitSize} tokens`,
);
}
if (item.pricing.output_token) {
console.log(
` Output: $${item.pricing.output_token.costPerUnit} per ${item.pricing.output_token.unitSize} tokens`,
);
}
if (item.pricing.video) {
console.log(
` Video: $${item.pricing.video.costPerUnit} per ${item.pricing.video.unitSize} seconds`,
);
}
});
14. Calculate Video/Image Cost
// Calculate cost for video generation (no tokens needed!)
const videoCost = await client.calculateCost({
provider: "GOOGLE",
model: "veo-3.0",
type: "video",
count: 1, // Number of videos
});
console.log(`Video generation will cost ${videoCost.credits} credits`);
console.log(`Provider cost: $${videoCost.providerCostUsd}`);
// Calculate cost for image generation
const imageCost = await client.calculateCost({
provider: "OPENAI",
model: "dall-e-3",
type: "image",
count: 4, // Number of images
});
console.log(`4 images will cost ${imageCost.credits} credits`);
15. Transaction History
const history = await client.getTransactionHistory("user-123", {
limit: 50,
offset: 0,
});
history.data.forEach((tx) => {
console.log(`${tx.transaction_type}: ${tx.amount} credits`);
console.log(`Balance: ${tx.balance_before} → ${tx.balance_after}`);
console.log(`Date: ${tx.created_at}`);
});
console.log(`Total transactions: ${history.pagination.total}`);
16. Complete Pre-Action Credit Check Flow
// Full example: Check cost, verify balance, and execute action
async function generateVideoWithCreditCheck(userId: string) {
const client = new OpenMonetize({
apiKey: process.env.OPENMONETIZE_API_KEY!,
});
// Step 1: Calculate the cost for this action
const cost = await client.calculateCost({
provider: "GOOGLE",
model: "veo-3.0",
type: "video",
count: 1,
});
console.log(`This action will cost ${cost.credits} credits`);
// Step 2: Check if user has enough credits
const balance = await client.getCreditBalance(userId);
if (balance.available < cost.credits) {
return {
success: false,
error: "Insufficient credits",
required: cost.credits,
available: balance.available,
};
}
// Step 3: Execute the action (your AI provider call)
const video = await generateVideo({ model: "veo-3.0", prompt: "..." });
// Step 4: Track the usage (auto-batched)
client.trackVideoGeneration({
userId,
customerId: "your-customer-id",
featureId: "video-generation",
provider: "GOOGLE",
model: "veo-3.0",
durationSeconds: 10,
});
return { success: true, video };
}
Configuration
Environment Variables
# Required
OPENMONETIZE_API_KEY=your_api_key_here
# Optional - for debugging
OPENMONETIZE_BASE_URL=http://localhost:3000 # Default: https://api.openmonetize.io
Client Options
const client = new OpenMonetize({
apiKey: "your_api_key",
baseUrl: "https://api.openmonetize.io", // Optional
timeout: 30000, // Optional (ms)
debug: false, // Optional
});
Error Handling
import { OpenMonetizeError } from '@openmonetize/sdk';
try {
await client.trackTokenUsage({...});
} catch (error) {
if (error instanceof OpenMonetizeError) {
console.error('OpenMonetize error:', error.message);
console.error('Status code:', error.statusCode);
console.error('Response:', error.response);
} else {
console.error('Unexpected error:', error);
}
}
TypeScript Support
The SDK is written in TypeScript and provides full type definitions:
import type {
TokenUsageEvent,
CreditBalance,
EntitlementCheckResponse,
UsageAnalyticsResponse
} from '@openmonetize/sdk';
// All API responses are fully typed
const balance: CreditBalance = await client.getCreditBalance(...);
const analytics: UsageAnalyticsResponse = await client.getUsageAnalytics(...);
Best Practices
1. Always Check Entitlements First
// ✅ Good: Check before executing
const entitlement = await client.checkEntitlement(...);
if (entitlement.allowed) {
await executeAIOperation();
}
// ❌ Bad: Execute without checking
await executeAIOperation(); // User might not have credits!
2. Use Batch Tracking for High Volume
// ✅ Good: Batch multiple events
const tracker = new BatchTracker(client);
for (const event of events) {
tracker.add(event);
}
await tracker.flush();
// ❌ Bad: Individual requests
for (const event of events) {
await client.trackTokenUsage(event); // Too many API calls!
}
3. Handle Errors Gracefully
// ✅ Good: Async tracking with error handling
client.trackTokenUsage({...}).catch(error => {
console.error('Failed to track usage:', error);
// Maybe retry or log to monitoring service
});
// ❌ Bad: Blocking user experience
await client.trackTokenUsage({...}); // Don't block on tracking!
API Reference
Client Methods
| Method | Description |
| ----------------------------------------------- | ------------------------------------------------- |
| trackTokenUsage(params) | Track LLM token consumption (sync, auto-batched) |
| trackImageGeneration(params) | Track image generation usage (sync, auto-batched) |
| trackCustomEvent(params) | Track custom usage events (sync, auto-batched) |
| flush() | Flush pending events to API (returns Promise) |
| ingestEvents(request) | Directly ingest batch of events (returns Promise) |
| getCreditBalance(userId) | Get user's credit balance |
| purchaseCredits(request) | Purchase credits for a user |
| getTransactionHistory(userId, options?) | Get transaction history |
| checkEntitlement(request) | Check if user can perform an action |
| calculateCost(request) | Preview operation cost before execution |
| getUsageAnalytics(request) | Get usage analytics |
| getCostBreakdown(startDate, endDate) | Get cost breakdown by provider/model |
| normalizeProviderResponse(provider, response) | Extract token counts from provider responses |
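normalizeProviderResponse and getCostBreakdown are not shown elsewhere in this README. The sketch below is a rough illustration of how they might be combined; the field names on the normalized result are assumptions, so check the exported type definitions for the exact shapes.
import OpenAI from "openai";
import { OpenMonetize } from "@openmonetize/sdk";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const client = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
// Extract token counts from a raw provider response, then track them manually
const completion = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "Summarize this filing" }],
});
const usage = client.normalizeProviderResponse("OPENAI", completion);
client.trackTokenUsage({
user_id: "law-firm-a",
customer_id: "legalai-corp",
feature_id: "legal-research",
provider: "OPENAI",
model: "gpt-4",
input_tokens: usage.input_tokens, // illustrative field names
output_tokens: usage.output_tokens,
});
// Cost breakdown by provider/model for a billing period
const breakdown = await client.getCostBreakdown(
"2025-01-01T00:00:00Z",
"2025-01-31T23:59:59Z",
);
console.log(breakdown);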
Helper Functions
| Function | Description |
| -------------------------------------------- | ---------------------------------------------------------- |
| getProxyConfig(params) | Generate proxy config (baseURL + headers) for AI clients |
| createBilledOpenAIConfig(params) | Create OpenAI client config for automatic proxy billing |
| withOpenAITracking(client, fn, context) | Wrap OpenAI calls with automatic tracking |
| withAnthropicTracking(client, fn, context) | Wrap Anthropic calls with automatic tracking |
| withGoogleTracking(client, fn, context) | Wrap Google Gemini calls with automatic tracking |
| trackUsage(client, fn, tracking) | Generic tracking wrapper for any provider |
| BatchTracker | Manual batch event processing class |
| formatCredits | Credit formatting utilities (number, withLabel, usd) |
| validateConfig(config) | Validate environment configuration |
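For a provider without a dedicated wrapper, the generic trackUsage helper from the table above can be used. This sketch follows the documented trackUsage(client, fn, tracking) signature and assumes a tracking context shaped like the withOpenAITracking examples; callMyCustomProvider is a hypothetical stand-in for your own provider call.
import { OpenMonetize, trackUsage } from "@openmonetize/sdk";
const client = new OpenMonetize({ apiKey: process.env.OPENMONETIZE_API_KEY! });
// Hypothetical stand-in for your own provider call (not part of the SDK)
async function callMyCustomProvider(prompt: string): Promise<string> {
return `echo: ${prompt}`;
}
// Wrap any provider call; the context fields mirror the wrapper examples above
const result = await trackUsage(
client,
() => callMyCustomProvider("Summarize this deposition"),
{
customerId: "legalai-corp",
userId: "law-firm-a",
featureId: "legal-research",
},
);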
Support
- Documentation: https://docs.openmonetize.io
- Discord: https://discord.gg/openmonetize
- GitHub: https://github.com/openmonetize/sdk
- Email: [email protected]
License
Apache 2.0 - See LICENSE for details.
Made with ❤️ by the OpenMonetize team
