# @commet/ai-sdk
v0.1.1
Commet billing middleware for Vercel AI SDK
## Installation
```shell
npm install @commet/ai-sdk @commet/node
```

## Quick Start
Wrap any AI SDK model with `commetAI` to automatically track token usage through Commet.
```ts
import { Commet } from '@commet/node';
import { commetAI } from '@commet/ai-sdk';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const commet = new Commet({ apiKey: process.env.COMMET_API_KEY });

const result = await generateText({
  model: commetAI(openai('gpt-4o'), {
    commet,
    feature: 'ai_chat',
    customerId: 'cus_123', // or an external ID like 'user_123'
  }),
  prompt: 'Hello!',
});
```

Every `generateText` and `streamText` call automatically reports input tokens, output tokens, and cache tokens to Commet.
## Streaming
Works the same way with `streamText`; usage is reported when the stream finishes.
```ts
import { streamText } from 'ai';

const result = streamText({
  model: commetAI(openai('gpt-4o'), {
    commet,
    feature: 'ai_chat',
    customerId: 'cus_123',
  }),
  prompt: 'Explain billing models',
});
```

## Options
| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `commet` | `Commet` | Yes | Commet SDK instance |
| `feature` | `GeneratedFeatureCode` | Yes | Feature code to track usage against |
| `customerId` | `string` | Yes | Commet customer ID (`cus_*`) or an external ID |
| `idempotencyKey` | `string` | No | Prevents duplicate tracking for retries |
| `onTrackingError` | `(error: Error) => void` | No | Custom error handler; defaults to `console.warn` |
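A stable `idempotencyKey` lets Commet deduplicate usage events when the same call is retried. One way to build such a key is to hash the request's stable identifiers; this is an illustrative sketch (the `makeIdempotencyKey` helper and the `requestId` input are not part of the SDK):

```typescript
import { createHash } from 'node:crypto';

// Illustrative helper: derive a deterministic key from the request's
// stable identifiers, so retries of the same call produce the same key
// and duplicate usage events can be dropped.
function makeIdempotencyKey(
  customerId: string,
  feature: string,
  requestId: string,
): string {
  return createHash('sha256')
    .update(`${customerId}:${feature}:${requestId}`)
    .digest('hex')
    .slice(0, 32);
}

// The same inputs always yield the same key; a different request ID
// yields a different one.
const key = makeIdempotencyKey('cus_123', 'ai_chat', 'req_42');
```

The resulting string can be passed as the `idempotencyKey` option alongside `commet`, `feature`, and `customerId`.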
## How It Works
`commetAI` wraps the model with AI SDK middleware that intercepts generate and stream completions. After each call, it reports to Commet:
- Input tokens (including cache read/write breakdown)
- Output tokens
- Model ID (automatically detected)
Tracking is fire-and-forget — it never blocks or delays your AI responses.
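The fire-and-forget pattern can be shown in isolation. This is a generic sketch of the idea, not the middleware's actual implementation; `trackUsage` and `reportInBackground` are hypothetical names:

```typescript
type Usage = { inputTokens: number; outputTokens: number };

// Stand-in for the network call that reports usage; in the real
// middleware this would be a request to Commet's API.
async function trackUsage(usage: Usage): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50));
}

// The report is kicked off but never awaited on the response path, so a
// slow or failing tracker cannot delay the caller. Errors go to the
// handler (mirroring the `onTrackingError` option) instead of throwing.
function reportInBackground(
  usage: Usage,
  onError: (e: Error) => void = console.warn,
): void {
  void trackUsage(usage).catch(onError);
}

const result = { text: 'Hello!', usage: { inputTokens: 3, outputTokens: 5 } };
reportInBackground(result.usage); // returns immediately
console.log(result.text); // the response is not delayed
```

Because the returned promise is detached rather than awaited, the AI response is handed back to the caller before (and regardless of whether) the usage report completes.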
## Requirements
- `ai` >= 6.0.0
- `@commet/node` >= 1.6.0
## Documentation
Visit [commet.co/docs](https://commet.co/docs) for the full guide.
## License
MIT
