# @cbuff/ai
Type-safe AI client with a model registry, lazy providers, and optional cost tracking, powered by the Vercel AI SDK.
## Installation
```bash
bun add @cbuff/ai ai
```

Install the provider SDKs you need:

```bash
bun add @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
```

## Usage
```ts
import { createAI } from "@cbuff/ai";
import { createOpenAI } from "@ai-sdk/openai";

const ai = createAI({
  providers: {
    openai: () => createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
    anthropic: async () => {
      const { createAnthropic } = await import("@ai-sdk/anthropic");
      return createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
    },
  },
  models: {
    fast: { provider: "openai", id: "gpt-4o-mini" },
    smart: { provider: "openai", id: "gpt-4o", costs: { input: 2.5, output: 10 } },
    claude: { provider: "anthropic", id: "claude-sonnet-4-20250514", costs: { input: 3, output: 15 } },
  },
});

// Full autocomplete on model names
const { data, metadata } = await ai.generate({
  model: "fast",
  prompt: "Hello, world!",
});

console.log(data);
console.log(metadata.totalCostUsd); // undefined if costs not configured
```
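For structured output, pass a schema via `output` and `data` is typed to the schema's shape. A minimal sketch, reusing the `ai` client from above and assuming `output` accepts a Zod schema (as the underlying Vercel AI SDK does for object generation); the `Recipe` schema is purely illustrative:

```ts
import { z } from "zod";

// Hypothetical schema for illustration; any shape works.
const Recipe = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
});

const { data: recipe } = await ai.generate({
  model: "smart",
  prompt: "Invent a three-ingredient pasta recipe.",
  output: Recipe, // assumption: a Zod schema
});

recipe.ingredients.forEach((i) => console.log(i));
```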
## Features

- Type-safe model selection — Full autocomplete on model aliases, provider references validated at compile time
- Lazy provider loading — Providers can be sync or async, loaded on first use and cached
- Optional cost tracking — Define `costs: { input, output }` per model (USD per 1M tokens), or skip it if you don't need it; see the worked example after this list
- Unified config — Single `createAI()` call with providers and models in one object
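To make the cost math concrete: rates are USD per 1M tokens, so a call to `smart` (rates from the Usage example) that uses 1,200 input tokens and 300 output tokens prices out as below. This arithmetic is implied by the units; it is not taken from the library source.

```ts
// Rates for "smart" from the Usage example: input $2.50/1M, output $10/1M.
const inputCostUsd = (1_200 / 1_000_000) * 2.5; // 0.003
const outputCostUsd = (300 / 1_000_000) * 10; // 0.003
const totalCostUsd = inputCostUsd + outputCostUsd; // 0.006
```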
## Why not use Vercel's AI SDK directly?

Vercel's SDK is a great foundation, but this wrapper closes the gaps that show up once you use it across multiple models and providers:
- Model registry with autocomplete — Define named model aliases once and get compile-time safety everywhere you call `ai.generate`.
- Lazy provider wiring — Configure providers as sync or async factories so you only load SDKs when you actually need them.
- Built-in cost tracking — Attach per-model USD rates and get cost metadata back without extra bookkeeping.
- One config surface — Providers, models, and defaults live together instead of being stitched across call sites.
## API

### `createAI(config)`
Creates a typed AI client.
```ts
const ai = createAI({
  providers: {
    [key: string]: () => Provider | Promise<Provider>,
  },
  models: {
    [alias: string]: {
      provider: string; // must match a key in providers
      id: string; // provider model ID (typed per provider)
      costs?: { input: number; output: number }; // USD per 1M tokens
    },
  },
});
```
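Because each model's `provider` field must match a key in `providers`, a typo should fail to type-check. A sketch of how that surfaces, assuming the registry narrows `provider` to the configured provider keys:

```ts
import { createAI } from "@cbuff/ai";
import { createOpenAI } from "@ai-sdk/openai";

const broken = createAI({
  providers: {
    openai: () => createOpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  },
  models: {
    // @ts-expect-error "anthropic" is not a key of `providers` above
    claude: { provider: "anthropic", id: "claude-sonnet-4-20250514" },
  },
});
```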
### `ai.generate(params)`

Generates text or structured output.
```ts
const { data, metadata } = await ai.generate({
  model: "fast", // required, autocompletes to your model aliases
  prompt: "Hello", // required
  system: "Be helpful", // optional
  temperature: 0.7, // optional
  maxOutputTokens: 1000, // optional
  output: schema, // optional, for structured output
  logKey: "my-request", // optional, logs timing and cost
});
```

Returns:

```ts
{
  data: string | T, // text or structured output
  metadata: {
    responseTimeMs: number,
    inputTokens: number,
    outputTokens: number,
    inputCostUsd?: number, // undefined if costs not configured
    outputCostUsd?: number,
    totalCostUsd?: number,
  }
}
```
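Since the cost fields are only present when `costs` was configured for the model, guard before aggregating. A sketch, reusing the `ai` client from the Usage example:

```ts
const { metadata } = await ai.generate({ model: "fast", prompt: "ping" });

// "fast" was registered without `costs`, so the cost fields are undefined.
console.log(metadata.inputTokens, metadata.outputTokens);
if (metadata.totalCostUsd !== undefined) {
  console.log(`cost: $${metadata.totalCostUsd.toFixed(6)}`);
}
```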
### `ai.models`

Returns the list of registered model aliases in the order they were defined.
```ts
ai.models; // ["fast", "smart", "claude"]

type ModelList = typeof ai.models;
type Model = ModelList[number];
```
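The typed list makes it easy to fan one prompt out across every registered model. A sketch, again reusing the `ai` client from the Usage example:

```ts
// Run the same prompt against every registered model, in registration order.
for (const model of ai.models) {
  const { data, metadata } = await ai.generate({ model, prompt: "Reply with one word." });
  console.log(`${model} (${metadata.responseTimeMs} ms):`, data);
}
```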
## License

MIT
