@spendlil/sdk
v0.1.1
SpendLil SDK — drop-in AI proxy wrapper for cost tracking, compliance, and rate limiting
SpendLil TypeScript SDK
Drop-in wrapper for OpenAI, Anthropic, Google, and Mistral that routes all AI calls through your SpendLil proxy — giving you cost tracking, compliance controls, PII detection, rate limiting, and audit logs with zero code changes.
Install
npm install @spendlil/sdk
Quick start
import { SpendLil } from '@spendlil/sdk';
const sl = new SpendLil({
  agentId: 'your-agent-id', // from SpendLil dashboard
  baseUrl: 'https://spendlil.yourco.com/api',
});
// ── OpenAI (drop-in replacement) ─────────────────────────
const openai = sl.openai({ apiKey: process.env.OPENAI_API_KEY! });
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);
console.log(response._spendlil); // { tracked: true, model: 'gpt-4o', tier: 'frontier', ... }
// ── Anthropic (drop-in replacement) ──────────────────────
const anthropic = sl.anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const msg = await anthropic.messages.create({
  model: 'claude-sonnet-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Summarise this...' }],
});
console.log(msg.content[0].text);
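Note that `msg.content` is an array of content blocks, and not every block is text (tool-use responses, for example, have a different shape), so indexing straight into `.text` can fail. A small helper sketch for pulling out just the text, using minimal hand-written types rather than the real SDK types:

```typescript
// Minimal shape of Anthropic-style content blocks (the real SDK types are richer).
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown };

// Concatenate only the text blocks, skipping tool_use and anything else.
function extractText(blocks: ContentBlock[]): string {
  return blocks
    .filter((b): b is Extract<ContentBlock, { type: 'text' }> => b.type === 'text')
    .map((b) => b.text)
    .join('');
}

// Example with a mixed response:
const summary = extractText([
  { type: 'text', text: 'Here is the summary. ' },
  { type: 'tool_use', id: 'tu_1', name: 'search', input: {} },
  { type: 'text', text: 'Done.' },
]);
// summary === 'Here is the summary. Done.'
```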
// ── Auto-routing (SpendLil picks the best model) ─────────
const autoRes = await openai.chat.completions.autoCreate({
  messages: [{ role: 'user', content: 'Write a haiku about TypeScript' }],
});
console.log(autoRes._spendlil?.model); // e.g. 'gpt-4o-mini' — SpendLil chose
// ── Streaming ─────────────────────────────────────────────
for await (const chunk of openai.chat.completions.createStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Count to 5' }],
  stream: true,
})) {
  const text = (chunk as any).choices?.[0]?.delta?.content ?? '';
  process.stdout.write(text);
}
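If you want the full response as a single string rather than printing deltas as they arrive, the loop above can be wrapped in a small collector. This sketch assumes the chunk shape shown above (`choices[0].delta.content`); it works with any async iterable, so it is shown here against a mock stream:

```typescript
// Minimal shape of a streamed chunk (assumed; the real SDK types may differ).
type StreamChunk = { choices?: Array<{ delta?: { content?: string } }> };

// Drain an async iterable of chunks into the full response text.
async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let out = '';
  for await (const chunk of stream) {
    out += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return out;
}

// Mock stream standing in for openai.chat.completions.createStream(...):
async function* mockStream(): AsyncGenerator<StreamChunk> {
  yield { choices: [{ delta: { content: 'Hel' } }] };
  yield { choices: [{ delta: { content: 'lo' } }] };
  yield { choices: [] }; // chunks without a delta are ignored
}

collectStream(mockStream()).then((text) => console.log(text)); // prints "Hello"
```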
// ── Session / context management ─────────────────────────
const sessionSl = sl.withSession('user-session-abc123');
const sessionOpenai = sessionSl.openai({ apiKey: process.env.OPENAI_API_KEY! });
// All requests with this client share context in SpendLil (STICKY/POOL mode)
SpendLil metadata
Every response includes _spendlil with proxy metadata:
| Field | Description |
|-------|-------------|
| tracked | Request was logged by SpendLil |
| model | Model actually used (may differ from requested due to routing) |
| provider | Provider used |
| tier | economy / standard / frontier |
| fallback | Whether a fallback model was used after a failure |
| escalated | Whether the request was auto-escalated to a better model |
| translated | Cross-provider format translation was applied |
| sessionKey | Active session key (if context management enabled) |
| contextMode | none / sticky / pool |
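The fields in the table above can be modeled as a type so routing decisions are easy to surface in application logs. The interface below is a sketch derived from the table (optionality of each field is an assumption), and `describeRouting` is a hypothetical helper, not part of the SDK:

```typescript
// Shape of the _spendlil metadata, based on the field table above.
interface SpendLilMeta {
  tracked: boolean;
  model: string;
  provider: string;
  tier: 'economy' | 'standard' | 'frontier';
  fallback?: boolean;
  escalated?: boolean;
  translated?: boolean;
  sessionKey?: string;
  contextMode?: 'none' | 'sticky' | 'pool';
}

// Collect routing decisions worth logging (hypothetical helper).
function describeRouting(meta: SpendLilMeta): string[] {
  const notes: string[] = [];
  if (meta.fallback) notes.push(`fallback model used: ${meta.model}`);
  if (meta.escalated) notes.push(`escalated to ${meta.tier} tier`);
  if (meta.translated) notes.push('cross-provider translation applied');
  return notes;
}

const notes = describeRouting({
  tracked: true,
  model: 'gpt-4o-mini',
  provider: 'openai',
  tier: 'standard',
  escalated: true,
});
// notes → ['escalated to standard tier']
```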
Error handling
import { SpendLilError } from '@spendlil/sdk';
try {
  await openai.chat.completions.create({ model: 'gpt-4o', messages: [] });
} catch (err) {
  if (err instanceof SpendLilError) {
    console.error(`HTTP ${err.status}: ${err.message}`);
    if (err.meta?.quotaExceeded) {
      console.error('Agent quota exceeded');
    }
  }
}
Batch / deferred processing
const response = await openai.chat.completions.create(
  { model: 'gpt-4o', messages: [...] },
  { priority: 'batch' }, // deferred to SpendLil batch queue
);
// Returns 202 with batch job ID
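For transient failures such as rate limits, calls through the proxy can be wrapped in a generic retry helper. This is a sketch of an exponential-backoff wrapper, not a SpendLil API; only the `SpendLilError` / `err.status` check in the commented usage comes from the error-handling section above:

```typescript
// Retry a call with exponential backoff; `shouldRetry` decides which errors
// are transient (e.g. rate limits). Generic sketch, not part of the SDK.
async function withRetry<T>(
  fn: () => Promise<T>,
  shouldRetry: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !shouldRetry(err)) throw err;
      // Exponential backoff: 500ms, 1000ms, 2000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Hypothetical usage against the SDK, reusing the SpendLilError check
// from the error-handling section:
//
// const res = await withRetry(
//   () => openai.chat.completions.create({ model: 'gpt-4o', messages }),
//   (err) => err instanceof SpendLilError && err.status === 429,
// );
```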