@varlabs/ai
A minimal, type-safe toolkit for building multi-model AI applications with standardized, composable interfaces.
Runtime-agnostic. Framework-agnostic. Model-agnostic.
Features
- Provider Abstraction: Define and compose AI providers (OpenAI, Anthropic, custom, etc.) with a unified, type-safe interface.
- Custom Providers: Easily create your own providers or install community/third-party providers from npm.
- Multi-Model Support: Organize and call multiple models (text, image, speech, etc.) under each provider.
- Type Safety: All model calls and provider definitions are fully typed for maximum safety and IDE support.
- Middleware: Add cross-cutting logic (logging, auth, rate limiting, etc.) to all model calls via middleware.
- Streaming Utilities: Handle streaming AI responses (e.g., Server-Sent Events) in a runtime-agnostic way.
- Minimal & Composable: No runtime dependencies. Designed for easy extension and integration.
- JS Runtime & Framework Agnostic: Works in Node.js, edge runtimes, serverless, and browsers. No framework assumptions.
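To make the streaming feature concrete, here is a minimal standalone sketch of the kind of parsing a Server-Sent Events utility performs. This is an illustration only: parseSSEChunk is a hypothetical helper written for this sketch, not an export of this package, and the actual behavior of handleStreamResponse may differ.

```typescript
// Hypothetical illustration: extract the "data:" payloads from an SSE chunk.
// Not part of @varlabs/ai; shows the shape of the problem its streaming
// utilities address.
function parseSSEChunk(chunk: string): string[] {
  const events: string[] = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (trimmed.startsWith('data:')) {
      const payload = trimmed.slice('data:'.length).trim();
      // Many providers signal end-of-stream with a [DONE] sentinel.
      if (payload !== '[DONE]') events.push(payload);
    }
  }
  return events;
}

const chunk = 'data: {"delta":"Hel"}\n\ndata: {"delta":"lo"}\n\ndata: [DONE]\n';
// Yields the two data payloads: {"delta":"Hel"} and {"delta":"lo"}
console.log(parseSSEChunk(chunk));
```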
Core Concepts
Provider Definition
Define a provider with models and context (e.g., API keys):
import { defineProvider } from '@varlabs/ai/provider';

const myProvider = defineProvider({
  name: 'my-provider',
  context: { config: { apiKey: '...' } },
  models: {
    text: {
      generate: async (input, ctx) => { /* ... */ }
    }
  }
});

Creating a Client
Compose multiple providers and add middleware:
import { createAIClient } from '@varlabs/ai';

const client = createAIClient({
  providers: {
    openai: openAIProvider({ config: { apiKey: 'sk-...' } }),
    anthropic: anthropicProvider({ config: { apiKey: 'sk-...' } }),
    // You can also install and use providers from npm:
    // myCustomProvider: require('my-ai-provider')({ config: { ... } }),
  },
  middleware: [
    async (ctx) => { /* e.g., logging, auth */ return true; }
  ]
});
// Usage:
await client.openai.text.generate({ prompt: 'Hello world' });

Middleware
Middleware receives the call context and can block or allow execution:
const logger = async (ctx) => {
  console.log('AI call:', ctx.provider, ctx.model, ctx.call);
  return true;
};

Utilities
Cosine Similarity
- cosineSimilarity: Compare vector similarity.

Streaming
- handleStreamResponse: Parse streaming AI responses (SSE, JSON lines, etc.).
- createDataStream: Create a readable stream for streaming data.
- pipeStreamToResponse: Pipe a stream to Node.js or Web Response objects.

Structure & Tooling
- defineStructure: Type-safe schema definition.
- defineTool: Standardize tool/external function definitions.
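For intuition, cosine similarity is the dot product of two vectors divided by the product of their magnitudes, ranging from -1 (opposite direction) to 1 (same direction). The following is a standalone illustration of what such a utility computes, not the package's own implementation; the exact signature of the exported cosineSimilarity may differ.

```typescript
// Standalone illustration of cosine similarity: dot(a, b) / (|a| * |b|).
// Not the @varlabs/ai implementation.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector length mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

A typical use is ranking embedding vectors by similarity to a query embedding.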
Example: Adding a New Provider
You can define your own provider or install one from npm:
// Custom provider
const myProvider = defineProvider({
  name: 'custom',
  context: { config: { apiKey: '...' } },
  models: {
    text: {
      generate: async (input, ctx) => {
        // Call your API here
        return { result: '...' };
      }
    }
  }
});
// Or install a provider from npm
import { someProvider } from 'some-ai-provider';
const providerInstance = someProvider({ config: { apiKey: '...' } });

Philosophy
- Minimal surface area: Only the primitives you need.
- No runtime lock-in: Use with any JS runtime or framework.
- Type safety first: All interfaces are strongly typed.
- Composable: Mix and match providers, models, and middleware.
API Reference
- defineProvider – Create a provider with models and context.
- createAIClient – Compose providers and middleware into a unified client.
- cosineSimilarity – Vector similarity.
- handleStreamResponse – Streaming response parser.
- createDataStream – Streaming data generator.
- pipeStreamToResponse – Pipe streams to responses.
- defineStructure – Schema definition.
- defineTool – Tool definition.
License
MIT © Hamza Varvani
