Homogenaize
A TypeScript-native library that provides a unified interface for multiple LLM providers (OpenAI, Anthropic, Gemini), with full type safety and runtime validation using Zod and JSON Schema.
Features
- 🔥 Unified API - Single interface for OpenAI, Anthropic, and Gemini
- 🛡️ Type Safety - Full TypeScript support with provider-specific features
- 🎨 Typed Model Names - Autocomplete and compile-time validation for model names
- ✅ Runtime Validation - Zod schemas and JSON Schema for structured outputs
- 📐 JSON Schema Support - Use typed JSONSchemaType<T> or generic JSON Schema alongside Zod
- 🔄 Streaming Support - Async iterators for real-time responses
- 🛠️ Tool Calling - Define and execute tools with automatic validation
- 🎯 Provider Features - Access provider-specific capabilities while maintaining type safety
- 🔁 Retry Logic - Built-in exponential backoff with configurable retry strategies
- 🚫 Request Cancellation - Cancel in-flight requests and retry loops with AbortSignal
- 📋 Model Discovery - List available models for each provider
- 🧠 Thinking Tokens - Support for Anthropic's thinking tokens feature
- 📊 Structured Logging - Configurable Winston logging with automatic sensitive data redaction
Installation
bun add homogenaize
# or
npm install homogenaize
# or
yarn add homogenaize
Quick Start
import { createLLM, createOpenAILLM, createAnthropicLLM, createGeminiLLM } from 'homogenaize';
// Option 1: Generic client (recommended for flexibility)
const client = createLLM({
provider: 'openai', // or 'anthropic' or 'gemini'
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini', // ✨ Typed! Autocompletes valid models
});
// Option 2: Provider-specific clients (for better type hints)
const openai = createOpenAILLM({
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini', // ✨ Only OpenAI models allowed
});
const anthropic = createAnthropicLLM({
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-3-sonnet-20240229', // ✨ Only Anthropic models allowed
});
const gemini = createGeminiLLM({
apiKey: process.env.GEMINI_API_KEY!,
model: 'gemini-2.5-flash', // ✨ Only Gemini models allowed
});
// Use the same interface for all providers
const response = await client.chat({
messages: [
{ role: 'system', content: 'You are a helpful assistant' },
{ role: 'user', content: 'Hello!' },
],
temperature: 0.7,
});
console.log(response.content);
Generic API (Avoiding Provider Type Pollution)
If you want to avoid provider types spreading throughout your codebase (at the cost of compile-time model validation), use the Generic API:
import {
createGenericLLM,
createGenericOpenAI,
createGenericAnthropic,
createGenericGemini,
} from 'homogenaize';
// Generic API - no provider type parameters needed
const client = createGenericLLM({
provider: 'openai', // Runtime provider selection
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4', // Any string accepted (no compile-time validation)
});
// Provider-specific generic factories
const openai = createGenericOpenAI({
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4', // Any string model name
});
// Switch providers at runtime without type changes
function createClient(provider: string) {
return createGenericLLM({
provider: provider as any,
apiKey: getApiKey(provider),
model: getModel(provider),
});
}
// Same interface, no type pollution
const response = await client.chat({
messages: [{ role: 'user', content: 'Hello!' }],
});
// Still supports all features (schemas, tools, streaming)
const structuredResponse = await client.chat({
messages: [{ role: 'user', content: 'Generate data' }],
schema: MyZodSchema, // Still works with Zod/JSON Schema
});
When to Use Generic vs Type-Safe API
Use the Type-Safe API when:
- You want compile-time validation of model names
- You need IDE autocomplete for provider-specific features
- You're working with a single provider
- Type safety is more important than flexibility
Use the Generic API when:
- You need to switch providers dynamically at runtime
- You want to avoid provider types in your function signatures
- You're building provider-agnostic abstractions
- You're willing to trade compile-time safety for flexibility
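The trade-off in practice (a minimal sketch; the GenericClient alias below is a local convenience derived from createGenericLLM's return type, not a library export):
import { createGenericLLM, LLMClient } from 'homogenaize';
// Type-safe API: the provider parameter appears in every signature that touches the client
async function summarizeTyped(client: LLMClient<'openai'>, text: string) {
  const response = await client.chat({
    messages: [{ role: 'user', content: `Summarize: ${text}` }],
  });
  return response.content;
}
// Generic API: one provider-agnostic signature, provider chosen at runtime
type GenericClient = ReturnType<typeof createGenericLLM>;
async function summarize(client: GenericClient, text: string) {
  const response = await client.chat({
    messages: [{ role: 'user', content: `Summarize: ${text}` }],
  });
  return response.content;
}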
Structured Outputs
Define schemas using Zod or JSON Schema and get validated, typed responses from any provider:
Using Zod Schemas
import { z } from 'zod';
import { createLLM } from 'homogenaize';
const PersonSchema = z.object({
name: z.string(),
age: z.number(),
occupation: z.string(),
hobbies: z.array(z.string()),
});
const client = createLLM({
provider: 'openai', // or 'anthropic' or 'gemini'
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
});
// Get validated, typed responses
const response = await client.chat({
messages: [{ role: 'user', content: 'Generate a random person profile' }],
schema: PersonSchema,
});
// response.content is fully typed as { name: string, age: number, occupation: string, hobbies: string[] }
console.log(response.content.name); // TypeScript knows this is a string
console.log(response.content.hobbies); // TypeScript knows this is a string[]
Using JSON Schema
You can use JSON Schema with full TypeScript type safety using AJV's JSONSchemaType:
import type { JSONSchemaType } from 'ajv';
import { createLLM } from 'homogenaize';
interface PersonData {
name: string;
age: number;
occupation: string;
hobbies: string[];
}
// Typed JSON Schema - provides compile-time type checking
const PersonSchema: JSONSchemaType<PersonData> = {
type: 'object',
properties: {
name: { type: 'string' },
age: { type: 'number' },
occupation: { type: 'string' },
hobbies: {
type: 'array',
items: { type: 'string' },
},
},
required: ['name', 'age', 'occupation', 'hobbies'],
additionalProperties: false,
};
const client = createLLM({
provider: 'anthropic',
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-3-sonnet-20240229',
});
// Get validated, typed responses with JSON Schema
const response = await client.chat({
messages: [{ role: 'user', content: 'Generate a random person profile' }],
schema: PersonSchema,
});
// response.content is fully typed as PersonData
console.log(response.content.name); // TypeScript knows this is a string
console.log(response.content.age); // TypeScript knows this is a number
Using Generic JSON Schema
For dynamic schemas or when type safety isn't required:
import { createLLM } from 'homogenaize';
// Generic JSON Schema without compile-time type checking
const DynamicSchema = {
type: 'object',
properties: {
result: { type: 'string' },
confidence: { type: 'number', minimum: 0, maximum: 1 },
tags: {
type: 'array',
items: { type: 'string' },
},
},
required: ['result', 'confidence'],
};
const client = createLLM({
provider: 'gemini',
apiKey: process.env.GEMINI_API_KEY!,
model: 'gemini-2.5-flash',
});
const response = await client.chat({
messages: [{ role: 'user', content: 'Analyze this text and provide results' }],
schema: DynamicSchema,
});
// response.content is typed as unknown when using generic schemas
// You'll need to cast or validate the type yourself
const data = response.content as {
result: string;
confidence: number;
tags?: string[];
};
console.log(data.result);
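If you would rather validate than cast, the same schema can be compiled with AJV at runtime (a sketch continuing the example above; it assumes ajv is installed, as the JSONSchemaType example earlier already relies on it):
import Ajv from 'ajv';
interface AnalysisResult {
  result: string;
  confidence: number;
  tags?: string[];
}
const ajv = new Ajv();
const validateResult = ajv.compile<AnalysisResult>(DynamicSchema);
const content = response.content;
if (validateResult(content)) {
  // content is narrowed to AnalysisResult inside this branch
  console.log(content.result, content.confidence);
} else {
  console.error('Response did not match the schema:', validateResult.errors);
}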
Streaming Responses
const stream = await client.stream({
messages: [{ role: 'user', content: 'Write a short story' }],
maxTokens: 1000,
});
// Stream chunks as they arrive
for await (const chunk of stream) {
process.stdout.write(chunk);
}
// Get the complete response with usage stats
const complete = await stream.complete();
console.log(`Total tokens used: ${complete.usage.totalTokens}`);
Tool Calling
// Define a tool with schema validation
const weatherTool = client.defineTool({
name: 'get_weather',
description: 'Get the current weather for a location',
schema: z.object({
location: z.string().describe('City and country'),
units: z.enum(['celsius', 'fahrenheit']).optional(),
}),
execute: async (params) => {
// Your implementation here
return { temperature: 22, condition: 'sunny', location: params.location };
},
});
// Let the model decide when to use tools
const response = await client.chat({
messages: [{ role: 'user', content: "What's the weather in Paris?" }],
tools: [weatherTool],
toolChoice: 'auto', // or 'required' to force tool use
});
// Execute any tool calls
if (response.toolCalls) {
const results = await client.executeTools(response.toolCalls);
console.log('Tool results:', results);
// Example result:
// [
// {
// toolCallId: 'call-123',
// toolName: 'get_weather',
// result: { temperature: 22, condition: 'sunny', location: 'Paris' }
// }
// ]
}
Tool Calls with Streaming
Tool calls work seamlessly with streaming. When the model decides to call a tool during a streaming request, the tool calls are available after calling complete():
const weatherTool = client.defineTool({
name: 'get_weather',
description: 'Get the current weather for a location',
schema: z.object({
location: z.string().describe('City and country'),
}),
execute: async (params) => {
return { temperature: 22, condition: 'sunny', location: params.location };
},
});
// Start a streaming request with tools
const stream = await client.stream({
messages: [{ role: 'user', content: "What's the weather in Paris and London?" }],
tools: [weatherTool],
toolChoice: 'auto',
});
// Stream any text content (may be empty if model only calls tools)
for await (const chunk of stream) {
process.stdout.write(chunk);
}
// Get the complete response including tool calls
const complete = await stream.complete();
// Check for tool calls
if (complete.toolCalls) {
console.log(`Model called ${complete.toolCalls.length} tool(s)`);
// Execute the tools
const results = await client.executeTools(complete.toolCalls);
// Each result contains:
// - toolCallId: unique identifier for this call
// - toolName: which tool was called
// - result: the return value from execute()
// - error?: any error message if execution failed
for (const result of results) {
console.log(`${result.toolName}: ${JSON.stringify(result.result)}`);
}
}
// Token usage is also available
console.log(`Total tokens: ${complete.usage.totalTokens}`);
Note: During streaming, text content is yielded as chunks, but tool calls are only available after calling complete(). This is because tool call arguments are streamed incrementally and must be fully assembled before they can be parsed and executed.
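To make that concrete, here is an illustrative sketch of what "assembling" means; the ToolCallDelta shape is hypothetical and this is not the library's internal code:
// Illustrative only: how streamed tool-call arguments must be assembled before parsing.
interface ToolCallDelta {
  id: string;
  name?: string;
  argumentsFragment?: string; // e.g. '{"loca', 'tion": "Par', 'is"}'
}
function assembleToolCalls(deltas: ToolCallDelta[]) {
  const buffers = new Map<string, { name: string; args: string }>();
  for (const delta of deltas) {
    const entry = buffers.get(delta.id) ?? { name: '', args: '' };
    if (delta.name) entry.name = delta.name;
    if (delta.argumentsFragment) entry.args += delta.argumentsFragment;
    buffers.set(delta.id, entry);
  }
  // Only once the stream is finished is each arguments string valid JSON
  return [...buffers.entries()].map(([id, { name, args }]) => ({
    id,
    name,
    arguments: JSON.parse(args),
  }));
}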
List Available Models
Discover available models for each provider:
// List models for a specific provider
const models = await client.listModels();
// Example response
[
{ id: 'gpt-4', name: 'gpt-4', created: 1687882411 },
{ id: 'gpt-3.5-turbo', name: 'gpt-3.5-turbo', created: 1677610602 },
// ... more models
];
// Use the scripts to list all models across providers
// Run: bun run list-models
// Output: JSON with all models from all configured providers
// Or list only chat models
// Run: bun run list-chat-models
// Output: Filtered list of chat-capable models
Retry Configuration
Configure automatic retries with exponential backoff:
import { createLLM } from 'homogenaize';
const client = createLLM({
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
retry: {
maxRetries: 3, // Maximum number of retry attempts (default: 3)
initialDelay: 1000, // Initial delay in ms (default: 1000)
maxDelay: 60000, // Maximum delay in ms (default: 60000)
backoffMultiplier: 2, // Exponential backoff multiplier (default: 2)
jitter: true, // Add randomness to delays (default: true)
onRetry: (attempt, error, delay) => {
console.log(`Retry attempt ${attempt} after ${delay}ms due to:`, error.message);
},
},
});
// The client will automatically retry on:
// - Rate limit errors (429)
// - Server errors (5xx)
// - Network errors (ECONNRESET, ETIMEDOUT, etc.)
// - Provider-specific transient errors
// You can also customize which errors trigger retries
const customClient = createLLM({
provider: 'anthropic',
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-3-sonnet-20240229',
retry: {
maxRetries: 5,
retryableErrors: (error) => {
// Custom logic to determine if an error should be retried
return error.message.includes('temporary') || error.status === 503;
},
},
});
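For intuition, the delay between attempts under these settings grows roughly as follows (a hand-rolled sketch of exponential backoff with jitter, not the library's exact internals):
// Sketch of exponential backoff with jitter (illustrative only)
function backoffDelay(
  attempt: number, // 0-based retry attempt
  { initialDelay = 1000, maxDelay = 60000, backoffMultiplier = 2, jitter = true } = {},
): number {
  const base = Math.min(initialDelay * backoffMultiplier ** attempt, maxDelay);
  // One common jitter strategy: randomize within 50-100% of the base delay
  return jitter ? base * (0.5 + Math.random() * 0.5) : base;
}
// attempt 0 -> ~1s, attempt 1 -> ~2s, attempt 2 -> ~4s, capped at maxDelay
console.log(backoffDelay(0), backoffDelay(1), backoffDelay(2));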
Request Cancellation
Cancel in-flight requests and retry loops using AbortSignal:
import { createLLM } from 'homogenaize';
const client = createLLM({
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
retry: {
maxRetries: 3,
initialDelay: 1000,
},
});
// Create an AbortController
const controller = new AbortController();
// Pass the signal to your request
const responsePromise = client.chat({
messages: [{ role: 'user', content: 'Write a long essay about AI' }],
signal: controller.signal, // Pass the abort signal
});
// Cancel from anywhere (e.g., user clicks cancel button)
setTimeout(() => {
controller.abort(); // Cancels the request immediately
}, 5000);
try {
const response = await responsePromise;
console.log(response.content);
} catch (error) {
if (error.name === 'AbortError') {
console.log('Request was cancelled by user');
} else {
console.error('Request failed:', error);
}
}
Abort During Retries
The abort signal works seamlessly with retry logic, cancelling even during backoff delays:
const controller = new AbortController();
// This request will retry on errors
const promise = client.chat({
messages: [{ role: 'user', content: 'Hello' }],
signal: controller.signal,
});
// Even if the request is retrying, it will abort immediately
controller.abort();
// The promise will reject with an AbortError
await promise; // Throws AbortError
Abort Streaming Requests
Abort signals work with streaming as well:
const controller = new AbortController();
const stream = await client.stream({
messages: [{ role: 'user', content: 'Write a long story' }],
signal: controller.signal,
});
// Start consuming the stream
(async () => {
try {
for await (const chunk of stream) {
process.stdout.write(chunk);
}
} catch (error) {
if (error.name === 'AbortError') {
console.log('\nStream cancelled');
}
}
})();
// Cancel the stream after 2 seconds
setTimeout(() => controller.abort(), 2000);
Features
- ✅ Cancels fetch requests immediately
- ✅ Breaks out of retry loops instantly
- ✅ Cancels backoff delays between retries
- ✅ Works with both streaming and non-streaming requests
- ✅ Compatible with all providers (OpenAI, Anthropic, Gemini)
- ✅ Fully backward compatible (signal parameter is optional)
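These also compose with the platform's built-in AbortSignal.timeout() (Node 17.3+ and modern browsers) when you want a deadline rather than a manual cancel button:
// Abort automatically if the request takes longer than 10 seconds
const response = await client.chat({
  messages: [{ role: 'user', content: 'Summarize this document' }],
  signal: AbortSignal.timeout(10_000),
});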
Provider-Specific Features
Access provider-specific features while maintaining type safety:
// OpenAI-specific features
const openaiResponse = await openai.chat({
messages: [{ role: 'user', content: 'Hello' }],
features: {
logprobs: true,
topLogprobs: 2,
seed: 12345,
},
});
// Access logprobs if available
if (openaiResponse.logprobs) {
console.log('Token probabilities:', openaiResponse.logprobs);
}
// Anthropic-specific features
const anthropicResponse = await anthropic.chat({
messages: [{ role: 'user', content: 'Hello' }],
features: {
thinking: true,
cacheControl: true,
},
});
// Gemini-specific features
const geminiResponse = await gemini.chat({
messages: [{ role: 'user', content: 'Hello' }],
features: {
safetySettings: [
{
category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
threshold: 'BLOCK_ONLY_HIGH',
},
],
},
});
Thinking Tokens (Anthropic)
Anthropic's thinking tokens feature allows Claude to show its reasoning process before generating a response. This is particularly useful for complex problem-solving tasks.
import { createAnthropicLLM } from 'homogenaize';
const anthropic = createAnthropicLLM({
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-3-opus-20240229',
});
// Enable thinking tokens
const response = await anthropic.chat({
messages: [
{
role: 'user',
content:
'Solve this step by step: If a train travels at 60 mph for 2.5 hours, how far does it go?',
},
],
features: {
thinking: true,
maxThinkingTokens: 1000, // Optional: limit thinking tokens
},
});
// Access the thinking process
if (response.thinking) {
console.log("Claude's thought process:", response.thinking);
}
console.log('Final answer:', response.content);
// Example output:
// Claude's thought process: "I need to calculate distance using the formula distance = speed × time. Speed is 60 mph, time is 2.5 hours..."
// Final answer: "The train travels 150 miles."
Thinking Tokens in Streaming
When streaming, thinking tokens are handled separately and won't be yielded as part of the regular content stream:
const stream = await anthropic.stream({
messages: [{ role: 'user', content: 'Explain quantum entanglement' }],
features: {
thinking: true,
},
});
// Regular content stream (no thinking tokens here)
for await (const chunk of stream) {
process.stdout.write(chunk);
}
// Get thinking tokens from the complete response
const complete = await stream.complete();
if (complete.thinking) {
console.log('\nThought process:', complete.thinking);
}
Note: Thinking tokens are only available with Anthropic's Claude models and require model versions that support this feature.
Logging
Homogenaize includes a logging system built on Winston that provides detailed insight into library operations while remaining silent by default.
Basic Configuration
// Enable logging with default settings (info level)
const client = createLLM({
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
logging: true,
});
// Or disable logging explicitly
const client = createLLM({
provider: 'anthropic',
apiKey: process.env.ANTHROPIC_API_KEY!,
model: 'claude-3-sonnet-20240229',
logging: false, // Default behavior - no logs
});
Advanced Configuration
import { createLLM } from 'homogenaize';
const client = createLLM({
provider: 'gemini',
apiKey: process.env.GEMINI_API_KEY!,
model: 'gemini-2.5-flash',
logging: {
level: 'debug', // error, warn, info, debug, verbose, silent
format: 'json', // json or pretty (default: pretty)
prefix: '[MyApp]', // Optional prefix for all log messages
},
});
// Example log output (pretty format):
// 2024-01-15T10:30:45.123Z [info]: [MyApp] Creating LLM client {"provider":"gemini","model":"gemini-2.5-flash"}
// 2024-01-15T10:30:45.456Z [debug]: [MyApp] Transformed request for Gemini API {"contentCount":1,"hasTools":false}
Environment Variables
Configure logging globally using environment variables:
# Set log level
export HOMOGENAIZE_LOG_LEVEL=debug
# Set output format
export HOMOGENAIZE_LOG_FORMAT=json
# Run your application
node app.js
Log Levels
- error: API failures, network errors, validation failures
- warn: Rate limit warnings, deprecated features, recoverable errors
- info: Request/response summaries, token usage, model selection
- debug: Request transformation, schema validation, retry attempts
- verbose: Full request/response bodies, detailed transformations
- silent: No logging (default)
Custom Transports
For production environments, you can configure custom Winston transports:
import winston from 'winston';
import { createLLM } from 'homogenaize';
const client = createLLM({
provider: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
model: 'gpt-4o-mini',
logging: {
level: 'info',
format: 'json',
transports: [
new winston.transports.File({
filename: 'llm-errors.log',
level: 'error',
}),
new winston.transports.File({
filename: 'llm-combined.log',
}),
new winston.transports.Console({
format: winston.format.simple(),
}),
],
},
});
Security Features
The logging system automatically redacts sensitive information:
- API keys (OpenAI, Anthropic, Gemini formats)
- Tokens and secrets
- Password fields
- Any field with 'key', 'token', 'secret', or 'password' in the name
Example:
// This will be logged as:
// API Key: ***REDACTED***
// Instead of showing the actual key
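Conceptually, the redaction is a recursive scrub over log metadata keyed on field names; a simplified sketch (not the library's exact implementation):
// Simplified sketch of name-based redaction (illustrative only)
const SENSITIVE = /key|token|secret|password/i;
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, SENSITIVE.test(k) ? '***REDACTED***' : redact(v)]),
    );
  }
  return value;
}
console.log(redact({ apiKey: 'sk-abc123', model: 'gpt-4o-mini' }));
// -> { apiKey: '***REDACTED***', model: 'gpt-4o-mini' }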
What Gets Logged
Provider Operations:
- Request initiation with model and provider info
- Response completion with token usage
- API errors with status codes and retry information
- Streaming events and completion
Client Operations:
- Client creation with configuration
- Tool definitions and executions
- Request routing and transformations
Retry Logic:
- Retry attempts with backoff calculations
- Rate limit handling
- Final success or failure
Building Abstractions
The library exports option types for all client methods, making it easy to build abstractions:
import { ChatOptions, StreamOptions, LLMClient, ProviderName } from 'homogenaize';
// Create reusable chat functions with proper typing
async function chatWithRetry<T>(
client: LLMClient<'openai'>,
options: ChatOptions<'openai', T>,
maxRetries = 3,
): Promise<T> {
for (let i = 0; i < maxRetries; i++) {
try {
const response = await client.chat(options);
return response.content;
} catch (error) {
if (i === maxRetries - 1) throw error;
await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
}
}
throw new Error('Max retries reached');
}
// Build middleware functions
function withLogging<P extends ProviderName>(options: ChatOptions<P>): ChatOptions<P> {
console.log('Chat request:', options);
return options;
}
// Type-safe wrappers for specific use cases
class ConversationManager<P extends ProviderName> {
constructor(private client: LLMClient<P>) {}
async ask(options: Omit<ChatOptions<P>, 'messages'> & { message: string }) {
const chatOptions: ChatOptions<P> = {
...options,
messages: [{ role: 'user', content: options.message }],
};
return this.client.chat(chatOptions);
}
}
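Usage of the wrapper above (assuming the provider-specific openai client from the Quick Start section):
// Assumes `openai` was created with createOpenAILLM as in Quick Start
const conversation = new ConversationManager(openai);
// Wraps a single user message in a full chat request
const reply = await conversation.ask({ message: 'Give me three project name ideas' });
console.log(reply.content);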
Available Option Types
- ChatOptions<P, T> - Options for the chat method
- StreamOptions<P, T> - Options for the stream method (same as ChatOptions)
- DefineToolOptions<T> - Options for defining tools
- ExecuteToolsOptions - Array of tool calls to execute
Type Utilities
Homogenaize provides helpful type utilities for working with provider-specific models:
Model Type Guards
Runtime type guards to check if a string is a valid model for a specific provider:
import { isOpenAIModel, isAnthropicModel, isGeminiModel } from 'homogenaize';
const userInput = 'gpt-4o';
if (isOpenAIModel(userInput)) {
// TypeScript knows userInput is OpenaiModel here
const client = createOpenAILLM({
apiKey: process.env.OPENAI_API_KEY!,
model: userInput, // ✅ Type-safe
});
}
// Check Anthropic models
if (isAnthropicModel('claude-sonnet-4-5')) {
// Valid Anthropic model
}
// Check Gemini models
if (isGeminiModel('gemini-2.5-flash')) {
// Valid Gemini model
}
ModelsForProvider Type
Extract the model type for a specific provider:
import type { ModelsForProvider } from 'homogenaize';
// Get model type for a specific provider
type OpenAIModels = ModelsForProvider<'openai'>; // OpenaiModel
type AnthropicModels = ModelsForProvider<'anthropic'>; // AnthropicModel
type GeminiModels = ModelsForProvider<'gemini'>; // GeminiModel
// Use in generic functions
function validateModel<P extends 'openai' | 'anthropic' | 'gemini'>(
provider: P,
model: ModelsForProvider<P>,
): boolean {
switch (provider) {
case 'openai':
return isOpenAIModel(model);
case 'anthropic':
return isAnthropicModel(model);
case 'gemini':
return isGeminiModel(model);
}
}
// Usage
validateModel('openai', 'gpt-4o'); // ✅ Type-safe
validateModel('anthropic', 'claude-sonnet-4-5'); // ✅ Type-safe
// validateModel('openai', 'claude-sonnet-4-5'); // ❌ TypeScript error
Available Model Arrays
Access the full list of models for each provider:
import { OPENAI_MODELS, ANTHROPIC_MODELS, GEMINI_MODELS } from 'homogenaize';
// All available OpenAI models
console.log(OPENAI_MODELS); // ['gpt-4o', 'gpt-4o-mini', 'gpt-5', ...]
// All available Anthropic models
console.log(ANTHROPIC_MODELS); // ['claude-sonnet-4-5', 'claude-opus-4', ...]
// All available Gemini models
console.log(GEMINI_MODELS); // ['gemini-2.5-flash', 'gemini-2.5-pro', ...]
// Build a model selector UI
function ModelSelector({ provider }: { provider: 'openai' | 'anthropic' | 'gemini' }) {
const models = provider === 'openai'
? OPENAI_MODELS
: provider === 'anthropic'
? ANTHROPIC_MODELS
: GEMINI_MODELS;
return (
<select>
{models.map(model => (
<option key={model} value={model}>{model}</option>
))}
</select>
);
}
API Reference
Creating Clients
// Generic client creation (recommended)
createLLM(config: {
provider: 'openai' | 'anthropic' | 'gemini';
apiKey: string;
model: string;
defaultOptions?: {
temperature?: number;
maxTokens?: number;
topP?: number;
frequencyPenalty?: number;
presencePenalty?: number;
};
})
// Provider-specific clients (for better type inference)
createOpenAILLM(config: {
apiKey: string;
model: string;
defaultOptions?: { /* same options */ };
})
createAnthropicLLM(config: { /* same as above */ })
createGeminiLLM(config: { /* same as above */ })
Chat Methods
// Basic chat
client.chat(options: {
messages: Message[];
temperature?: number;
maxTokens?: number;
schema?: ZodSchema | JSONSchemaType<T> | JSONSchema;
tools?: Tool[];
toolChoice?: 'auto' | 'required' | 'none';
features?: ProviderSpecificFeatures;
})
// Streaming chat
client.stream(options: { /* same as chat */ })
Tool Methods
// Define a tool
client.defineTool(config: {
name: string;
description: string;
schema: ZodSchema;
execute: (params: any) => Promise<any>;
})
// Execute tool calls
client.executeTools(toolCalls: ToolCall[]): Promise<ToolResult[]>
// ToolResult interface
interface ToolResult {
toolCallId: string;
toolName: string;
result: unknown;
error?: string;
}
Environment Variables
# Provider API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=AI...
Development
# Install dependencies
bun install
# Run tests
bun test
# Run specific test file
bun test src/providers/openai/openai.test.ts
# Run with API keys for integration tests
OPENAI_API_KEY=... ANTHROPIC_API_KEY=... GEMINI_API_KEY=... bun test
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT
