@bernierllc/ai-provider-core
Abstract AI provider interface for unified multi-provider AI integration across OpenAI, Anthropic, and custom AI providers.
Installation
```bash
npm install @bernierllc/ai-provider-core
```
Features
- Provider Abstraction - Unified interface for all AI providers
- Multi-Provider Support - Seamlessly switch between providers
- Failover Strategies - Automatic fallback when providers fail
- Cost Management - Token estimation and cost calculation
- Type Safety - Full TypeScript support with strict typing
- Provider Registry - Manage multiple providers centrally
- Provider Comparison - Compare capabilities, pricing, and performance
Quick Start
Implementing a Custom Provider
```typescript
import {
AIProvider,
AIProviderConfig,
CompletionRequest,
CompletionResponse
} from '@bernierllc/ai-provider-core';
class MyCustomProvider extends AIProvider {
async complete(request: CompletionRequest): Promise<CompletionResponse> {
// Implement your provider logic
return {
success: true,
content: 'AI response',
finishReason: 'stop'
};
}
// Implement other required methods...
}
// Use your provider
const provider = new MyCustomProvider({
providerName: 'my-provider',
apiKey: process.env.API_KEY!,
defaultModel: 'model-name'
});
const response = await provider.complete({
messages: [{ role: 'user', content: 'Hello!' }]
});
```
Using Provider Registry
```typescript
import { AIProviderRegistry } from '@bernierllc/ai-provider-core';
const registry = new AIProviderRegistry();
// Register multiple providers
registry.register('openai', openaiProvider);
registry.register('anthropic', anthropicProvider);
// Set default
registry.setDefault('openai');
// Get healthy providers
const healthy = await registry.getHealthyProviders();
// Use with failover
const provider = registry.getDefault() || registry.get(healthy[0]);
```
Core Concepts
AIProvider Abstract Class
All provider implementations must extend AIProvider and implement:
- `complete(request)` - Generate text completion
- `streamComplete(request)` - Stream text completion
- `generateEmbeddings(request)` - Generate embeddings
- `moderate(content)` - Check content moderation
- `getAvailableModels()` - List available models
- `checkHealth()` - Health check endpoint
Provider Configuration
```typescript
interface AIProviderConfig {
providerName: string; // Provider identifier
apiKey: string; // API key
organizationId?: string; // Optional organization ID
baseURL?: string; // Custom API base URL
version?: string; // Provider version
defaultModel?: string; // Default model to use
timeout?: number; // Request timeout (ms)
maxRetries?: number; // Max retry attempts
retryDelay?: number; // Retry delay (ms)
enableLogging?: boolean; // Enable logging
enableMetrics?: boolean; // Enable metrics collection
}
```
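For reference, a fully specified configuration might look like the sketch below. The field names come from the interface above; the concrete values (model name, base URL, timeouts) are placeholders to adapt to your provider.

```typescript
import { AIProviderConfig } from '@bernierllc/ai-provider-core';

// Placeholder values throughout; only providerName and apiKey are required.
const config: AIProviderConfig = {
  providerName: 'my-provider',
  apiKey: process.env.API_KEY!,
  defaultModel: 'model-name',
  baseURL: 'https://api.example.com/v1', // only needed for proxied or self-hosted endpoints
  timeout: 30_000,    // 30s request timeout
  maxRetries: 3,
  retryDelay: 1_000,  // 1s between retries
  enableLogging: true,
  enableMetrics: false
};
```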
API Reference
AIProvider
Methods
complete(request: CompletionRequest): Promise&lt;CompletionResponse&gt;
Generate a text completion.
```typescript
const response = await provider.complete({
messages: [
{ role: 'system', content: 'You are helpful' },
{ role: 'user', content: 'Explain AI' }
],
maxTokens: 100,
temperature: 0.7
});
```
streamComplete(request: CompletionRequest): AsyncGenerator
Stream a text completion.
```typescript
for await (const chunk of provider.streamComplete(request)) {
process.stdout.write(chunk.delta);
}
```
generateEmbeddings(request: EmbeddingRequest): Promise
Generate text embeddings.
```typescript
const response = await provider.generateEmbeddings({
input: 'Text to embed'
});
```
moderate(content: string): Promise
Check content moderation.
```typescript
const result = await provider.moderate('Content to check');
if (result.flagged) {
console.log('Content flagged:', result.categories);
}
```
getAvailableModels(): Promise<ModelInfo[]>
Get available models.
```typescript
const models = await provider.getAvailableModels();
models.forEach(model => {
console.log(`${model.name}: ${model.contextWindow} tokens`);
});
```
checkHealth(): Promise&lt;HealthStatus&gt;
Check provider health.
```typescript
const health = await provider.checkHealth();
console.log(`Status: ${health.status}, Latency: ${health.latency}ms`);
```
Utility Methods
```typescript
// Get provider info
provider.getProviderName(); // string
provider.getProviderVersion(); // string
await provider.isAvailable(); // boolean
// Token and cost estimation
provider.estimateTokens(text); // number
provider.estimateCost(request); // CostEstimate
```
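The estimation helpers are useful as a pre-flight check before sending a request. Below is a minimal sketch, assuming estimateCost accepts the same CompletionRequest shape used by complete and that ModelInfo.contextWindow is expressed in tokens; fitsContextWindow is a hypothetical helper, not part of the package.

```typescript
import { AIProvider } from '@bernierllc/ai-provider-core';

async function fitsContextWindow(provider: AIProvider, prompt: string): Promise<boolean> {
  const tokens = provider.estimateTokens(prompt);
  const models = await provider.getAvailableModels();
  const model = models[0]; // naive: only consider the first advertised model
  if (!model) return false;

  // Log the provider's own estimate; the CostEstimate shape is provider-specific.
  console.log('Estimated cost:', provider.estimateCost({
    messages: [{ role: 'user', content: prompt }]
  }));

  return tokens <= model.contextWindow;
}
```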
AIProviderRegistry
Methods
register(name: string, provider: AIProvider): void
Register a provider.
```typescript
registry.register('openai', openaiProvider);
```
unregister(name: string): boolean
Unregister a provider.
```typescript
const removed = registry.unregister('openai');
```
get(name: string): AIProvider | undefined
Get a specific provider.
```typescript
const provider = registry.get('openai');
```
getDefault(): AIProvider | undefined
Get the default provider.
```typescript
const provider = registry.getDefault();
```
setDefault(name: string): void
Set the default provider.
```typescript
registry.setDefault('anthropic');
```
getProviderNames(): string[]
Get all provider names.
```typescript
const names = registry.getProviderNames();
```
getAllProviders(): AIProvider[]
Get all providers.
```typescript
const providers = registry.getAllProviders();
```
checkAvailability(name: string): Promise&lt;boolean&gt;
Check if a provider is available.
```typescript
const available = await registry.checkAvailability('openai');
```
getHealthyProviders(): Promise<string[]>
Get names of all healthy providers.
```typescript
const healthy = await registry.getHealthyProviders();
```
getAllHealthStatuses(): Promise<Map<string, boolean>>
Get health status for all providers.
```typescript
const statuses = await registry.getAllHealthStatuses();
for (const [name, isHealthy] of statuses) {
console.log(`${name}: ${isHealthy ? 'Healthy' : 'Unhealthy'}`);
}
```
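One practical use of getAllHealthStatuses is a lightweight monitor that polls the registry and flags providers that drop out. A sketch, assuming a Node environment and a 60-second interval; startHealthMonitor is a hypothetical helper.

```typescript
import { AIProviderRegistry } from '@bernierllc/ai-provider-core';

function startHealthMonitor(registry: AIProviderRegistry, intervalMs = 60_000): NodeJS.Timeout {
  return setInterval(async () => {
    const statuses = await registry.getAllHealthStatuses();
    for (const [name, isHealthy] of statuses) {
      if (!isHealthy) {
        console.warn(`Provider ${name} is reporting unhealthy`);
      }
    }
  }, intervalMs);
}
```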
Utility Functions
validateProviderConfig(config: AIProviderConfig): ValidationResult
Validate provider configuration.
```typescript
import { validateProviderConfig } from '@bernierllc/ai-provider-core';
const result = validateProviderConfig(config);
if (!result.isValid) {
console.error('Invalid config:', result.errors);
}
```
compareProviders(provider1: ModelInfo, provider2: ModelInfo): ProviderComparison
Compare two providers.
```typescript
import { compareProviders } from '@bernierllc/ai-provider-core';
const comparison = compareProviders(gpt4Model, claudeModel);
console.log('Better context window:', comparison.betterContextWindow);
console.log('Common capabilities:', comparison.commonCapabilities);
```
findBestProviderForCapability(providers: ModelInfo[], capability: string): ModelInfo | undefined
Find the best provider for a specific capability.
```typescript
import { findBestProviderForCapability } from '@bernierllc/ai-provider-core';
const best = findBestProviderForCapability(models, 'vision');
console.log('Best for vision:', best?.name);
```
getCommonCapabilities(providers: ModelInfo[]): string[]
Get capabilities common to all providers.
```typescript
import { getCommonCapabilities } from '@bernierllc/ai-provider-core';
const common = getCommonCapabilities(models);
console.log('Common capabilities:', common);
```
Type Definitions
CompletionRequest
```typescript
interface CompletionRequest {
messages: Message[];
model?: string;
maxTokens?: number;
temperature?: number; // 0-2
topP?: number; // 0-1
frequencyPenalty?: number; // -2 to 2
presencePenalty?: number; // -2 to 2
stop?: string[];
stream?: boolean;
user?: string;
metadata?: Record<string, any>;
}
```
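Only messages is required; a request that exercises the common tuning options might look like the sketch below. The model name is a placeholder, and when model is omitted the provider's configured defaultModel is the natural fallback.

```typescript
import { CompletionRequest } from '@bernierllc/ai-provider-core';

const request: CompletionRequest = {
  messages: [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'Summarize the plot of Hamlet in two sentences.' }
  ],
  model: 'model-name', // placeholder
  maxTokens: 150,
  temperature: 0.7,
  topP: 1,
  stop: ['\n\n'],
  user: 'user-123',
  metadata: { feature: 'summarization' }
};
```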
Message
```typescript
interface Message {
role: 'system' | 'user' | 'assistant' | 'function';
content: string;
name?: string;
functionCall?: FunctionCall;
}
```
CompletionResponse
```typescript
interface CompletionResponse {
success: boolean;
content?: string;
finishReason?: 'stop' | 'length' | 'content_filter' | 'function_call';
usage?: TokenUsage;
model?: string;
metadata?: Record<string, any>;
error?: string;
}
```
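The success flag distinguishes a completed call from a provider-reported failure, so callers typically branch on it. A small sketch, reusing the provider and request from the earlier examples; usage is logged as-is since the TokenUsage shape is not shown here.

```typescript
const response = await provider.complete(request);

if (response.success) {
  console.log(response.content);
  if (response.usage) {
    console.log('Token usage:', response.usage);
  }
} else {
  console.error(`Completion failed (${response.finishReason ?? 'unknown'}):`, response.error);
}
```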
ModelInfo
```typescript
interface ModelInfo {
id: string;
name: string;
contextWindow: number;
maxOutputTokens: number;
pricing?: ModelPricing;
capabilities: string[];
description?: string;
}
```
HealthStatus
```typescript
interface HealthStatus {
status: 'healthy' | 'degraded' | 'unavailable';
latency?: number;
lastChecked: Date;
details?: Record<string, any>;
}
```
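A provider reporting degraded may still be usable. One possible policy is shown below as a sketch; this is an example policy, not behavior the package prescribes, and pickUsable is a hypothetical helper.

```typescript
import { AIProvider } from '@bernierllc/ai-provider-core';

async function pickUsable(providers: AIProvider[]): Promise<AIProvider | undefined> {
  let degradedFallback: AIProvider | undefined;
  for (const provider of providers) {
    const health = await provider.checkHealth();
    if (health.status === 'healthy') return provider;
    if (health.status === 'degraded' && !degradedFallback) degradedFallback = provider;
  }
  return degradedFallback; // undefined if every provider is unavailable
}
```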
Advanced Usage
Implementing Failover
```typescript
async function requestWithFailover(
registry: AIProviderRegistry,
request: CompletionRequest
): Promise<CompletionResponse> {
const providers = await registry.getHealthyProviders();
for (const name of providers) {
const provider = registry.get(name);
if (!provider) continue;
try {
const response = await provider.complete(request);
if (response.success) {
return response;
}
} catch (error) {
console.error(`Provider ${name} failed:`, error);
}
}
throw new Error('All providers failed');
}
```
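Calling it looks the same as calling a single provider; the registry decides which healthy provider actually serves the request (reusing the registry from the earlier registry example).

```typescript
const response = await requestWithFailover(registry, {
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(response.content);
```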
Load Balancing
```typescript
class LoadBalancedProvider {
private registry: AIProviderRegistry;
private currentIndex = 0;
private healthyProviders: string[] = [];
constructor(registry: AIProviderRegistry) {
this.registry = registry;
}
async refreshHealthy() {
this.healthyProviders = await this.registry.getHealthyProviders();
}
async getNextProvider(): Promise<AIProvider | undefined> {
if (this.healthyProviders.length === 0) {
await this.refreshHealthy();
}
if (this.healthyProviders.length === 0) {
return undefined;
}
const name = this.healthyProviders[this.currentIndex % this.healthyProviders.length];
this.currentIndex++;
return this.registry.get(name);
}
}
```
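A round-robin usage sketch, assuming the class is constructed with the registry it balances across.

```typescript
const balancer = new LoadBalancedProvider(registry);
await balancer.refreshHealthy();

const provider = await balancer.getNextProvider();
const response = await provider?.complete({
  messages: [{ role: 'user', content: 'Hello!' }]
});
```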
Cost Optimization
```typescript
async function getCheapestProvider(
registry: AIProviderRegistry
): Promise<AIProvider | undefined> {
const providers = await registry.getHealthyProviders();
let cheapest: { provider: AIProvider; avgCost: number } | undefined;
for (const name of providers) {
const provider = registry.get(name);
if (!provider) continue;
const models = await provider.getAvailableModels();
if (models[0]?.pricing) {
const avgCost = (
models[0].pricing.inputPricePerToken +
models[0].pricing.outputPricePerToken
) / 2;
if (!cheapest || avgCost < cheapest.avgCost) {
cheapest = { provider, avgCost };
}
}
}
return cheapest?.provider;
}
```
Examples
See the examples directory for complete working examples:
- custom-provider.ts - Implementing a custom AI provider
- provider-registry.ts - Multi-provider management and failover
- provider-comparison.ts - Comparing providers by capabilities and cost
Integration Status
- Logger: Optional - Use @bernierllc/logger for enhanced logging
- Docs-Suite: Ready - Full TypeScript type documentation
- NeverHub: Optional - Can integrate for service discovery
See Also
- @bernierllc/ai-provider-openai - OpenAI provider implementation
- @bernierllc/ai-provider-anthropic - Anthropic provider implementation
- @bernierllc/ai-content-generator - AI content generation service
- @bernierllc/ai-content-reviewer - AI content review service
License
Copyright (c) 2025 Bernier LLC
This file is licensed to the client under a limited-use license. The client may use and modify this code only within the scope of the project it was delivered for. Redistribution or use in other products or commercial offerings is not permitted without written consent from Bernier LLC.
Contributing
This package is part of the BernierLLC tools monorepo. Follow the MECE architecture principles and ensure 90%+ test coverage for all contributions.
