@bernierllc/ai-provider-openai
v1.0.4
OpenAI API adapter implementing the unified AI provider interface for seamless integration across BernierLLC projects.
Installation
npm install @bernierllc/ai-provider-openai
# or
pnpm add @bernierllc/ai-provider-openai
Features
- Complete OpenAI API Support: GPT-4, GPT-3.5, embeddings, moderation, vision
- Streaming Completions: Real-time text generation with async generators
- Function Calling: Support for OpenAI function calling capabilities
- Vision Analysis: GPT-4 Vision for image understanding
- Cost Estimation: Accurate token and cost estimation before requests
- Type Safety: Full TypeScript support with strict typing
- Error Handling: Comprehensive error handling with retry logic
- Health Monitoring: API health checks and availability detection
Usage
Basic Completion
import { OpenAIProvider } from '@bernierllc/ai-provider-openai';
const provider = new OpenAIProvider({
providerName: 'openai',
apiKey: process.env.OPENAI_API_KEY!,
defaultModel: 'gpt-4-turbo',
timeout: 30000,
maxRetries: 3
});
// Generate completion
const response = await provider.complete({
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Explain TypeScript generics in simple terms.' }
],
maxTokens: 500,
temperature: 0.7
});
if (response.success) {
console.log(response.content);
console.log(`Tokens used: ${response.usage?.totalTokens}`);
}
Streaming Completion
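The chunk-consumption pattern can be exercised without network access by pointing the same `for await` loop at a stub generator. The `StreamChunk` shape here (`delta`, `finishReason`, `usage`) is an assumption mirroring the fields used in the example below:

```typescript
// Minimal stand-in for the StreamChunk shape used in this section (assumed fields).
interface StreamChunk {
  delta: string;
  finishReason?: string;
  usage?: { totalTokens: number };
}

// Stub generator that mimics provider.streamComplete for offline testing.
async function* stubStream(): AsyncGenerator<StreamChunk> {
  yield { delta: 'Hello, ' };
  yield { delta: 'world.' };
  yield { delta: '', finishReason: 'stop', usage: { totalTokens: 12 } };
}

// Accumulate all deltas into one string, as the real loop would print them.
async function collectDeltas(stream: AsyncGenerator<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.delta;
  }
  return text;
}

collectDeltas(stubStream()).then((text) => console.log(text)); // prints "Hello, world."
```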
console.log('Generating response...\n');
for await (const chunk of provider.streamComplete({
messages: [
{ role: 'user', content: 'Write a short poem about coding' }
]
})) {
process.stdout.write(chunk.delta);
if (chunk.finishReason) {
console.log(`\n\nFinished: ${chunk.finishReason}`);
if (chunk.usage) {
console.log(`Tokens: ${chunk.usage.totalTokens}`);
}
}
}
Function Calling
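A function-calling response carries `metadata.functionCall` with a name and JSON-encoded arguments (shown at the end of this section). Dispatching it is plain JSON parsing plus a handler lookup; the `get_weather` handler below is a hypothetical stand-in, not part of this package:

```typescript
// Shape of the function-call payload, as returned in metadata.functionCall.
interface FunctionCall {
  name: string;
  arguments: string; // JSON-encoded arguments
}

// Hypothetical local handlers keyed by function name.
const handlers: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `Weather lookup for ${args.location}`,
};

// Parse the arguments and route the call to the matching handler.
function dispatch(call: FunctionCall): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`No handler registered for ${call.name}`);
  return handler(JSON.parse(call.arguments));
}

console.log(dispatch({ name: 'get_weather', arguments: '{"location":"San Francisco, CA"}' }));
// prints "Weather lookup for San Francisco, CA"
```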
const response = await provider.completionWithFunctions({
messages: [
{ role: 'user', content: 'What is the weather in San Francisco?' }
],
functions: [
{
name: 'get_weather',
description: 'Get the current weather for a location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA'
},
unit: {
type: 'string',
enum: ['celsius', 'fahrenheit']
}
},
required: ['location']
}
}
]
});
if (response.metadata?.functionCall) {
console.log('Function call:', response.metadata.functionCall);
// { name: 'get_weather', arguments: '{"location":"San Francisco, CA"}' }
}
Vision Analysis
const analysis = await provider.analyzeImage(
'https://example.com/image.jpg',
'What objects do you see in this image?',
'gpt-4-vision-preview'
);
if (analysis.success) {
console.log(analysis.content);
}
Embeddings
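The vectors returned by generateEmbeddings are typically compared with cosine similarity, which needs nothing from the provider itself:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector lengths differ');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 3])); // 0
```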
const embeddings = await provider.generateEmbeddings({
input: [
'TypeScript is a typed superset of JavaScript',
'Python is a high-level programming language'
],
model: 'text-embedding-3-small'
});
if (embeddings.success) {
console.log(`Generated ${embeddings.embeddings?.length} embeddings`);
console.log(`Dimensions: ${embeddings.embeddings?.[0].length}`);
}
Content Moderation
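A common pattern is to gate user content on the moderation result before passing it to a completion. The response shape here (`success`, `flagged`, `categories`) is assumed from the example below:

```typescript
// Assumed shape of the moderation response used in this section.
interface ModerationResult {
  success: boolean;
  flagged: boolean;
  categories?: string[];
}

// Returns the content unchanged if it passes, otherwise throws with the categories.
function assertSafe(content: string, result: ModerationResult): string {
  if (!result.success) throw new Error('Moderation check failed');
  if (result.flagged) {
    throw new Error(`Content flagged: ${(result.categories ?? []).join(', ')}`);
  }
  return content;
}

console.log(assertSafe('hello', { success: true, flagged: false })); // prints "hello"
```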
const moderation = await provider.moderate(
'Some content to check for policy violations'
);
if (moderation.success) {
console.log(`Flagged: ${moderation.flagged}`);
if (moderation.flagged) {
console.log('Violated categories:', moderation.categories);
}
}
Cost Estimation
const request = {
messages: [
{ role: 'user', content: 'Write a detailed article about TypeScript' }
],
maxTokens: 2000
};
const cost = provider.estimateCost(request);
console.log(`Estimated cost: $${cost.estimatedCostUSD.toFixed(4)}`);
console.log(`Input tokens: ${cost.inputTokens}`);
console.log(`Output tokens: ${cost.outputTokens}`);
API
Constructor
new OpenAIProvider(config: OpenAIProviderConfig)
Configuration options:
- providerName: Must be 'openai'
- apiKey: OpenAI API key (required)
- defaultModel: Default model to use (optional, defaults to 'gpt-4-turbo')
- organizationId: OpenAI organization ID (optional)
- baseURL: Custom API base URL (optional)
- timeout: Request timeout in milliseconds (optional, default: 60000)
- maxRetries: Maximum retry attempts (optional, default: 3)
Methods
Core Methods (Implements AIProvider interface)
- complete(request: CompletionRequest): Promise<CompletionResponse> - Generate a text completion
- streamComplete(request: CompletionRequest): AsyncGenerator<StreamChunk> - Stream completion chunks
- generateEmbeddings(request: EmbeddingRequest): Promise<EmbeddingResponse> - Generate embeddings
- moderate(content: string): Promise<ModerationResponse> - Check content moderation
- getAvailableModels(): Promise<ModelInfo[]> - List available models
- checkHealth(): Promise<HealthStatus> - Check API health
- estimateCost(request: CompletionRequest): CostEstimate - Estimate request cost
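Because every provider exposes this same interface, calling code can be tested against a mock. The request and response shapes below are trimmed-down assumptions based on the examples in this README, not the actual types from @bernierllc/ai-provider-core:

```typescript
// Trimmed-down request/response shapes (assumed from this README's examples).
interface CompletionRequest {
  messages: { role: string; content: string }[];
  maxTokens?: number;
}
interface CompletionResponse {
  success: boolean;
  content?: string;
  error?: string;
}

// A mock provider exposing the same complete() signature, useful in unit tests.
class MockProvider {
  async complete(request: CompletionRequest): Promise<CompletionResponse> {
    const last = request.messages[request.messages.length - 1];
    return { success: true, content: `echo: ${last.content}` };
  }
}

new MockProvider()
  .complete({ messages: [{ role: 'user', content: 'hi' }] })
  .then((response) => console.log(response.content)); // prints "echo: hi"
```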
OpenAI-Specific Methods
- completionWithFunctions(request & { functions }): Promise<CompletionResponse> - Chat completion with function calling
- analyzeImage(imageUrl: string, prompt: string, model?: string): Promise<CompletionResponse> - Analyze an image with GPT-4 Vision
Available Models
Chat Models
- gpt-4-turbo - 128K context, latest GPT-4 with improved performance
- gpt-4 - 8K context, powerful reasoning and understanding
- gpt-4-32k - 32K context, extended context window
- gpt-3.5-turbo - 16K context, fast and cost-effective
- gpt-4-vision-preview - GPT-4 with vision capabilities
Embedding Models
- text-embedding-3-small - 1536 dimensions, cost-effective embeddings
- text-embedding-3-large - 3072 dimensions, higher quality embeddings
- text-embedding-ada-002 - 1536 dimensions, legacy embedding model
Pricing
Cost estimation automatically applies the rates for the model in use:
- GPT-4 Turbo: $0.01/1K input tokens, $0.03/1K output tokens
- GPT-4: $0.03/1K input tokens, $0.06/1K output tokens
- GPT-3.5 Turbo: $0.0005/1K input tokens, $0.0015/1K output tokens
- Embeddings (3-small): $0.00002/1K tokens
- Embeddings (3-large): $0.00013/1K tokens
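Applying these per-1K-token rates, an estimate reduces to simple arithmetic. The sketch below restates the chat-model prices from the table above; it is an illustration of the calculation, not this package's internal implementation:

```typescript
// Per-1K-token USD rates, restating the pricing table above.
const RATES: Record<string, { input: number; output: number }> = {
  'gpt-4-turbo': { input: 0.01, output: 0.03 },
  'gpt-4': { input: 0.03, output: 0.06 },
  'gpt-3.5-turbo': { input: 0.0005, output: 0.0015 },
};

// Cost = (input tokens / 1000) * input rate + (output tokens / 1000) * output rate.
function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`Unknown model: ${model}`);
  return (inputTokens / 1000) * rate.input + (outputTokens / 1000) * rate.output;
}

// 1,000 input + 500 output tokens on GPT-4 Turbo:
console.log(estimateCostUSD('gpt-4-turbo', 1000, 500).toFixed(4)); // prints "0.0250"
```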
Error Handling
The package provides comprehensive error handling:
try {
const response = await provider.complete({ messages: [...] });
if (!response.success) {
console.error('Error:', response.error);
}
} catch (error) {
// Handles network errors, timeouts, etc.
console.error('Request failed:', error);
}
Error codes:
- INVALID_REQUEST - Invalid request parameters
- AUTHENTICATION_ERROR - Invalid API key
- PERMISSION_DENIED - Insufficient permissions
- NOT_FOUND - Model or resource not found
- RATE_LIMIT_ERROR - Rate limit exceeded (retryable)
- SERVER_ERROR - OpenAI server error (retryable)
- TIMEOUT_ERROR - Request timeout (retryable)
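Since RATE_LIMIT_ERROR, SERVER_ERROR, and TIMEOUT_ERROR are marked retryable, a caller-side backoff wrapper can complement the provider's built-in maxRetries. This is a sketch; the `code` property on thrown errors is an assumption about how error codes surface:

```typescript
const RETRYABLE = new Set(['RATE_LIMIT_ERROR', 'SERVER_ERROR', 'TIMEOUT_ERROR']);

// Retry fn with exponential backoff when it throws a retryable error code.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (attempt >= maxAttempts || !RETRYABLE.has(error?.code)) throw error;
      // Delay doubles each attempt: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Stub that fails twice with a retryable code, then succeeds.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw Object.assign(new Error('rate limited'), { code: 'RATE_LIMIT_ERROR' });
  return 'ok';
}, 3, 1).then((result) => console.log(result, calls)); // prints "ok 3"
```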
Integration Status
- Logger: Required - Uses @bernierllc/logger for operation logging
- Docs-Suite: Ready - Full API documentation available
- NeverHub: Optional - Service discovery and event publishing supported
Development
# Install dependencies
pnpm install
# Build package
pnpm run build
# Run tests
pnpm test
# Run tests with coverage
pnpm run test:coverage
# Lint code
pnpm run lint
License
Copyright (c) 2025 Bernier LLC. All rights reserved.
This package is part of the BernierLLC tools monorepo and follows the unified AI provider interface pattern for seamless provider switching and integration.
See Also
- @bernierllc/ai-provider-core - Abstract AI provider interface
- @bernierllc/ai-provider-anthropic - Anthropic Claude provider
- @bernierllc/ai-content-generator - AI-powered content generation service
