
@bernierllc/ai-provider-openai · v1.0.4 · 168 downloads

OpenAI API adapter implementing the unified AI provider interface

Readme

@bernierllc/ai-provider-openai

OpenAI API adapter implementing the unified AI provider interface for seamless integration across BernierLLC projects.

Installation

npm install @bernierllc/ai-provider-openai
# or
pnpm add @bernierllc/ai-provider-openai

Features

  • Complete OpenAI API Support: GPT-4, GPT-3.5, embeddings, moderation, vision
  • Streaming Completions: Real-time text generation with async generators
  • Function Calling: Support for OpenAI function calling capabilities
  • Vision Analysis: GPT-4 Vision for image understanding
  • Cost Estimation: Accurate token and cost estimation before requests
  • Type Safety: Full TypeScript support with strict typing
  • Error Handling: Comprehensive error handling with retry logic
  • Health Monitoring: API health checks and availability detection

Usage

Basic Completion

import { OpenAIProvider } from '@bernierllc/ai-provider-openai';

const provider = new OpenAIProvider({
  providerName: 'openai',
  apiKey: process.env.OPENAI_API_KEY!,
  defaultModel: 'gpt-4-turbo',
  timeout: 30000,
  maxRetries: 3
});

// Generate completion
const response = await provider.complete({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain TypeScript generics in simple terms.' }
  ],
  maxTokens: 500,
  temperature: 0.7
});

if (response.success) {
  console.log(response.content);
  console.log(`Tokens used: ${response.usage?.totalTokens}`);
}

Streaming Completion

console.log('Generating response...\n');

for await (const chunk of provider.streamComplete({
  messages: [
    { role: 'user', content: 'Write a short poem about coding' }
  ]
})) {
  process.stdout.write(chunk.delta);

  if (chunk.finishReason) {
    console.log(`\n\nFinished: ${chunk.finishReason}`);
    if (chunk.usage) {
      console.log(`Tokens: ${chunk.usage.totalTokens}`);
    }
  }
}

Function Calling

const response = await provider.completionWithFunctions({
  messages: [
    { role: 'user', content: 'What is the weather in San Francisco?' }
  ],
  functions: [
    {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g. San Francisco, CA'
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit']
          }
        },
        required: ['location']
      }
    }
  ]
});

if (response.metadata?.functionCall) {
  console.log('Function call:', response.metadata.functionCall);
  // { name: 'get_weather', arguments: '{"location":"San Francisco, CA"}' }
}
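The model returns the function arguments as a JSON string, so parse them before dispatching to your own implementation. A minimal sketch, using the example call above (the `get_weather` dispatch is illustrative, not part of the package API):

```typescript
// The shape returned in response.metadata.functionCall from the example above.
const functionCall = {
  name: 'get_weather',
  arguments: '{"location":"San Francisco, CA"}'
};

// Arguments arrive as a JSON string; parse before use.
const args = JSON.parse(functionCall.arguments) as {
  location: string;
  unit?: 'celsius' | 'fahrenheit';
};

if (functionCall.name === 'get_weather') {
  // Dispatch to your own weather lookup here.
  console.log(`Looking up weather for ${args.location}`);
}
```

Note that the model may produce malformed JSON in rare cases, so wrapping the `JSON.parse` in a try/catch is a reasonable precaution.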

Vision Analysis

const analysis = await provider.analyzeImage(
  'https://example.com/image.jpg',
  'What objects do you see in this image?',
  'gpt-4-vision-preview'
);

if (analysis.success) {
  console.log(analysis.content);
}

Embeddings

const embeddings = await provider.generateEmbeddings({
  input: [
    'TypeScript is a typed superset of JavaScript',
    'Python is a high-level programming language'
  ],
  model: 'text-embedding-3-small'
});

if (embeddings.success) {
  console.log(`Generated ${embeddings.embeddings?.length} embeddings`);
  console.log(`Dimensions: ${embeddings.embeddings?.[0].length}`);
}
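Embedding vectors are typically compared with cosine similarity. The helper below is a plain TypeScript sketch, not part of the package API, showing how you might compare the two example sentences' vectors:

```typescript
// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. Returns 1 for identical directions,
// 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// With the response from generateEmbeddings above:
// const score = cosineSimilarity(embeddings.embeddings![0], embeddings.embeddings![1]);
```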

Content Moderation

const moderation = await provider.moderate(
  'Some content to check for policy violations'
);

if (moderation.success) {
  console.log(`Flagged: ${moderation.flagged}`);
  if (moderation.flagged) {
    console.log('Violated categories:', moderation.categories);
  }
}

Cost Estimation

const request = {
  messages: [
    { role: 'user', content: 'Write a detailed article about TypeScript' }
  ],
  maxTokens: 2000
};

const cost = provider.estimateCost(request);
console.log(`Estimated cost: $${cost.estimatedCostUSD.toFixed(4)}`);
console.log(`Input tokens: ${cost.inputTokens}`);
console.log(`Output tokens: ${cost.outputTokens}`);

API

Constructor

new OpenAIProvider(config: OpenAIProviderConfig)

Configuration options:

  • providerName: Must be 'openai'
  • apiKey: OpenAI API key (required)
  • defaultModel: Default model to use (optional, defaults to 'gpt-4-turbo')
  • organizationId: OpenAI organization ID (optional)
  • baseURL: Custom API base URL (optional)
  • timeout: Request timeout in milliseconds (optional, default: 60000)
  • maxRetries: Maximum retry attempts (optional, default: 3)

Methods

Core Methods (Implements AIProvider interface)

  • complete(request: CompletionRequest): Promise<CompletionResponse> - Generate text completion
  • streamComplete(request: CompletionRequest): AsyncGenerator<StreamChunk> - Stream completion chunks
  • generateEmbeddings(request: EmbeddingRequest): Promise<EmbeddingResponse> - Generate embeddings
  • moderate(content: string): Promise<ModerationResponse> - Check content moderation
  • getAvailableModels(): Promise<ModelInfo[]> - List available models
  • checkHealth(): Promise<HealthStatus> - Check API health
  • estimateCost(request: CompletionRequest): CostEstimate - Estimate request cost

OpenAI-Specific Methods

  • completionWithFunctions(request & { functions }): Promise<CompletionResponse> - Chat completion with function calling
  • analyzeImage(imageUrl: string, prompt: string, model?: string): Promise<CompletionResponse> - Analyze image with GPT-4 Vision

Available Models

Chat Models

  • gpt-4-turbo - 128K context, latest GPT-4 with improved performance
  • gpt-4 - 8K context, powerful reasoning and understanding
  • gpt-4-32k - 32K context, extended context window
  • gpt-3.5-turbo - 16K context, fast and cost-effective
  • gpt-4-vision-preview - GPT-4 with vision capabilities

Embedding Models

  • text-embedding-3-small - 1536 dimensions, cost-effective embeddings
  • text-embedding-3-large - 3072 dimensions, higher quality embeddings
  • text-embedding-ada-002 - 1536 dimensions, legacy embedding model

Pricing

Cost estimates are computed automatically from the rates for the model in use:

  • GPT-4 Turbo: $0.01/1K input tokens, $0.03/1K output tokens
  • GPT-4: $0.03/1K input tokens, $0.06/1K output tokens
  • GPT-3.5 Turbo: $0.0005/1K input tokens, $0.0015/1K output tokens
  • Embeddings (3-small): $0.00002/1K tokens
  • Embeddings (3-large): $0.00013/1K tokens
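The arithmetic behind these rates is straightforward. A minimal sketch (the rate table mirrors the list above; `estimateUSD` is illustrative, not the package's `estimateCost`):

```typescript
// Per-1K-token rates in USD, from the pricing table above.
const ratesPer1K: Record<string, { input: number; output: number }> = {
  'gpt-4-turbo': { input: 0.01, output: 0.03 },
  'gpt-4': { input: 0.03, output: 0.06 },
  'gpt-3.5-turbo': { input: 0.0005, output: 0.0015 },
};

function estimateUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = ratesPer1K[model];
  return (inputTokens / 1000) * rate.input + (outputTokens / 1000) * rate.output;
}

// Example: 500 input + 2,000 output tokens on gpt-4-turbo:
// 0.5 * $0.01 + 2 * $0.03 = $0.065
```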

Error Handling

The package provides comprehensive error handling:

try {
  const response = await provider.complete({ messages: [...] });
  if (!response.success) {
    console.error('Error:', response.error);
  }
} catch (error) {
  // Handles network errors, timeouts, etc.
  console.error('Request failed:', error);
}

Error codes:

  • INVALID_REQUEST - Invalid request parameters
  • AUTHENTICATION_ERROR - Invalid API key
  • PERMISSION_DENIED - Insufficient permissions
  • NOT_FOUND - Model or resource not found
  • RATE_LIMIT_ERROR - Rate limit exceeded (retryable)
  • SERVER_ERROR - OpenAI server error (retryable)
  • TIMEOUT_ERROR - Request timeout (retryable)

Integration Status

  • Logger: Required - Uses @bernierllc/logger for operation logging
  • Docs-Suite: Ready - Full API documentation available
  • NeverHub: Optional - Service discovery and event publishing supported

Development

# Install dependencies
pnpm install

# Build package
pnpm run build

# Run tests
pnpm test

# Run tests with coverage
pnpm run test:coverage

# Lint code
pnpm run lint

License

Copyright (c) 2025 Bernier LLC. All rights reserved.

This package is part of the BernierLLC tools monorepo and follows the unified AI provider interface pattern for seamless provider switching and integration.
