
unify-llm

v3.1.1

A unified LLM SDK supporting Gemini and OpenAI models

Unify LLM

A unified TypeScript SDK for interacting with multiple Large Language Model providers including OpenAI and Google Gemini. This SDK provides a consistent interface across different providers, making it easy to switch between models or implement fallback strategies.

Features

  • 🚀 Unified Interface: Single API for both OpenAI and Gemini models
  • 🔄 Automatic Provider Detection: Automatically detects the appropriate provider based on model name
  • 📡 Streaming Support: Real-time streaming responses for both providers
  • 🛡️ Error Handling: Robust error handling with retry mechanisms
  • 🔧 Flexible Configuration: Customizable timeouts, retries, and provider settings
  • 📊 Model Information: Easy access to model capabilities and limits
  • 🎯 TypeScript Support: Full TypeScript support with comprehensive type definitions
  • 🆕 Latest Gemini SDK: Updated to use the latest @google/genai SDK with Gemini 2.0 models
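Provider detection means you can pass just a model name and let the SDK route the request. As an illustration of how name-based routing might work, here is a minimal sketch; the `detectProvider` helper is hypothetical and not part of the unify-llm public API:

```typescript
// Illustrative sketch of name-based provider detection.
// `detectProvider` is a hypothetical helper, NOT part of the unify-llm API.
type Provider = 'openai' | 'gemini';

function detectProvider(model: string): Provider {
  // Gemini model ids all start with "gemini-"; everything else
  // (gpt-4, gpt-3.5-turbo, ...) is routed to OpenAI in this sketch.
  return model.startsWith('gemini') ? 'gemini' : 'openai';
}

console.log(detectProvider('gpt-4'));          // openai
console.log(detectProvider('gemini-1.5-pro')); // gemini
```

With routing like this, a call such as `unify.chatCompletion({ messages, model: 'gemini-1.5-pro' })` can pick the right provider without any explicit provider argument.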

Installation

npm install unify-llm

Quick Start

import { UnifyLLM } from 'unify-llm';

// Initialize with your API keys
const unify = new UnifyLLM({
  openai: {
    apiKey: process.env.OPENAI_API_KEY!,
  },
  gemini: {
    apiKey: process.env.GEMINI_API_KEY!,
  },
  defaultProvider: 'openai',
});

// Simple chat completion
const response = await unify.chatCompletion({
  messages: [
    { role: 'user', content: 'Hello! How are you?' }
  ],
});

console.log(response.choices[0].message.content);

Configuration

Basic Configuration

const unify = new UnifyLLM({
  openai: {
    apiKey: 'your-openai-api-key',
  },
  gemini: {
    apiKey: 'your-gemini-api-key',
  },
  defaultProvider: 'openai', // or 'gemini'
  defaultModel: 'gpt-3.5-turbo', // or 'gemini-2.0-flash-001'
});

Advanced Configuration

const unify = new UnifyLLM({
  openai: {
    apiKey: 'your-openai-api-key',
    baseUrl: 'https://api.openai.com/v1', // Optional custom base URL
    timeout: 60000, // 60 seconds
    maxRetries: 3,
  },
  gemini: {
    apiKey: 'your-gemini-api-key',
    timeout: 45000, // 45 seconds
    maxRetries: 2,
  },
  defaultProvider: 'openai',
  defaultModel: 'gpt-4',
});

API Reference

Chat Completion

Basic Usage

const response = await unify.chatCompletion({
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ],
});

With Custom Parameters

const response = await unify.chatCompletion({
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing.' }
  ],
  model: 'gpt-4', // or 'gemini-2.0-flash-001'
  temperature: 0.7,
  maxTokens: 1000,
  topP: 0.9,
  frequencyPenalty: 0.1,
  presencePenalty: 0.1,
});

Streaming

await unify.streamChatCompletion(
  {
    messages: [
      { role: 'user', content: 'Write a story about a robot.' }
    ],
    model: 'gpt-3.5-turbo',
  },
  (chunk) => {
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      process.stdout.write(content);
    }
  }
);

Model Management

List Available Models

// List all models from all providers
const allModels = await unify.listModels();

// List models from a specific provider
const openaiModels = await unify.listModels('openai');
const geminiModels = await unify.listModels('gemini');

Get Model Information

const modelInfo = await unify.getModelInfo('gpt-4');
if (modelInfo) {
  console.log('Model:', modelInfo.name);
  console.log('Provider:', modelInfo.provider);
  console.log('Max Tokens:', modelInfo.maxTokens);
  console.log('Supports Streaming:', modelInfo.supportsStreaming);
}

Provider Management

// Check if a provider is configured
const hasOpenAI = unify.isProviderConfigured('openai');
const hasGemini = unify.isProviderConfigured('gemini');

// Get a specific provider instance
const openaiProvider = unify.getProvider('openai');
const geminiProvider = unify.getProvider('gemini');

Supported Models

OpenAI Models

  • gpt-4
  • gpt-4-32k
  • gpt-4-turbo
  • gpt-4-turbo-preview
  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k

Gemini Models

  • gemini-2.0-flash-001
  • gemini-2.0-flash-exp
  • gemini-1.5-pro
  • gemini-1.5-flash
  • gemini-pro
  • gemini-pro-vision

Error Handling

The SDK includes robust error handling with automatic retries and exponential backoff:

try {
  const response = await unify.chatCompletion({
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  // In TypeScript, a caught value is `unknown`, so narrow it before use
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('rate limit')) {
    console.log('Rate limit exceeded, retrying...');
  } else if (message.includes('authentication')) {
    console.log('Invalid API key');
  } else {
    console.log('Unexpected error:', message);
  }
}
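The retry and backoff behavior is internal to the SDK; for illustration, a retry loop with exponential backoff can be sketched as a standalone helper (this is an assumption about the general technique, not the SDK's actual implementation):

```typescript
// Sketch of retry with exponential backoff; `withBackoff` is a
// hypothetical standalone helper, NOT unify-llm's internal code.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxRetries) break;
      // Delay doubles on each attempt: 500ms, 1000ms, 2000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

// Usage idea: wrap any SDK call, e.g.
// await withBackoff(() => unify.chatCompletion({ messages }), 3, 500);
```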

Advanced Usage Examples

Multi-turn Conversations

const conversation: Message[] = [
  { role: 'system', content: 'You are a helpful coding assistant.' },
  { role: 'user', content: 'What is TypeScript?' },
];

let response = await unify.chatCompletion({ messages: conversation });
conversation.push(response.choices[0].message);
conversation.push({ role: 'user', content: 'How does it compare to JavaScript?' });

response = await unify.chatCompletion({ messages: conversation });

Provider Fallback Strategy

async function getResponseWithFallback(prompt: string) {
  try {
    // Try GPT-4 first
    return await unify.chatCompletion({
      messages: [{ role: 'user', content: prompt }],
      model: 'gpt-4',
    });
  } catch (error) {
    console.log('GPT-4 failed, trying Gemini...');
    // Fallback to Gemini
    return await unify.chatCompletion({
      messages: [{ role: 'user', content: prompt }],
      model: 'gemini-2.0-flash-001',
    });
  }
}

Available Gemini Models

The SDK now supports the latest Gemini models including:

  • gemini-2.0-flash-001: Latest Gemini 2.0 Flash model (1M tokens)
  • gemini-2.0-flash-exp: Experimental Gemini 2.0 Flash model
  • gemini-1.5-pro: Gemini 1.5 Pro model (1M tokens)
  • gemini-1.5-flash: Gemini 1.5 Flash model (1M tokens)
  • gemini-pro: Original Gemini Pro model (32K tokens)
  • gemini-pro-vision: Gemini Pro Vision model (32K tokens)

// Use the latest Gemini 2.0 model
const response = await unify.chatCompletion({
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  model: 'gemini-2.0-flash-001',
  maxTokens: 1000,
});

Batch Processing

const questions = [
  'What is machine learning?',
  'Explain neural networks',
  'What is deep learning?',
];

const results = await Promise.all(
  questions.map(question =>
    unify.chatCompletion({
      messages: [{ role: 'user', content: question }],
      model: 'gpt-3.5-turbo',
    })
  )
);
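Firing every request at once with Promise.all can trip provider rate limits on larger batches. One common mitigation is to cap how many requests are in flight; the `mapWithConcurrency` helper below is illustrative, not part of the unify-llm API:

```typescript
// Illustrative concurrency limiter; `mapWithConcurrency` is a
// hypothetical helper, NOT part of the unify-llm API.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker claims the next unprocessed index until the list is drained.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const index = next++;
      results[index] = await fn(items[index]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}

// Usage idea: at most 2 chat completions in flight at a time, e.g.
// const results = await mapWithConcurrency(questions, 2, (q) =>
//   unify.chatCompletion({ messages: [{ role: 'user', content: q }] })
// );
```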

TypeScript Types

The SDK provides comprehensive TypeScript types:

import type {
  Message,
  ChatCompletionRequest,
  ChatCompletionResponse,
  ModelInfo,
  UnifyConfig,
  ModelProvider,
} from 'unify-llm';

// Use types in your code
const messages: Message[] = [
  { role: 'user', content: 'Hello!' }
];

const config: UnifyConfig = {
  openai: { apiKey: 'your-key' },
  defaultProvider: 'openai',
};

Development

Building from Source

git clone <repository-url>
cd unify-llm
npm install
npm run build

Running Tests

npm test

Running Examples

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"

# Run basic example
npx ts-node examples/basic-usage.ts

# Run advanced example
npx ts-node examples/advanced-usage.ts

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Submit a pull request

License

MIT License - see LICENSE file for details.

Support

For issues and questions, please open an issue on GitHub or contact the maintainers.