
unified-ai

v1.3.4


Unified AI

A unified interface for interacting with multiple AI providers (OpenAI, Claude, and Google Gemini) with consistent message handling, tool calling support, and cross-provider compatibility.

Features

  • 🔄 Unified API - Single interface works across OpenAI, Claude, and Google Gemini
  • 🛠️ Tool Calling - Consistent tool/function calling support with Zod schema validation
  • 📝 Type-Safe - Full TypeScript support with comprehensive type definitions
  • 🖼️ Multimodal - Support for text, images, audio, and video inputs
  • 🔁 Tool Roundtrips - Automatic handling of multi-turn tool calling conversations
  • ⏱️ Rate Limiting - Built-in rate limit handling for Claude API
  • 🎯 Stop Signals - Graceful interruption of generation processes

Installation

npm install unified-ai

You'll also need to install the SDK for the provider(s) you plan to use:

# For OpenAI
npm install openai

# For Claude/Anthropic
npm install @anthropic-ai/sdk

# For Google Gemini
npm install @google/genai

Quick Start

OpenAI

import { createOpenAIProvider, generateText } from 'unified-ai';

const createModel = createOpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});

const model = createModel('gpt-4.1');

const result = await generateText({
  model,
  messages: [
    { role: 'system', text: 'You are a helpful assistant.' },
    { role: 'user', text: 'Hello! How are you?' }
  ]
});

console.log(result.text);

Claude

import { createClaudeProvider, generateText } from 'unified-ai';

const createModel = createClaudeProvider({
  apiKey: process.env.ANTHROPIC_API_KEY
});

const model = createModel('claude-3-5-sonnet-20241022', 4096);

const result = await generateText({
  model,
  messages: [
    { role: 'system', text: 'You are a helpful assistant.' },
    { role: 'user', text: 'Hello! How are you?' }
  ]
});

console.log(result.text);

Google Gemini

import { createGoogleProvider, generateText } from 'unified-ai';

const createModel = createGoogleProvider(process.env.GOOGLE_API_KEY!);

const model = createModel('gemini-2.5-flash');

const result = await generateText({
  model,
  messages: [
    { role: 'system', text: 'You are a helpful assistant.' },
    { role: 'user', text: 'Hello! How are you?' }
  ]
});

console.log(result.text);

Tool Calling

One of the most powerful features is consistent tool calling across all providers:

import { z } from 'zod';
import { createOpenAIProvider, generateText, BaseTool } from 'unified-ai';

const createModel = createOpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});

const model = createModel('gpt-4.1');

// Define tools with Zod schemas
const tools: Record<string, BaseTool> = {
  get_weather: {
    description: 'Get the current weather for a location',
    parameters: z.object({
      location: z.string().describe('City name or address'),
      units: z.enum(['celsius', 'fahrenheit']).default('celsius')
    }),
    execute: async (args) => {
      // Your implementation here
      return {
        temperature: 22,
        conditions: 'sunny',
        location: args.location
      };
    }
  },
  search_web: {
    description: 'Search the web for information',
    parameters: z.object({
      query: z.string().describe('Search query')
    }),
    execute: async (args) => {
      // Your implementation here
      return {
        results: ['Result 1', 'Result 2']
      };
    }
  }
};

const result = await generateText({
  model,
  messages: [
    { role: 'user', text: 'What\'s the weather in London?' }
  ],
  tools,
  maxToolRoundtrips: 5
});

console.log(result.addedMessages);
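`result.addedMessages` contains every message produced during the tool roundtrips: the assistant's tool calls, your tools' responses, and the final text. As a sketch, the helper below (hypothetical, not a library export) summarizes that activity, assuming the message shapes documented in the Message Types section:

```typescript
// Hypothetical helper: summarize tool activity from addedMessages.
// The message shapes follow the "Message Types" section of this README.
interface FunctionCallMessage {
  role: 'assistant';
  functionCall: { id: string; name: string; args: unknown };
}

interface FunctionResponseMessage {
  role: 'function';
  functionResponse: { id: string; name: string; response: unknown };
}

type AnyMessage =
  | FunctionCallMessage
  | FunctionResponseMessage
  | { role: string; text?: string };

function summarizeToolActivity(messages: AnyMessage[]): string[] {
  const lines: string[] = [];
  for (const m of messages) {
    if ('functionCall' in m) {
      lines.push(`call ${m.functionCall.name}(${JSON.stringify(m.functionCall.args)})`);
    } else if ('functionResponse' in m) {
      lines.push(`result ${m.functionResponse.name} -> ${JSON.stringify(m.functionResponse.response)}`);
    }
  }
  return lines;
}
```

Logging this summary after each call is a cheap way to keep an eye on how many roundtrips your tools are actually triggering.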

Multimodal Support

Images

import { generateText } from 'unified-ai';
import fs from 'fs';

const imageBase64 = fs.readFileSync('image.jpg', 'base64');

const result = await generateText({
  model,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'image_url',
          image_url: {
            url: `data:image/jpeg;base64,${imageBase64}`,
            detail: 'high'
          }
        }
      ]
    },
    { role: 'user', text: 'What do you see in this image?' }
  ]
});

Audio (Google Gemini)

const result = await generateText({
  model,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'audio_url',
          audio_url: {
            mime_type: 'audio/mp3',
            data: audioBase64
          }
        }
      ]
    }
  ]
});

Video (Google Gemini)

import fs from 'fs';

// Read video file and convert to base64
const videoBuffer = fs.readFileSync('path/to/video.mp4');
const videoBase64 = videoBuffer.toString('base64');

const result = await generateText({
  model,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'video_url',
          video_url: {
            mime_type: 'video/mp4',
            data: videoBase64
          }
        }
      ]
    },
    { role: 'user', text: 'What is happening in this video?' }
  ]
});

Supported video formats:

  • video/mp4
  • video/mpeg
  • video/mov
  • video/avi
  • video/x-flv
  • video/mpg
  • video/webm
  • video/wmv
  • video/3gpp

Note: Video analysis requires a Gemini model that supports video input (e.g., gemini-1.5-pro, gemini-2.5-pro, gemini-2.5-flash).
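Since the `mime_type` must match the file you are sending, a small lookup keyed on file extension avoids typos. The helper below is hypothetical (not part of unified-ai) and covers only the formats listed above:

```typescript
// Hypothetical helper (not a unified-ai export): map a video file
// extension to one of the MIME types listed above.
const VIDEO_MIME_TYPES: Record<string, string> = {
  mp4: 'video/mp4',
  mpeg: 'video/mpeg',
  mpg: 'video/mpg',
  mov: 'video/mov',
  avi: 'video/avi',
  flv: 'video/x-flv',
  webm: 'video/webm',
  wmv: 'video/wmv',
  '3gp': 'video/3gpp',
};

function videoMimeType(filename: string): string {
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  const mime = VIDEO_MIME_TYPES[ext];
  if (!mime) throw new Error(`Unsupported video format: .${ext}`);
  return mime;
}
```

Use the result as the `mime_type` field of the `video_url` content part.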

Advanced Features

Stop Signals

Gracefully interrupt long-running generations:

import { Signal, generateText } from 'unified-ai';

const stopSignal = new Signal();

// Start generation
const promise = generateText({
  model,
  messages: [{ role: 'user', text: 'Write a very long story...' }],
  stopSignal
});

// Later, to stop:
stopSignal.set();

await promise;

The generation stops at the next safe point after the signal is set, and the awaited promise resolves with whatever was produced up to that point.

Text Streaming Callback

Get notified as text is generated:

const result = await generateText({
  model,
  messages: [{ role: 'user', text: 'Hello!' }],
  textMessageGenerated: async (message) => {
    console.log('Assistant:', message.text);
  }
});

Tool Force Stop

Tools can force the conversation to stop:

const tools = {
  emergency_stop: {
    description: 'Stop the conversation immediately',
    parameters: z.object({}),
    execute: async (args, options) => {
      options.forceStop = true;
      return { stopped: true };
    }
  }
};

Message Types

The library supports various message types:

// System message
{ role: 'system', text: 'You are a helpful assistant.' }

// User text message
{ role: 'user', text: 'Hello!' }

// Assistant text message
{ role: 'assistant', text: 'Hi there!' }

// Image message
{
  role: 'user',
  content: [{
    type: 'image_url',
    image_url: { url: 'data:image/jpeg;base64,...' }
  }]
}

// Audio message (Google only)
{
  role: 'user',
  content: [{
    type: 'audio_url',
    audio_url: { mime_type: 'audio/mp3', data: '...' }
  }]
}

// Video message (Google only)
{
  role: 'user',
  content: [{
    type: 'video_url',
    video_url: { mime_type: 'video/mp4', data: '...' }
  }]
}

// Tool call (generated by AI)
{
  role: 'assistant',
  functionCall: {
    id: 'call_123',
    name: 'get_weather',
    args: { location: 'London' }
  }
}

// Tool response (your code)
{
  role: 'function',
  functionResponse: {
    id: 'call_123',
    name: 'get_weather',
    response: { temperature: 22 }
  }
}

API Reference

Provider Creation

createOpenAIProvider(options)

const createModel = createOpenAIProvider({
  apiKey?: string,     // Optional, defaults to OPENAI_API_KEY env var
  baseURL?: string     // Optional, for custom endpoints
});

const model = createModel(modelId: string);

createClaudeProvider(options)

const createModel = createClaudeProvider({
  apiKey?: string,     // Optional, defaults to ANTHROPIC_API_KEY env var
  baseURL?: string     // Optional, for custom endpoints
});

const model = createModel(
  modelId: string,
  maxTokens?: number   // Max tokens for response (default: 4096)
);

createGoogleProvider(apiKey)

const createModel = createGoogleProvider(apiKey: string);

const model = createModel(
  modelId: string,
  safetySettings?: GoogleSafetySettings
);

generateText(options)

Main function for text generation:

interface GenerateTextOptions {
  model: BaseModel;                    // The model to use
  messages?: BaseMessage[];            // Conversation history
  maxToolRoundtrips?: number;          // Max tool calling rounds (default: 5)
  tools?: Record<string, BaseTool>;    // Available tools
  toolChoice?: 'auto' | 'none' | 'required';  // Tool calling mode
  thinking?: boolean;                  // Enable thinking mode (if supported)
  stopSignal?: Signal;                 // Signal to stop generation
  textMessageGenerated?: (message: BaseTextMessage) => Promise<void>;
}

interface generateTextReturn {
  addedMessages: BaseMessage[];        // All messages added during generation
  text: string;                        // Final text response
}

BaseTool

Tool definition:

interface BaseTool {
  description: string;                 // Tool description
  parameters: z.AnyZodObject;          // Zod schema for parameters
  execute: (args: any, options: BaseToolOptions) => any;
}

interface BaseToolOptions {
  forceStop?: boolean;                 // Set to true to stop after this tool
}

Signal

Control signal for stopping generation:

class Signal {
  set(): void;                         // Set the signal
  clear(): void;                       // Clear the signal
  isSet(): boolean;                    // Check if signal is set
  waitUntilReset(): Promise<void>;     // Wait for signal to clear
}
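The library exports its own `Signal`; purely as an illustration of how this interface behaves, here is a minimal, hypothetical implementation (not the library's actual code):

```typescript
// Minimal sketch of a class matching the documented Signal interface.
// Illustrative only; use the Signal exported by unified-ai in practice.
class SimpleSignal {
  private flag = false;
  private waiters: Array<() => void> = [];

  set(): void {
    this.flag = true;
  }

  clear(): void {
    this.flag = false;
    // Wake everyone waiting for the signal to clear
    for (const resolve of this.waiters.splice(0)) resolve();
  }

  isSet(): boolean {
    return this.flag;
  }

  waitUntilReset(): Promise<void> {
    if (!this.flag) return Promise.resolve();
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}
```

The key property is that `waitUntilReset()` resolves immediately when the signal is clear, and otherwise parks the caller until `clear()` is called.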

Rate Limiting

The Claude provider includes built-in rate limit handling:

  • Automatically tracks token usage
  • Proactively waits when approaching limits
  • Retries with exponential backoff on 429 errors
  • Extracts rate limit info from API response headers
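The exact retry logic is internal to the Claude provider, but the "exponential backoff on 429 errors" strategy it describes looks roughly like the sketch below (`retryWithBackoff` and `RateLimitError` are illustrative names, not library exports):

```typescript
// Sketch of retry-with-exponential-backoff for rate-limited calls.
// Illustrative only; the provider handles this for you internally.
class RateLimitError extends Error {}

async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Give up on non-rate-limit errors or when attempts are exhausted
      if (!(err instanceof RateLimitError) || attempt + 1 >= maxAttempts) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

In a real client, the delay would also take the rate-limit headers mentioned above into account rather than relying on a fixed base.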

Best Practices

  1. Use environment variables for API keys
  2. Set appropriate maxTokens to avoid excessive costs
  3. Define clear tool descriptions for better AI understanding
  4. Use Zod schemas to validate tool parameters
  5. Implement proper error handling around API calls
  6. Use stopSignals for long-running operations
  7. Monitor tool roundtrips to prevent infinite loops

Error Handling

try {
  const result = await generateText({
    model,
    messages: [{ role: 'user', text: 'Hello!' }]
  });
} catch (error) {
  console.error('Generation failed:', error);
}

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Support

For issues, questions, or feature requests, please open an issue on GitHub.