
unified-ai-router

v3.5.1

Unified AI Router

Unified AI Router is a comprehensive toolkit for AI applications, featuring:

  • An OpenAI-compatible server for seamless API integration
  • A unified interface for multiple LLM providers with automatic fallback

It supports any OpenAI-compatible server, including major providers like OpenAI, Google, Grok, LiteLLM, vLLM, Ollama, and more, ensuring reliability and flexibility.

🚀 Features

  • Multi-Provider Support: Works with OpenAI, Google, Grok, OpenRouter, Z.ai, Groq, Cohere, Cerebras, LLM7, and more
  • Automatic Fallback: If one provider fails for any reason, the router automatically tries the next
  • Circuit Breaker: Built-in fault tolerance with automatic circuit breaking for each provider to prevent cascading failures
  • OpenAI-Compatible Server: Drop-in replacement for the OpenAI API, enabling easy integration with existing tools and clients
  • Simple API: Easy-to-use interface for all supported providers
  • Streaming and Non-Streaming Support: Handles both streaming and non-streaming responses
  • Tool Calling: Full support for tools in LLM interactions
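
The interplay of fallback and circuit breaking can be pictured with a minimal sketch. This illustrates the concept only, not the library's actual internals; the class and function names here are invented:

```javascript
// Minimal sketch of per-provider circuit breaking with ordered fallback.
// Not the library's internals; names are invented for illustration.
class SimpleBreaker {
  constructor(threshold = 3, cooldownMs = 30000) {
    this.threshold = threshold;   // failures before the breaker opens
    this.cooldownMs = cooldownMs; // how long to skip a tripped provider
    this.failures = 0;
    this.openedAt = 0;
  }
  isOpen() {
    if (this.failures < this.threshold) return false;
    if (Date.now() - this.openedAt >= this.cooldownMs) {
      this.failures = 0; // half-open: allow one retry after the cooldown
      return false;
    }
    return true;
  }
  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = Date.now();
  }
  recordSuccess() {
    this.failures = 0;
  }
}

// Try providers in order, skipping any whose breaker is open.
async function withFallback(providers, breakers, call) {
  for (const provider of providers) {
    const breaker = breakers.get(provider.name);
    if (breaker.isOpen()) continue;
    try {
      const result = await call(provider);
      breaker.recordSuccess();
      return result;
    } catch (err) {
      breaker.recordFailure(); // fall through to the next provider
    }
  }
  throw new Error("All providers failed");
}
```

A provider that keeps erroring trips its breaker and is skipped until the cooldown elapses, so one bad upstream cannot stall every request.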

🛠️ Installation

npm i unified-ai-router
# OR
git clone https://github.com/mlibre/Unified-AI-Router
cd Unified-AI-Router
npm i

📖 Usage

📚 Basic Library Usage

This is the core AIRouter library - a JavaScript class that provides a unified interface for multiple LLM providers.

const AIRouter = require("unified-ai-router");
require("dotenv").config();

const providers = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  },
  {
    name: "google",
    apiKey: process.env.GEMINI_API_KEY,
    model: "gemini-2.5-pro",
    apiUrl: "https://generativelanguage.googleapis.com/v1beta/openai/"
  }
];

const llm = new AIRouter(providers);

async function main() {
  const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in simple terms." }
  ];

  // Falls back to the next provider automatically if the first fails
  const response = await llm.chatCompletion(messages, {
    temperature: 0.7
  });

  console.log(response);
}

main().catch(console.error);

You can also provide an array of API keys for a single provider definition.

const providers = [
  {
    name: "openai",
    apiKey: [process.env.OPENAI_API_KEY_1, process.env.OPENAI_API_KEY_2],
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  }
];
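
Tool calling follows the standard OpenAI tools schema. Below is a sketch of a tool definition; the get_weather tool is hypothetical, and passing tools through chatCompletion's options is an assumption (see tests/tools.js for the project's own example):

```javascript
// Hypothetical OpenAI-style tool definition, for illustration only
function weatherTool() {
  return {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" }
        },
        required: ["city"]
      }
    }
  };
}

// Assuming an AIRouter instance `llm` as above, the tool could be passed
// through the options object (an assumption; see tests/tools.js):
//   const response = await llm.chatCompletion(messages, { tools: [weatherTool()] });
```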

🔌 OpenAI-Compatible Server

The OpenAI-compatible server provides a drop-in replacement for the OpenAI API. It routes requests through the unified router with fallback logic, ensuring high availability.
The server uses the provider configurations defined in provider.js and requires the corresponding API keys to be set in a .env file.

  1. Copy the example environment file:

    cp .env.example .env
  2. Edit .env and add your API keys for the desired providers (see 🔑 API Keys for sources).

  3. Configure your providers in provider.js: add new entries or modify existing ones with the appropriate name, apiKey, model, and apiUrl.
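
A provider.js entry might look like the following sketch. The field names mirror the providers array from Basic Library Usage, but the exact export shape of provider.js and the model names are assumptions; check the shipped file for the real structure:

```javascript
// provider.js — illustrative sketch; check the shipped file for the real shape
require("dotenv").config();

module.exports = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  },
  {
    // Any OpenAI-compatible endpoint works the same way
    name: "groq",
    apiKey: process.env.GROQ_API_KEY,
    model: "llama-3.3-70b-versatile",
    apiUrl: "https://api.groq.com/openai/v1"
  }
];
```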

To start the server locally, run:

npm start

The server listens at http://localhost:3000/ and supports the following OpenAI-compatible endpoints:

  • POST /v1/chat/completions - Chat completions (streaming and non-streaming)
  • POST /chat/completions - Chat completions (streaming and non-streaming)
  • GET /v1/models - List available models
  • GET /models - List available models
  • GET /health - Health check
  • GET /v1/providers/status - Check the status of all configured providers
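
Because the surface is OpenAI-compatible, any OpenAI-style client can target the router by switching the base URL. A minimal sketch using Node's built-in fetch (the model name is illustrative, and it assumes the server from npm start is listening on port 3000):

```javascript
// Build an OpenAI-style chat request body
function buildChatRequest(model, userText) {
  return {
    model,
    messages: [{ role: "user", content: userText }],
    stream: false
  };
}

// Send it to the local router (assumes `npm start` is running on port 3000)
async function ask(text) {
  const res = await fetch("http://localhost:3000/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("gpt-4", text))
  });
  return res.json();
}
```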

🧪 Testing

The project includes tests for the core library and the OpenAI-compatible server. To run the tests, use the following commands:

# Test chat completion
node tests/chat.js

# Test OpenAI server non-streaming
node tests/openai-server-non-stream.js

# Test OpenAI server streaming
node tests/openai-server-stream.js

# Test tool usage
node tests/tools.js

🌐 Deploying to Render.com

Ensure provider.js is configured and the matching API keys are set in .env (as above). Push to GitHub, then:

  1. Dashboard:

    • Create Web Service on Render.com, connect repo.
    • Build Command: npm install
    • Start Command: npm start
    • Add env vars (e.g., OPENAI_API_KEY=sk-...).
    • Deploy.
  2. CLI:

    curl -fsSL https://raw.githubusercontent.com/render-oss/cli/refs/heads/main/bin/install.sh | sh
    render login
    render services
    render deploys create srv-d3f7iqmmcj7s73e67feg --commit HEAD --confirm --output text
  3. Verify:

    • Access https://your-service.onrender.com/models.

See Render docs for details.

🔧 Supported Providers

  • OpenAI
  • Google Gemini
  • Grok
  • OpenRouter
  • Z.ai
  • Groq
  • Cohere
  • Cerebras
  • LLM7
  • Any other OpenAI-compatible server

🔑 API Keys

Get your API keys from the respective providers' dashboards or consoles.

📁 Project Structure

  • main.js - Core AIRouter library implementing the unified interface and fallback logic
  • provider.js - Configuration for supported AI providers
  • openai-server.js - OpenAI-compatible API server
  • tests/ - Comprehensive tests for the library, server, and tools

📄 License

MIT