Unified AI Router
Unified AI Router is a comprehensive toolkit for AI applications, featuring:
- An OpenAI-compatible server for seamless API integration
- A unified interface for multiple LLM providers with automatic fallback
It works with any OpenAI-compatible server, including major providers such as OpenAI, Google, and Grok, as well as gateways and local runtimes like LiteLLM, vLLM, and Ollama, ensuring reliability and flexibility.
🚀 Features
- Multi-Provider Support: Works with OpenAI, Google, Grok, OpenRouter, Z.ai, Groq, Cohere, Cerebras, LLM7, and more
- Automatic Fallback: If one provider fails for any reason, the router automatically tries the next
- Circuit Breaker: Built-in fault tolerance with automatic circuit breaking for each provider to prevent cascading failures (a conceptual sketch follows this list)
- OpenAI-Compatible Server: Drop-in replacement for the OpenAI API, enabling easy integration with existing tools and clients
- Simple API: Easy-to-use interface for all supported providers
- Streaming and Non-Streaming Support: Handles both streaming and non-streaming responses
- Tool Calling: Full support for tools in LLM interactions
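To make the fallback and circuit-breaker features concrete, here is a minimal, illustrative sketch of the pattern; this is not the library's actual implementation (which lives in main.js), and withFallback, callProvider, and the breakers map are hypothetical names for illustration. Providers whose circuit is open are skipped, and a circuit opens after a threshold of consecutive failures:

// Illustrative only: per-provider circuit breaker plus ordered fallback
class CircuitBreaker {
  constructor(threshold = 3, cooldownMs = 60000) {
    this.threshold = threshold;   // consecutive failures before the circuit opens
    this.cooldownMs = cooldownMs; // how long an open circuit blocks requests
    this.failures = 0;
    this.openedAt = null;
  }
  isOpen() {
    if (this.openedAt === null) return false;
    if (Date.now() - this.openedAt >= this.cooldownMs) {
      // cooldown elapsed: half-open, let one trial request through
      this.openedAt = null;
      this.failures = 0;
      return false;
    }
    return true;
  }
  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = Date.now();
  }
  recordSuccess() {
    this.failures = 0;
    this.openedAt = null;
  }
}

// Try each provider in order, skipping those whose circuit is open
async function withFallback(providers, breakers, callProvider) {
  let lastError;
  for (const provider of providers) {
    const breaker = breakers.get(provider.name);
    if (breaker.isOpen()) continue;
    try {
      const result = await callProvider(provider);
      breaker.recordSuccess();
      return result;
    } catch (error) {
      breaker.recordFailure();
      lastError = error;
    }
  }
  throw lastError ?? new Error("All providers failed or are circuit-broken");
}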
🛠️ Installation
npm i unified-ai-router
# OR
git clone https://github.com/mlibre/Unified-AI-Router
cd Unified-AI-Router
npm i

📖 Usage
📚 Basic Library Usage
This is the core AIRouter library - a JavaScript class that provides a unified interface for multiple LLM providers.
const AIRouter = require("unified-ai-router");
require("dotenv").config();
const providers = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  },
  {
    name: "google",
    apiKey: process.env.GEMINI_API_KEY,
    model: "gemini-2.5-pro",
    apiUrl: "https://generativelanguage.googleapis.com/v1beta/openai/"
  }
];

const llm = new AIRouter(providers);

const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain quantum computing in simple terms." }
];

// Wrapped in an async function because top-level await is not available in CommonJS
async function main() {
  const response = await llm.chatCompletion(messages, {
    temperature: 0.7
  });
  console.log(response); // the completion from the first provider that succeeded
}

main();

You can also provide an array of API keys for a single provider definition:
const providers = [
  {
    name: "openai",
    apiKey: [process.env.OPENAI_API_KEY_1, process.env.OPENAI_API_KEY_2],
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  }
];

🔌 OpenAI-Compatible Server
The OpenAI-compatible server provides a drop-in replacement for the OpenAI API. It routes requests through the unified router with fallback logic, ensuring high availability.
The server uses the provider configurations defined in the provider.js file and requires API keys set in a .env file.
Copy the example environment file:

cp .env.example .env

Edit .env and add your API keys for the desired providers (see 🔑 API Keys for sources).
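For example, assuming the variable names used in the library example above (your .env.example lists the exact names the project expects), a minimal .env could look like:

# .env - one key per provider you plan to use
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...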
Configure your providers in provider.js: add new providers or modify existing ones with the appropriate name, apiKey, model, and apiUrl for each provider you want to use.
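An entry presumably mirrors the provider objects from the library example; the following is only a sketch, assuming provider.js exports the providers array (the file shipped with the repo is the canonical reference):

// provider.js (illustrative shape only)
require("dotenv").config();

module.exports = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    model: "gpt-4",
    apiUrl: "https://api.openai.com/v1"
  }
];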
To start the server locally, run:
npm start

The server listens at http://localhost:3000/ and supports the following OpenAI-compatible endpoints:
- POST /v1/chat/completions - Chat completions (streaming and non-streaming)
- POST /chat/completions - Chat completions (streaming and non-streaming)
- GET /v1/models - List available models
- GET /models - List available models
- GET /health - Health check
- GET /v1/providers/status - Check the status of all configured providers
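Because the endpoints follow the OpenAI API, existing clients work unchanged. As a sketch, here is the official openai Node package pointed at the router (assumption: the router takes its provider keys from the server's .env, so the client-side apiKey value is just a placeholder):

const OpenAI = require("openai");

// Point the standard OpenAI client at the local router
const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "placeholder" // real provider keys live in the server's .env
});

async function main() {
  const stream = await client.chat.completions.create({
    model: "gpt-4", // assumption: the model that actually runs depends on your provider config
    messages: [{ role: "user", content: "Say hello in one sentence." }],
    stream: true
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();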
🧪 Testing
The project includes tests for the core library and the OpenAI-compatible server. To run the tests, use the following commands:
# Test chat completion
node tests/chat.js
# Test OpenAI server non-streaming
node tests/openai-server-non-stream.js
# Test OpenAI server streaming
node tests/openai-server-stream.js
# Test tool usage
node tests/tools.js

🌐 Deploying to Render.com
Ensure provider.js is configured and your API keys are set in .env (as above). Push to GitHub, then:
Dashboard:
- Create a Web Service on Render.com and connect your repo.
- Build Command: npm install
- Start Command: npm start
- Add env vars (e.g., OPENAI_API_KEY=sk-...).
- Deploy.
CLI:
curl -fsSL https://raw.githubusercontent.com/render-oss/cli/refs/heads/main/bin/install.sh | sh
render login
render services
render deploys create srv-d3f7iqmmcj7s73e67feg --commit HEAD --confirm --output text

Verify:
- Access https://your-service.onrender.com/models.
See Render docs for details.
🔧 Supported Providers
- OpenAI
- Google Gemini
- Grok
- OpenRouter
- Z.ai
- Groq
- Cohere
- Cerebras
- LLM7
- Any other OpenAI-compatible server (see the example below)
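Using a self-hosted, OpenAI-compatible server is just another provider entry. For example, a local Ollama instance serves an OpenAI-compatible API at /v1; this is a sketch where the model name is a placeholder for whatever model you have pulled:

const providers = [
  {
    name: "ollama",
    apiKey: "ollama", // Ollama ignores the key value (assumption: the field is still expected here)
    model: "llama3.1",
    apiUrl: "http://localhost:11434/v1"
  }
];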
🔑 API Keys
Get your API keys from the following providers:
- OpenAI: platform.openai.com/api-keys
- Google Gemini: aistudio.google.com/app/apikey
- Grok: console.x.ai
- OpenRouter: openrouter.ai/keys
- Z.ai: api.z.ai
- Groq: console.groq.com/keys
- Cohere: dashboard.cohere.com/api-keys
- Cerebras: cloud.cerebras.ai
- LLM7: token.llm7.io
📁 Project Structure
- main.js - Core AIRouter library implementing the unified interface and fallback logic
- provider.js - Configuration for supported AI providers
- openai-server.js - OpenAI-compatible API server
- tests/ - Comprehensive tests for the library, server, and tools
📄 License
MIT
