# llm-flow-router

v1.0.2
Tiny Express-like multiplexer for routing AI prompts to multiple providers (Grok, OpenAI, Anthropic, Gemini) with middleware support. Well suited to hybrid AI apps that need reliable fallbacks, logging, and rate limiting.
## Features

- Express-style prompt routing: `/gemini`, `/grok`, `/openai`, etc.
- Middleware chain: logger, fallback, and custom middleware
- Built-in providers with a unified chat format
- Fallback routes for zero downtime
- Lightweight: fewer than 10 files, minimal dependencies
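The fallback feature above follows a common try-providers-in-order pattern. Here is a minimal, self-contained sketch of that idea; the `withFallback` helper and the stub providers are illustrative stand-ins, not part of llm-flow-router's API:

```javascript
// Illustrative only: try each provider in order until one succeeds.
async function withFallback(providers, messages) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider(messages); // first success wins
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError ?? new Error('No providers configured');
}

// Example with stub providers: the first fails, the second answers.
const flaky = async () => { throw new Error('provider down'); };
const stable = async (msgs) => `answer to: ${msgs[0].content}`;

withFallback([flaky, stable], [{ role: 'user', content: 'hello' }])
  .then(console.log); // → "answer to: hello"
```

The library's `/safe` route presumably applies the same ordering logic across its built-in providers.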
## Installation

```bash
npm install llm-flow-router
```

Author: Divesh Sarkar
## Setup

Create a `.env` file in your project and set the following environment variables:

```
OPENAI_API_KEY=...
GROK_API_KEY=...
ANTHROPIC_API_KEY=...
GEMINI_API_KEY=...
```

## Usage
```js
import 'dotenv/config'; // load API keys from .env (if not already loaded)
import { mux } from 'llm-flow-router';

const messages = [
  { role: 'user', content: "Explain quantum computing like I'm 12." }
];

async function demo() {
  // Fast & cheap: Gemini 2.5 Flash
  const gemini = await mux.handle({ path: '/gemini', messages });
  console.log('Gemini:', gemini);

  // Reliable fallback
  const safe = await mux.handle({ path: '/safe', messages });
  console.log('Safe fallback:', safe);
}

demo();
```

## Built-in Routes
- `/gemini` → Gemini
- `/grok` → Grok (xAI)
- `/openai` → OpenAI
- `/claude` → Anthropic Claude
- `/safe` → Fallback: Gemini → Grok → OpenAI
- `/default` → Gemini

## Custom Middleware Example
```js
mux.use(async (prompt, next) => {
  console.log('Custom middleware running');
  return next(); // pass control to the next middleware / provider
});