
llm-router-mcp v1.0.1

MCP server that routes prompts across Claude, Gemini, and GPT-4o to minimise token cost.

LLM Router MCP

Route prompts intelligently across Claude, Gemini, and GPT-4o — automatically picking the best model for every task while minimising token cost.



✨ What is this?

llm-router-mcp is a Model Context Protocol (MCP) server that acts as an intelligent dispatcher for your AI workloads. Instead of hardcoding a single LLM into your workflow, the router analyses the intent of each prompt and automatically selects the most cost-effective and capable model for that specific task type.

Your prompt ──► LLM Router ──┬── PLANNING → Gemini
                             ├── CODEGEN  → Claude
                             ├── TESTING  → GPT-4o
                             └── REVIEW   → Claude

🚀 Features

  • Smart intent-based routing — classifies prompts into task categories (planning, codegen, testing, review) and dispatches to the optimal model
  • Three frontier models — integrates Claude (Anthropic), Gemini (Google), and GPT-4o (OpenAI) out of the box
  • Session-aware context caching — maintains conversation context across turns within the same session, enabling multi-turn workflows
  • Explicit tool shortcuts — bypass auto-routing with dedicated plan_workflow and implement_feature tools
  • Mock mode — run the full server locally without API keys for testing and CI pipelines
  • MCP-native — drop it into any MCP-compatible host (Claude Desktop, Cursor, VS Code Continue, etc.)

🗺️ Routing Logic

| Task Category | Trigger Keywords / Tool | Routed To |
| --- | --- | --- |
| Planning | architecture, design, system design, strategy | ✦ Gemini |
| Code Generation | write, implement, build, create a function | ✦ Claude |
| Testing | unit tests, Jest, test suite, test cases | ✦ GPT-4o |
| Code Review | review, what's wrong, analyse, critique | ✦ Claude |
| `plan_workflow` tool | explicit planning requests | ✦ Gemini |
| `implement_feature` tool | explicit implementation requests | ✦ GPT-4o |
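The keyword-driven routing in the table above can be sketched in a few lines of TypeScript. This is a minimal illustration of the technique, not the actual implementation in src/index.ts; the names `classifyIntent` and `MODEL_FOR` are hypothetical:

```typescript
type Intent = "planning" | "codegen" | "testing" | "review";

// Hypothetical model map mirroring the routing table.
const MODEL_FOR: Record<Intent, string> = {
  planning: "Gemini",
  codegen: "Claude",
  testing: "GPT-4o",
  review: "Claude",
};

// Classify a prompt by matching trigger keywords, checking the more
// specific categories first and falling back to code generation.
function classifyIntent(prompt: string): Intent {
  const p = prompt.toLowerCase();
  if (/architecture|system design|\bdesign\b|strategy/.test(p)) return "planning";
  if (/unit test|jest|test suite|test case/.test(p)) return "testing";
  if (/review|what's wrong|analyse|critique/.test(p)) return "review";
  return "codegen"; // write / implement / build / create a function
}

// classifyIntent("Design a checkout flow")          → "planning"
// classifyIntent("Implement the CRUD endpoints")    → "codegen"
```

Checking planning, testing, and review before defaulting to codegen keeps generic verbs like "write" from shadowing the more specific categories.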


📦 Installation

Prerequisites

  • Node.js 20+
  • API keys for the models you want to use

Clone & install

git clone https://github.com/Devatva24/llm-router.git
cd llm-router
npm install

Build

npm run build

⚙️ Configuration

Set your API keys as environment variables before starting the server:

export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=AIza...
export OPENAI_API_KEY=sk-...
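Since the server also offers a mock mode, one natural pattern is to fall back to mock behaviour when no keys are present. The sketch below is a hypothetical illustration of that decision, not the server's actual startup code; `shouldUseMock` and the `MOCK_MODE` variable name are assumptions:

```typescript
// Decide whether to run in mock mode: either it was requested
// explicitly, or none of the provider API keys are configured.
function shouldUseMock(env: Record<string, string | undefined>): boolean {
  const hasAnyKey = ["ANTHROPIC_API_KEY", "GOOGLE_API_KEY", "OPENAI_API_KEY"]
    .some((key) => Boolean(env[key]));
  return env.MOCK_MODE === "1" || !hasAnyKey;
}
```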

VS Code (Continue)

Add the following to your ~/.continue/config.json (%USERPROFILE%\.continue\config.json on Windows); the cursor-config/ directory in this repo contains ready-made configs:

{
  "mcpServers": {
    "llm-router": {
      "command": "node",
      "args": ["path/to/llm-router/dist/index.js"]
    }
  }
}

Cursor

Follow the same pattern in your Cursor MCP settings, pointing args at the built dist/index.js.
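For reference, a minimal Cursor entry (typically in ~/.cursor/mcp.json) would look like the sketch below. The path is a placeholder, and the optional env block is one way to supply keys per-server rather than exporting them in your shell:

```json
{
  "mcpServers": {
    "llm-router": {
      "command": "node",
      "args": ["path/to/llm-router/dist/index.js"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-...",
        "GOOGLE_API_KEY": "AIza...",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```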


🛠️ Usage

Start the server

npm start

Start in mock mode (no API keys needed)

npm run start:mock

Development mode (TypeScript, no build step)

npm run dev

🧰 Available MCP Tools

route_prompt

Automatically classifies and routes a prompt to the best model.

{
  "prompt": "Write a recursive function to flatten deeply nested objects",
  "session_id": "my-session"
}

plan_workflow

Explicitly routes to Gemini for high-level planning and architecture tasks.

{
  "prompt": "Design a checkout flow for an e-commerce app",
  "session_id": "my-session"
}

implement_feature

Explicitly routes to GPT-4o for feature implementation.

{
  "prompt": "Implement the /api/products CRUD endpoints",
  "session_id": "my-session"
}

clear_context

Clears the cached conversation context for a given session.

{
  "session_id": "my-session"
}

🧪 Running Tests

The test suite spins up the server in mock mode and validates routing decisions end-to-end:

npm test

Expected output:

🧪 LLM Router — mock test suite

  ✅ planning → Gemini
  ✅ codegen → Claude
  ✅ testing → GPT-4o
  ✅ review → Claude
  ✅ explicit plan_workflow
  ✅ explicit implement_feature
  ✅ context cache — 2nd turn
  ✅ clear_context

──────────────────────────────────────────
  8 passed  0 failed (8 total)

📁 Project Structure

llm-router/
├── src/                    # TypeScript source
│   └── index.ts            # MCP server entrypoint & routing logic
├── dist/                   # Compiled JavaScript (after npm run build)
├── cursor-config/          # Ready-made Cursor MCP config
├── test-router.cjs         # End-to-end test suite (mock mode)
├── tsconfig.json
└── package.json

🤝 Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request for:

  • Adding support for additional LLM providers
  • Improving routing classification accuracy
  • Adding streaming response support
  • Writing more test coverage

📄 License

MIT — see LICENSE for details.