openrouter-mcp-server
v1.2.2
MCP server providing access to OpenRouter's unified API for 500+ AI models
⚡ OpenRouter MCP
Every AI model. One terminal. Zero context switching.
Getting Started · Features · Tools · Development
The Problem
You're in your AI coding assistant. You need a quick GPT-4 opinion. Or a Flux-generated image. Or a side-by-side comparison across three models. That means: leave your editor, open a browser, find the right API, copy-paste keys, lose your flow...
The Fix
Add it to your MCP config and go:
```json
{
  "openrouter": {
    "command": "npx",
    "args": ["-y", "openrouter-mcp-server"],
    "env": {
      "OPENROUTER_API_KEY": "sk-or-v1-your-key-here"
    }
  }
}
```

Now every model on OpenRouter is one tool call away. Chat, image gen, model search, cost tracking — all inline.
⚡ Getting Started
Step 1 — Get an API key from openrouter.ai/keys
Step 2 — Add to your MCP config (~/.claude/settings.json or your app's MCP settings):
```json
{
  "mcpServers": {
    "openrouter": {
      "command": "npx",
      "args": ["-y", "openrouter-mcp-server"],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-v1-your-key-here"
      }
    }
  }
}
```

Done. The server starts automatically when your MCP client connects.
To run from source instead:

```shell
git clone https://github.com/overtimepog/OpenrouterMCP.git
cd OpenrouterMCP
bash scripts/setup.sh
```

🎯 Features
📡 MCP Tools
**Chat** (`chat`)

| Parameter | Type | Required | Description |
|:----------|:-----|:---------|:------------|
| model | string | Yes | Model ID (e.g. openai/gpt-4) |
| messages | array | Yes | [{ role, content }] message array |
| session_id | string | | Continue an existing conversation |
| stream | boolean | | Stream response (default: true) |
| temperature | number | | Randomness 0–2 |
| max_tokens | number | | Max tokens to generate |
| tools | array | | OpenAI-compatible function definitions |
| tool_choice | string | | auto / none / required |
| top_p | number | | Nucleus sampling threshold |
| top_k | number | | Top-K sampling |
| frequency_penalty | number | | Frequency penalty (−2 to 2) |
| presence_penalty | number | | Presence penalty (−2 to 2) |
| reasoning | object | | Reasoning tokens ({ effort }) |
| provider | object | | Provider routing preferences |
| models | array | | Fallback model list for auto-routing |
| plugins | array | | OpenRouter plugins (e.g. web search) |
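As an illustration, a minimal chat call might pass arguments like the following (the model ID and values are examples, not recommendations — use the model search tool to find current IDs):

```json
{
  "model": "openai/gpt-4",
  "messages": [
    { "role": "user", "content": "Summarize this diff in one sentence." }
  ],
  "temperature": 0.2,
  "max_tokens": 256
}
```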
**Search models** (`searchModels`)

| Parameter | Type | Description |
|:----------|:-----|:------------|
| provider | string | Filter by provider (openai, anthropic, …) |
| keyword | string | Search in model names |
| min_context_length | number | Minimum context window |
| max_context_length | number | Maximum context window |
| modality | string | text, image, audio |
| min_price / max_price | number | Price range per token |
| supports_tools | boolean | Function calling support |
| supports_streaming | boolean | Streaming support |
| supports_temperature | boolean | Temperature parameter support |
| sort_by | string | price, context_length, provider |
| sort_order | string | asc or desc |
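For example, these arguments would find the cheapest tool-capable models with at least a 100K context window (values are illustrative):

```json
{
  "min_context_length": 100000,
  "supports_tools": true,
  "sort_by": "price",
  "sort_order": "asc"
}
```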
**List models** (`listModels`)

| Parameter | Type | Description |
|:----------|:-----|:------------|
| provider | string | Filter by provider |
| keyword | string | Search in model names |
| min_context_length | number | Minimum context window |
| max_context_length | number | Maximum context window |
| modality | string | Filter by modality |
| min_price / max_price | number | Price range |
**Image generation** (`imageGeneration`)

| Parameter | Type | Required | Description |
|:----------|:-----|:---------|:------------|
| model | string | Yes | Image model ID |
| prompt | string | Yes | Image description |
| aspect_ratio | string | | 1:1, 16:9, 9:16, etc. |
| image_size | string | | 1K, 2K, or 4K |
| save_path | string | | Local save path (.png, .jpg, .webp) |
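A sample image request might look like this (the model slug is hypothetical — search for current image models rather than hardcoding one):

```json
{
  "model": "black-forest-labs/flux-1.1-pro",
  "prompt": "A lighthouse at dusk, watercolor style",
  "aspect_ratio": "16:9",
  "save_path": "./lighthouse.png"
}
```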
**Credits** (`credits`)

No parameters. Returns credit limit, remaining balance, total usage, and daily/weekly/monthly breakdowns.
**Cost summary** (`costSummary`)

| Parameter | Type | Description |
|:----------|:-----|:------------|
| session_id | string | Costs for a specific session |
| recent_only | boolean | Only show recent entries |
**Generation lookup** (`generation`)

| Parameter | Type | Required | Description |
|:----------|:-----|:---------|:------------|
| generation_id | string | Yes | The generation ID to look up |
Returns tokens, cost, latency, model, and provider info.
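A lookup takes only the ID returned alongside a completion (the ID below is a placeholder):

```json
{
  "generation_id": "gen-1234567890"
}
```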
**Model endpoints** (`modelEndpoints`)

| Parameter | Type | Required | Description |
|:----------|:-----|:---------|:------------|
| model_slug | string | Yes | Model slug (e.g. openai/gpt-4) |
Returns all available providers with latency, uptime, pricing, and capabilities.
📁 Project Structure
```
OpenrouterMCP/
└── src/                TypeScript MCP server
    ├── index.ts        Entry point
    ├── server/         Server bootstrap
    ├── api/            OpenRouter client, cache, rate limits
    ├── session/        Multi-turn conversation management
    ├── cost/           Cost tracking engine
    ├── schemas/        Shared Zod schemas
    ├── tools/          8 tool implementations
    │   ├── chat/
    │   ├── searchModels/
    │   ├── listModels/
    │   ├── imageGeneration/
    │   ├── credits/
    │   ├── costSummary/
    │   ├── generation/
    │   └── modelEndpoints/
    └── utils/          Logger, model validation
```

🛠 Development
```shell
npm install     # dependencies
npm run build   # compile
npm test        # 383 tests
npm run dev     # watch mode
```

🔍 Troubleshooting
**Invalid or missing API key**

```shell
echo 'export OPENROUTER_API_KEY=sk-or-v1-your-key' >> ~/.zshrc && source ~/.zshrc
```

Verify at openrouter.ai/keys that the key is correct and active.
**Model not found**

Model IDs use the format `provider/model-name` (e.g. `openai/gpt-4`). Use the `openrouter_search_models` tool to find current models — never hardcode IDs.

**Rate limits**

The server warns before you hit limits. Upgrade your OpenRouter plan or space out requests.
**Server not connecting**

- Check the key is set: `echo $OPENROUTER_API_KEY`
- Verify your MCP config has the correct server entry
- Restart your MCP client
- Run `npx -y openrouter-mcp-server` directly to check for errors
MIT License
