
ccr-next v1.1.6

Use Claude Code without an Anthropic account and route it to another LLM provider

Downloads: 41

Claude Code Router

A powerful routing proxy that enables Claude Code to work with any LLM provider - no Anthropic account required. Route requests to different models based on context, cost, or custom rules.

✨ Key Features

  • 🔀 Intelligent Model Routing: Automatically route requests to different models based on task type (background tasks, reasoning, long context, web search)
  • 🌐 Universal Provider Support: Works with OpenAI, Azure OpenAI, OpenRouter, DeepSeek, Ollama, Gemini, Volcengine, Alibaba Cloud, and any OpenAI-compatible API
  • 🔧 Request/Response Transformation: Built-in transformers adapt requests for different provider APIs automatically
  • 💱 Dynamic Model Switching: Switch models on-the-fly using /model provider,model command in Claude Code
  • 🤖 GitHub Actions Integration: Run Claude Code in CI/CD pipelines with custom models
  • 🔌 Extensible Plugin System: Create custom transformers and routing logic
  • 🔒 Security Features: API key authentication and host restrictions for secure deployment
  • 📊 Cost Optimization: Route background tasks to cheaper/local models automatically

🚀 Getting Started

1. Installation

First, ensure you have Claude Code installed:

npm install -g @anthropic-ai/claude-code

Then, install Claude Code Router:

npm install -g @musistudio/claude-code-router

2. Configuration

Create your configuration file at ~/.claude-code-router/config.json. See config.example.json for a complete example.

Configuration Options

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| PROXY_URL | string | HTTP proxy for API requests | - |
| LOG | boolean | Enable logging to ~/.claude-code-router.log | false |
| APIKEY | string | API key for authentication (Bearer token or x-api-key header) | - |
| HOST | string | Server host address (restricted to 127.0.0.1 without APIKEY) | 127.0.0.1 |
| API_TIMEOUT_MS | number | API request timeout in milliseconds | 600000 |
| Providers | array | Model provider configurations | Required |
| Router | object | Routing rules for different scenarios | Required |
| CUSTOM_ROUTER_PATH | string | Path to custom routing logic | - |

Example Configuration

{
  "APIKEY": "your-secret-key",
  "PROXY_URL": "http://127.0.0.1:7890",
  "LOG": true,
  "API_TIMEOUT_MS": 600000,
  "Providers": [
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "sk-xxx",
      "models": [
        "google/gemini-2.5-pro-preview",
        "anthropic/claude-sonnet-4",
        "anthropic/claude-3.5-sonnet",
        "anthropic/claude-3.7-sonnet:thinking"
      ],
      "transformer": {
        "use": ["openrouter"]
      }
    },
    {
      "name": "deepseek",
      "api_base_url": "https://api.deepseek.com/chat/completions",
      "api_key": "sk-xxx",
      "models": ["deepseek-chat", "deepseek-reasoner"],
      "transformer": {
        "use": ["deepseek"],
        "deepseek-chat": {
          "use": ["tooluse"]
        }
      }
    },
    {
      "name": "ollama",
      "api_base_url": "http://localhost:11434/v1/chat/completions",
      "api_key": "ollama",
      "models": ["qwen2.5-coder:latest"]
    },
    {
      "name": "gemini",
      "api_base_url": "https://generativelanguage.googleapis.com/v1beta/models/",
      "api_key": "sk-xxx",
      "models": ["gemini-2.5-flash", "gemini-2.5-pro"],
      "transformer": {
        "use": ["gemini"]
      }
    },
    {
      "name": "volcengine",
      "api_base_url": "https://ark.cn-beijing.volces.com/api/v3/chat/completions",
      "api_key": "sk-xxx",
      "models": ["deepseek-v3-250324", "deepseek-r1-250528"],
      "transformer": {
        "use": ["deepseek"]
      }
    },
    {
      "name": "modelscope",
      "api_base_url": "https://api-inference.modelscope.cn/v1/chat/completions",
      "api_key": "",
      "models": ["Qwen/Qwen3-Coder-480B-A35B-Instruct", "Qwen/Qwen3-235B-A22B-Thinking-2507"],
      "transformer": {
        "use": [
          [
            "maxtoken",
            {
              "max_tokens": 65536
            }
          ],
          "enhancetool"
        ],
        "Qwen/Qwen3-235B-A22B-Thinking-2507": {
          "use": ["reasoning"]
        }
      }
    },
    {
      "name": "dashscope",
      "api_base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions",
      "api_key": "",
      "models": ["qwen3-coder-plus"],
      "transformer": {
        "use": [
          [
            "maxtoken",
            {
              "max_tokens": 65536
            }
          ],
          "enhancetool"
        ]
      }
    }
  ],
  "Router": {
    "default": "deepseek,deepseek-chat",
    "background": "ollama,qwen2.5-coder:latest",
    "think": "deepseek,deepseek-reasoner",
    "longContext": "openrouter,google/gemini-2.5-pro-preview",
    "longContextThreshold": 60000,
    "webSearch": "gemini,gemini-2.5-flash"
  }
}

3. Usage

Quick Start

# Add OpenAI as a provider (recommended: gpt-4.1 for best performance)
ccr provider add openai https://api.openai.com/v1/chat/completions YOUR-API-KEY gpt-4.1

# Start Claude Code with the router
ccr code

# Check server status
ccr status

# Restart after config changes
ccr restart

# Stop the router
ccr stop

Command Reference

| Command | Description |
|---------|-------------|
| ccr start | Start the router server |
| ccr stop | Stop the router server |
| ccr restart | Restart the router server |
| ccr status | Check server status |
| ccr code [prompt] | Run Claude Code through the router |

Providers

The Providers array is where you define the different model providers you want to use. Each provider object takes the following fields:

  • name: A unique name for the provider.
  • api_base_url: The full API endpoint for chat completions.
  • api_key: Your API key for the provider.
  • models: A list of model names available from this provider.
  • transformer (optional): Specifies transformers to process requests and responses.
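Putting these fields together, a minimal provider entry needs only the first four. For example, the local Ollama provider from the example config above:

```json
{
  "name": "ollama",
  "api_base_url": "http://localhost:11434/v1/chat/completions",
  "api_key": "ollama",
  "models": ["qwen2.5-coder:latest"]
}
```

Since Ollama speaks the standard OpenAI chat-completions format, no transformer is required here.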

Transformers

Transformers allow you to modify the request and response payloads to ensure compatibility with different provider APIs.

  • Global Transformer: Apply a transformer to all models from a provider. In this example, the openrouter transformer is applied to all models under the openrouter provider.

    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1/chat/completions",
      "api_key": "sk-xxx",
      "models": [
        "google/gemini-2.5-pro-preview",
        "anthropic/claude-sonnet-4",
        "anthropic/claude-3.5-sonnet"
      ],
      "transformer": { "use": ["openrouter"] }
    }
  • Model-Specific Transformer: Apply a transformer to a specific model. In this example, the deepseek transformer is applied to all models, and an additional tooluse transformer is applied only to the deepseek-chat model.

    {
      "name": "deepseek",
      "api_base_url": "https://api.deepseek.com/chat/completions",
      "api_key": "sk-xxx",
      "models": ["deepseek-chat", "deepseek-reasoner"],
      "transformer": {
        "use": ["deepseek"],
        "deepseek-chat": { "use": ["tooluse"] }
      }
    }
  • Passing Options to a Transformer: Some transformers, like maxtoken, accept options. To pass options, use a nested array where the first element is the transformer name and the second is an options object.

    {
      "name": "siliconflow",
      "api_base_url": "https://api.siliconflow.cn/v1/chat/completions",
      "api_key": "sk-xxx",
      "models": ["moonshotai/Kimi-K2-Instruct"],
      "transformer": {
        "use": [
          [
            "maxtoken",
            {
              "max_tokens": 16384
            }
          ]
        ]
      }
    }

Available Built-in Transformers:

| Transformer | Description | Use Case |
|-------------|-------------|----------|
| deepseek | Adapts for DeepSeek API | DeepSeek models |
| gemini | Adapts for Gemini API | Google Gemini models |
| openrouter | Adapts for OpenRouter API | OpenRouter models |
| groq | Adapts for Groq API | Groq-hosted models |
| maxtoken | Sets custom max_tokens | Token limit control |
| tooluse | Optimizes tool_choice | Tool-capable models |
| enhancetool | Enhanced tool handling | Advanced tool usage |
| reasoning | Optimizes for reasoning | Thinking/reasoning models |
| gemini-cli | Unofficial Gemini support | Experimental |

Note: OpenAI and Azure OpenAI providers don't require transformers as they use the standard OpenAI API format, which is the default format used by the router.
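For instance, an OpenAI provider entry can omit the transformer field entirely (the api_key value here is a placeholder):

```json
{
  "name": "openai",
  "api_base_url": "https://api.openai.com/v1/chat/completions",
  "api_key": "sk-xxx",
  "models": ["gpt-4.1"]
}
```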

Custom Transformers:

You can also create your own transformers and load them via the transformers field in config.json.

{
  "transformers": [
    {
      "path": "$HOME/.claude-code-router/plugins/gemini-cli.js",
      "options": {
        "project": "xxx"
      }
    }
  ]
}

Router

The Router object defines which model to use for different scenarios:

  • default: The default model for general tasks.
  • background: A model for background tasks. This can be a smaller, local model to save costs.
  • think: A model for reasoning-heavy tasks, like Plan Mode.
  • longContext: A model for handling long contexts (e.g., > 60K tokens).
  • longContextThreshold (optional): The token count threshold for triggering the long context model. Defaults to 60000 if not specified.
  • webSearch: The model used for web search tasks; the model itself must support this feature. If you're using openrouter, add the :online suffix after the model name.
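For example, assuming an openrouter provider is configured, a web search route could look like this (the :online suffix is OpenRouter-specific):

```json
{
  "Router": {
    "default": "deepseek,deepseek-chat",
    "webSearch": "openrouter,google/gemini-2.5-pro-preview:online"
  }
}
```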

Dynamic Model Switching:

Switch models on-the-fly in Claude Code:

/model provider_name,model_name

Examples:

  • /model openrouter,anthropic/claude-3.5-sonnet
  • /model deepseek,deepseek-chat
  • /model gemini,gemini-2.5-pro

Custom Router

For more advanced routing logic, you can specify a custom router script via the CUSTOM_ROUTER_PATH in your config.json. This allows you to implement complex routing rules beyond the default scenarios.

In your config.json:

{
  "CUSTOM_ROUTER_PATH": "$HOME/.claude-code-router/custom-router.js"
}

The custom router file must be a JavaScript module that exports an async function. This function receives the request object and the config object as arguments and should return the provider and model name as a string (e.g., "provider_name,model_name"), or null to fall back to the default router.

Here is an example of a custom-router.js based on custom-router.example.js:

// $HOME/.claude-code-router/custom-router.js

/**
 * A custom router function to determine which model to use based on the request.
 *
 * @param {object} req - The request object from Claude Code, containing the request body.
 * @param {object} config - The application's config object.
 * @returns {Promise<string|null>} - A promise that resolves to the "provider,model_name" string, or null to use the default router.
 */
module.exports = async function router(req, config) {
  const userMessage = req.body.messages.find((m) => m.role === "user")?.content;

  if (userMessage && userMessage.includes("explain this code")) {
    // Use a powerful model for code explanation
    return "openrouter,anthropic/claude-3.5-sonnet";
  }

  // Fallback to the default router configuration
  return null;
};
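As another sketch, a custom router can defer oversized requests to the Router.longContext model configured above. The ~4-characters-per-token estimate below is a rough heuristic of this example, not something the package provides:

```javascript
// Hypothetical example: route long requests to the configured longContext model.
// Assumes config.Router follows the shape shown in the example configuration.
async function customRouter(req, config) {
  const messages = (req.body && req.body.messages) || [];

  // Rough token estimate: ~4 characters per token (an assumption, not an API).
  const estimatedTokens = Math.ceil(JSON.stringify(messages).length / 4);
  const threshold = config.Router?.longContextThreshold ?? 60000;

  if (estimatedTokens > threshold && config.Router?.longContext) {
    // e.g. "openrouter,google/gemini-2.5-pro-preview"
    return config.Router.longContext;
  }

  // Fall back to the default router
  return null;
}

module.exports = customRouter;
```

Because the function receives the full config object, it can reuse values like longContextThreshold instead of hard-coding them.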

🤖 GitHub Actions Integration

Use Claude Code Router in your CI/CD pipelines to leverage different models for automated tasks. After setting up Claude Code Actions, modify your workflow:

name: Claude Code

on:
  issue_comment:
    types: [created]
  # ... other triggers

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      # ... other conditions
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Prepare Environment
        run: |
          curl -fsSL https://bun.sh/install | bash
          mkdir -p $HOME/.claude-code-router
          cat << 'EOF' > $HOME/.claude-code-router/config.json
          {
            "log": true,
            "OPENAI_API_KEY": "${{ secrets.OPENAI_API_KEY }}",
            "OPENAI_BASE_URL": "https://api.deepseek.com",
            "OPENAI_MODEL": "deepseek-chat"
          }
          EOF
        shell: bash

      - name: Start Claude Code Router
        run: |
          nohup ~/.bun/bin/bunx @musistudio/claude-code-router start &
        shell: bash

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        env:
          ANTHROPIC_BASE_URL: http://localhost:3456
        with:
          anthropic_api_key: "any-string-is-ok"

This enables cost-effective automations:

  • Use cheaper models for routine tasks
  • Schedule resource-intensive operations during off-peak hours
  • Route to specialized models based on task requirements

🆕 Recent Improvements

v1.1.0 Features

  • 🔄 Intelligent Retry Logic: Automatic retries with exponential backoff for transient failures
  • 🛡️ Circuit Breaker: Prevents cascading failures by temporarily disabling failing providers
  • 📋 Configuration Validation: JSON schema validation with helpful error messages
  • 🔥 Hot Configuration Reload: Update config without restarting the service
  • 📊 Enhanced Logging: Structured logging with Winston, log rotation, and debug mode
  • ⚡ Graceful Shutdown: Proper cleanup and connection draining on service stop
  • 🎯 Better Error Messages: Clear, actionable error messages with suggested fixes

🔧 Troubleshooting

Common Issues

  1. "API error (connection error)"

    • Check your API keys and base URLs
    • Verify network connectivity and proxy settings
    • Ensure the router service is running (ccr status)
    • Check logs at ~/.claude-code-router/logs/ for details
  2. Model not responding

    • Verify the model name in your config
    • Check if the provider supports the model
    • Review transformer compatibility
    • Enable debug mode: set LOG_LEVEL: "debug" in config
  3. "No allowed providers available"

    • Ensure at least one provider is configured
    • Check provider API key validity
    • Verify model names match provider's supported models
    • Run config validation: The service will validate on startup
  4. Configuration errors

    • The service now validates configuration on startup
    • Check for detailed error messages in the console
    • See Configuration Schema for all options

📝 Documentation

📄 License

This project is licensed under the MIT License.