
modelrelay v1.7.0

🚀 modelrelay


Join our Discord for discussions, feature requests, and community support.


🔥 100% Free • Auto-Routing • 80+ Models • 10+ Providers • OpenAI-Compatible

modelrelay is an OpenAI-compatible local router that benchmarks free coding models across top providers and automatically forwards your requests to the best available model.

✨ Why use modelrelay?

  • 💸 Completely Free: Stop paying for API usage. We seamlessly provide access to robust free models.
  • 🧠 State-of-the-Art (SOTA) Models: Out-of-the-box availability for top-tier models including Kimi K2.5, Minimax M2.5, GLM 5, Deepseek V3.2, and more.
  • 🏢 Reliable Providers: Requests are routed securely through trusted, high-performance platforms like NVIDIA, Groq, OpenRouter, and Google.
  • ⚡ Lightning Fast: The built-in benchmark continually measures each model and picks the fastest, most capable LLM for your request.
  • 🔄 OpenAI-Compatible: A drop-in replacement that works seamlessly with your existing tools, scripts, and workflows.

🚀 Install

npm install -g modelrelay

⚡ Quick Start

# 1) Onboard: save provider API keys and optionally auto-configure integrations
modelrelay onboard

# 2) Start the local router (default port 7352)
modelrelay

Router endpoint:

  • Base URL: http://127.0.0.1:7352/v1
  • API key: any string
  • Model: auto-fastest (the router picks the actual backend)
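Any OpenAI-compatible client can talk to this endpoint. As a minimal sketch using only the Python standard library (the prompt text, timeout, and error handling here are illustrative, not part of modelrelay):

```python
import json
import urllib.error
import urllib.request

# modelrelay exposes an OpenAI-compatible endpoint on the default port.
BASE_URL = "http://127.0.0.1:7352/v1"

payload = {
    "model": "auto-fastest",  # the router substitutes the best real backend
    "messages": [{"role": "user", "content": "Write a one-line hello world."}],
}
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer any-string",  # the router accepts any key
    },
)

try:
    with urllib.request.urlopen(request, timeout=30) as response:
        reply = json.load(response)
        print(reply["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError):
    # The router is not running; start it first with `modelrelay`.
    print(f"Could not reach modelrelay at {BASE_URL}")
```

The same base URL, dummy key, and `auto-fastest` model name work in the official OpenAI SDKs by overriding the client's base URL.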

OpenCode Quick Start

modelrelay onboard can auto-configure OpenCode.

If you want manual setup, put this in ~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "router": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "modelrelay",
      "options": {
        "baseURL": "http://127.0.0.1:7352/v1",
        "apiKey": "dummy-key"
      },
      "models": {
        "auto-fastest": {
          "name": "Auto Fastest"
        }
      }
    }
  },
  "model": "router/auto-fastest"
}

OpenClaw Quick Start

modelrelay onboard can auto-configure OpenClaw.

If you want manual setup, merge this into ~/.openclaw/openclaw.json:

{
  "models": {
    "providers": {
      "modelrelay": {
        "baseUrl": "http://127.0.0.1:7352/v1",
        "api": "openai-completions",
        "apiKey": "no-key",
        "models": [
          { "id": "auto-fastest", "name": "Auto Fastest" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "modelrelay/auto-fastest"
      },
      "models": {
        "modelrelay/auto-fastest": {}
      }
    }
  }
}

CLI

modelrelay [--port <number>] [--log] [--ban <model1,model2>]
modelrelay onboard [--port <number>]
modelrelay install --autostart
modelrelay start --autostart
modelrelay uninstall --autostart
modelrelay status --autostart
modelrelay update
modelrelay autoupdate [--enable|--disable|--status] [--interval <hours>]
modelrelay autostart [--install|--start|--uninstall|--status]

Request terminal logging is disabled by default. Use --log to enable it.

modelrelay install --autostart also triggers an immediate start attempt so you do not need a separate command after install.

During modelrelay onboard, you will also be prompted to enable auto-start on login.

modelrelay update upgrades the global npm package and, when autostart is configured, stops the background service first and starts it again after the update.

Auto-update is enabled by default. While the router is running, modelrelay checks npm periodically (default: every 24 hours) and applies updates automatically.

Use modelrelay autoupdate --status to inspect state, modelrelay autoupdate --disable to turn it off, and modelrelay autoupdate --enable --interval 12 to re-enable with a custom interval.

Config

  • Router config file: ~/.modelrelay.json
  • API key env overrides:
    • NVIDIA_API_KEY
    • GROQ_API_KEY
    • CEREBRAS_API_KEY
    • SAMBANOVA_API_KEY
    • OPENROUTER_API_KEY
    • CODESTRAL_API_KEY
    • HYPERBOLIC_API_KEY
    • SCALEWAY_API_KEY
    • QWEN_CODE_API_KEY (or DASHSCOPE_API_KEY)
    • GOOGLE_API_KEY

For Qwen Code, modelrelay supports both API keys and Qwen OAuth cached credentials (~/.qwen/oauth_creds.json). If OAuth credentials exist, modelrelay uses them and refreshes access tokens automatically. You can also start OAuth directly from the Web UI Providers tab using “Login with Qwen Code”.


⭐️ If you find modelrelay useful, please consider starring the repo!