
clawmux v0.3.9 · 2,232 downloads

ClawMux

Smart model routing + context compression proxy for OpenClaw.

Features

  • 🧠 Smart Routing: Embedding-based semantic classification → LIGHT/MEDIUM/HEAVY tier → automatic model selection
  • 📦 Context Compression: Preemptive background summarization at configurable threshold (default 75%)
  • 🔌 All Providers: Supports all OpenClaw providers via 6 API format adapters
  • 🔐 Zero Config Auth: Uses OpenClaw's existing provider credentials — no separate API keys
  • 📊 Cost Tracking: Real-time savings stats at /stats endpoint
  • 🔄 Hot Reload: Config changes apply without restart

Installation

Copy and paste this into your OpenClaw agent:

Install and configure ClawMux by following the instructions here:
curl -s https://raw.githubusercontent.com/HyeokjaeLee/ClawMux/refs/heads/main/docs/guide/installation.md

Configuration

ClawMux stores its config at ~/.openclaw/clawmux.json (next to openclaw.json). clawmux init creates it automatically. You can also copy clawmux.example.json as a starting point:

cp clawmux.example.json ~/.openclaw/clawmux.json

Adjust as needed:

{
  "compression": {
    "threshold": 0.75,       // trigger compression at 75% of context window
    "model": "anthropic/claude-3-5-haiku-20241022",  // model used for summarization (provider/model format)
    "targetRatio": 0.6       // compress to 60% of original token count
  },
  "routing": {
    "models": {
      "LIGHT": "anthropic/claude-3-5-haiku-20241022",
      "MEDIUM": "anthropic/claude-sonnet-4-20250514",
      "HEAVY": "anthropic/claude-opus-4-20250514"
      // Model IDs use 'provider/model' format. Do NOT use "clawmux" as provider — causes infinite loops
    }
  },
  "server": {
    "port": 3456,
    "host": "127.0.0.1"
  }
}

Config is watched for changes. Edit ~/.openclaw/clawmux.json while the proxy is running and it reloads automatically. Override the path with CLAWMUX_CONFIG=/path/to/clawmux.json.

Cross-Provider Routing

Mix models from different providers by tier. ClawMux automatically translates request and response formats between providers:

{
  "routing": {
    "models": {
      "LIGHT": "zai/glm-5",                          // ZAI (openai-completions)
      "MEDIUM": "anthropic/claude-sonnet-4-20250514",  // Anthropic (anthropic-messages)
      "HEAVY": "openai/gpt-5.4"                       // OpenAI (openai-completions)
    }
  }
}

All three providers must be configured in your openclaw.json. ClawMux handles format translation transparently — a request arriving in Anthropic format gets translated to OpenAI format when routed to GPT, and the response is translated back to Anthropic format before returning to OpenClaw.

Supported translation pairs: Anthropic ↔ OpenAI ↔ Google ↔ Ollama ↔ Bedrock (all combinations).
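The translation step can be pictured as a pair of field mappings. A minimal sketch of one direction (Anthropic → OpenAI); the types and the `anthropicToOpenAI` function are illustrative stand-ins, not ClawMux's actual adapter code:

```typescript
// Hypothetical sketch of cross-provider format translation.
// Types and field mappings are illustrative, not ClawMux's real adapters.

type AnthropicMessage = { role: "user" | "assistant"; content: string };
type OpenAIMessage = { role: "user" | "assistant" | "system"; content: string };

interface AnthropicRequest {
  system?: string;
  messages: AnthropicMessage[];
  max_tokens: number;
}

interface OpenAIRequest {
  model: string;
  messages: OpenAIMessage[];
  max_tokens?: number;
}

// Anthropic keeps the system prompt in a top-level field;
// OpenAI expects it as the first message with role "system".
function anthropicToOpenAI(req: AnthropicRequest, model: string): OpenAIRequest {
  const messages: OpenAIMessage[] = [];
  if (req.system) messages.push({ role: "system", content: req.system });
  for (const m of req.messages) messages.push({ role: m.role, content: m.content });
  return { model, messages, max_tokens: req.max_tokens };
}

const out = anthropicToOpenAI(
  { system: "Be concise.", messages: [{ role: "user", content: "hi" }], max_tokens: 256 },
  "openai/gpt-5.4"
);
```

The reverse direction (translating the OpenAI response back into Anthropic shape) follows the same pattern in mirror image.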

Provider

ClawMux registers as a single provider clawmux in OpenClaw with model auto. It accepts all API formats (Anthropic, OpenAI, Google, Ollama, Bedrock) and translates between them automatically.

openclaw provider clawmux

How It Works

OpenClaw → ClawMux Proxy (localhost:3456) → Upstream Provider(s)
              │
              ├── 1. Classify complexity (embedding model, ~4ms first run, <1ms cached)
              ├── 2. Select tier → LIGHT/MEDIUM/HEAVY
              ├── 3. Compress context if threshold exceeded
              ├── 4. Translate request format if cross-provider
              ├── 5. Forward to upstream with correct model
              └── 6. Translate response back to original format

Routing tiers map to the model IDs you configure. A local embedding model (Xenova/multilingual-e5-small) classifies the semantic complexity of each request using nearest-centroid classification (~8ms p50), supporting both Korean and English. Short queries are detected by a lightweight heuristic and routed directly to the LIGHT tier; no external API calls are needed for classification.

Low confidence fallback: When the classifier's confidence is low, the request is routed to MEDIUM tier. This prevents unreliable classifications from sending requests to an inappropriate tier — MEDIUM provides a safe cost/quality balance.
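Nearest-centroid classification with a confidence floor can be sketched as follows. The centroids, the 0.5 threshold, and the function names here are assumptions for illustration, not ClawMux's actual values:

```typescript
// Illustrative nearest-centroid tier classifier with low-confidence fallback.
// Centroids and the confidence floor are stand-ins, not ClawMux's internals.

type Tier = "LIGHT" | "MEDIUM" | "HEAVY";

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function classify(
  embedding: number[],
  centroids: Record<Tier, number[]>,
  minConfidence = 0.5 // assumed floor; below it, fall back to MEDIUM
): Tier {
  let best: Tier = "MEDIUM";
  let bestSim = -Infinity;
  for (const tier of Object.keys(centroids) as Tier[]) {
    const sim = cosine(embedding, centroids[tier]);
    if (sim > bestSim) { bestSim = sim; best = tier; }
  }
  return bestSim < minConfidence ? "MEDIUM" : best;
}

// Toy 2-d centroids; the real embedding space has hundreds of dimensions.
const centroids: Record<Tier, number[]> = {
  LIGHT: [1, 0],
  MEDIUM: [0.7, 0.7],
  HEAVY: [0, 1],
};
const tier = classify([0.9, 0.1], centroids); // closest to the LIGHT centroid
```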

Context compression runs in the background after each response. When the conversation approaches the configured threshold, ClawMux summarizes older messages before the next request goes out. This keeps costs down on long conversations without interrupting the flow.
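The threshold check described above could look roughly like this; the 4-characters-per-token estimate and function names are assumptions, since real token counting would use a tokenizer:

```typescript
// Sketch of the preemptive compression decision.
// estimateTokens is a crude stand-in; real counting would use a tokenizer.

function estimateTokens(messages: { content: string }[]): number {
  // Rough heuristic: ~4 characters per token.
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return Math.ceil(chars / 4);
}

function shouldCompress(
  messages: { content: string }[],
  contextWindow: number,
  threshold = 0.75 // matches the default in clawmux.json
): boolean {
  return estimateTokens(messages) >= contextWindow * threshold;
}

// A conversation at ~80% of a 1,000-token window triggers compression:
const msgs = [{ content: "x".repeat(3200) }]; // ≈ 800 estimated tokens
```

Because the check runs after each response rather than on the request path, summarization never adds latency to the next turn.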

Context Window Resolution

ClawMux resolves each model's context window using this priority chain:

  1. ~/.openclaw/clawmux.json routing.contextWindows — explicit per-model override
  2. openclaw.json models.providers[provider].models[].contextWindow — user config
  3. OpenClaw built-in catalog — pi-ai model database (812+ models)
  4. Default: 200,000 tokens

Compression threshold uses the minimum context window across all routing models, since compression happens before routing decides which model to use.
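The priority chain above amounts to a simple fallback lookup. A sketch with nullish coalescing; the shapes of the two config objects are assumptions inferred from the documented paths:

```typescript
// Sketch of the context-window resolution chain described above.
// Config object shapes are assumed from the documented lookup paths.

interface ClawmuxConfig {
  routing?: { contextWindows?: Record<string, number> };
}
type CatalogLookup = (modelId: string) => number | undefined;

const DEFAULT_CONTEXT_WINDOW = 200_000;

function resolveContextWindow(
  modelId: string,
  clawmux: ClawmuxConfig,
  openclawWindows: Record<string, number>, // from openclaw.json model entries
  catalog: CatalogLookup                   // built-in pi-ai catalog
): number {
  return (
    clawmux.routing?.contextWindows?.[modelId] ?? // 1. explicit override
    openclawWindows[modelId] ??                   // 2. user config
    catalog(modelId) ??                           // 3. built-in catalog
    DEFAULT_CONTEXT_WINDOW                        // 4. default
  );
}

// Compression uses the minimum window across all routing-tier models:
function minRoutingWindow(models: string[], resolve: (m: string) => number): number {
  return Math.min(...models.map(resolve));
}
```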

API Endpoints

| Method | Path | Description |
|---|---|---|
| GET | /health | Health check |
| GET | /stats | Cost savings statistics |
| POST | /v1/messages | Anthropic Messages |
| POST | /v1/chat/completions | OpenAI Chat Completions |
| POST | /v1/responses | OpenAI Responses |
| POST | /v1beta/models/* | Google Generative AI |
| POST | /api/chat | Ollama |
| POST | /model/*/converse-stream | Bedrock |

Development

bun run dev          # start with watch mode
bun test             # run all tests
bun run typecheck    # type check without emit

Tests are co-located with source files as *.test.ts.

Uninstall

Copy and paste this into your OpenClaw agent:

Uninstall ClawMux by following the instructions here:
curl -s https://raw.githubusercontent.com/HyeokjaeLee/ClawMux/refs/heads/main/docs/guide/uninstallation.md