# cc-glm

`@shunirr/cc-glm` v0.1.5

Claude Code proxy for routing requests between Anthropic API and z.ai GLM.
## Features

- Configurable model routing: route requests to different upstreams based on model name patterns with glob matching
- Model name rewriting: transparently rewrite model names (e.g., `claude-sonnet-*` → `GLM-4.7`)
- Thinking block transformation: convert z.ai thinking blocks to text blocks to avoid Anthropic signature validation issues
- Singleton proxy: one proxy instance shared across multiple Claude Code sessions
- Lifecycle management: proxy starts/stops automatically with Claude Code
- YAML configuration: config file with `${VAR:-default}` environment variable expansion
## Prerequisites

- Node.js >= 18
- Claude Code CLI installed and available in `PATH`
- z.ai API key (`ZAI_API_KEY` env var or config file) if routing to z.ai
Installation
npm install -g @shunirr/cc-glmOr use with npx:
npx @shunirr/cc-glmUsage
Use `cc-glm` as a drop-in replacement for `claude`:
```sh
# Start Claude Code through the proxy
cc-glm

# Pass arguments to Claude Code
cc-glm -c
cc-glm -p "PROMPT"
```

The proxy automatically:
- Starts if not already running (singleton)
- Sets `ANTHROPIC_BASE_URL` to route requests through the proxy
- Routes requests based on model name matching rules
- Stops when all Claude Code sessions have exited (after a grace period)
## Configuration

Create `~/.config/cc-glm/config.yml`:
```yaml
# Claude Code CLI command path (empty = auto-detect from PATH)
claude:
  path: ""

proxy:
  port: 8787
  host: "127.0.0.1"

upstream:
  # Anthropic API (OAuth, forwards authorization header as-is)
  anthropic:
    url: "https://api.anthropic.com"
  # z.ai GLM API
  zai:
    url: "https://api.z.ai/api/anthropic"
    apiKey: "YOUR_API_KEY" # Or falls back to ZAI_API_KEY env var

lifecycle:
  stopGraceSeconds: 8
  startWaitSeconds: 8
  stateDir: "${TMPDIR}/claude-code-proxy"

logging:
  level: "info" # debug, info, warn, error

# Rules are evaluated top-to-bottom, first match wins
routing:
  rules:
    - match: "claude-sonnet-*"
      upstream: zai
      model: "GLM-4.7"
    - match: "claude-haiku-*"
      upstream: zai
      model: "GLM-4.7"
    - match: "glm-*"
      upstream: zai
  default: anthropic
```

## Configuration Options
### claude.path

Path to the Claude Code CLI executable. If empty or not specified, cc-glm auto-detects the command from your `PATH` using `which` (Unix/macOS) or `where` (Windows).

```yaml
claude:
  path: "/usr/local/bin/claude" # Custom path
  # or
  path: "" # Auto-detect (default)
```

Without a config file, all requests are routed to the Anthropic API (OAuth).
## Environment Variables

- `ZAI_API_KEY`: z.ai API key (used when the config `apiKey` is empty)
- `ANTHROPIC_BASE_URL`: automatically set by cc-glm to point to the proxy
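Config values such as `stateDir: "${TMPDIR}/claude-code-proxy"` support shell-style `${VAR}` and `${VAR:-default}` expansion. A minimal sketch of that expansion (an illustrative TypeScript fragment, not the actual cc-glm parser, which may support additional syntax):

```typescript
// Expand ${VAR} and ${VAR:-default} references in a config string.
// Unset variables with no default expand to the empty string.
function expandEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(
    /\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}/g,
    (_, name, fallback) => env[name] ?? fallback ?? "",
  );
}
```

For example, with `TMPDIR=/tmp` set, `expandEnv("${TMPDIR}/claude-code-proxy", process.env)` yields `/tmp/claude-code-proxy`, while `${PORT:-8787}` falls back to `8787` when `PORT` is unset.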
## Model Routing

Routing rules use glob patterns (`*` wildcard) and are evaluated top-to-bottom. The first matching rule wins. Each rule can optionally rewrite the model name sent to the upstream.
| Rule Pattern | Upstream | Model Sent |
|---|---|---|
| `claude-sonnet-*` | z.ai | `GLM-4.7` |
| `claude-haiku-*` | z.ai | `GLM-4.7` |
| `glm-*` | z.ai | (original) |
| (no match) | Anthropic | (original) |
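The first-match-wins glob routing described above can be sketched roughly as follows. This is a simplified illustration, not the actual cc-glm source; the rule shape is assumed from the config example:

```typescript
// Minimal first-match-wins glob router (illustrative sketch).
type Rule = { match: string; upstream: string; model?: string };

const rules: Rule[] = [
  { match: "claude-sonnet-*", upstream: "zai", model: "GLM-4.7" },
  { match: "claude-haiku-*", upstream: "zai", model: "GLM-4.7" },
  { match: "glm-*", upstream: "zai" },
];
const defaultUpstream = "anthropic";

// Convert a glob pattern (only `*` supported) to an anchored RegExp.
function globToRegExp(pattern: string): RegExp {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

// Evaluate rules top-to-bottom; first match wins, else fall back to the default.
function route(model: string): { upstream: string; model: string } {
  for (const rule of rules) {
    if (globToRegExp(rule.match).test(model)) {
      return { upstream: rule.upstream, model: rule.model ?? model };
    }
  }
  return { upstream: defaultUpstream, model };
}
```

Under these rules, `route("claude-sonnet-4-20250514")` rewrites the model to `GLM-4.7` and targets z.ai, while an unmatched name like `claude-opus-4` falls through to Anthropic unchanged.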
## How It Works

1. `cc-glm` starts a local HTTP proxy at `127.0.0.1:8787` (singleton via atomic lock directory)
2. It sets `ANTHROPIC_BASE_URL` so Claude Code sends API requests through the proxy
3. The proxy extracts the model name from each request body
4. Routing rules determine the upstream (Anthropic or z.ai) and optional model rewrite
5. Auth headers are adjusted per upstream:
   - Anthropic: forwards the original OAuth `authorization` header
   - z.ai: replaces `authorization` with `x-api-key`
6. z.ai responses have their thinking blocks sanitized (invalid signatures removed); when later sent to Anthropic, z.ai-origin thinking blocks are converted to text blocks to avoid signature validation errors
7. After Claude Code exits, the proxy waits a grace period (default 8s) and stops if no other sessions remain
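The per-upstream auth header handling could look like this minimal sketch (a hypothetical helper, not the actual implementation):

```typescript
// Adjust auth headers per upstream (illustrative sketch).
// For z.ai, the OAuth bearer token is dropped and the API key is
// sent via x-api-key; for Anthropic, headers pass through unchanged.
function adjustHeaders(
  headers: Record<string, string>,
  upstream: "anthropic" | "zai",
  zaiApiKey: string,
): Record<string, string> {
  const out = { ...headers };
  if (upstream === "zai") {
    delete out["authorization"];
    out["x-api-key"] = zaiApiKey;
  }
  return out;
}
```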
## Thinking Block Transformation

### Why Transformation Is Needed

The Anthropic API validates thinking block signatures: each thinking block includes a cryptographic signature proving it was generated by Anthropic. z.ai thinking blocks lack valid Anthropic signatures, so the proxy must handle them differently to avoid API rejection.
### Response Sanitization (z.ai → Claude Code)

When the proxy receives a response from z.ai, it sanitizes thinking blocks by removing invalid signature fields and normalizing the format. The signature store records valid Anthropic signatures so the proxy can distinguish Anthropic-origin thinking blocks from z.ai-origin ones.
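The sanitization step might be sketched like this (an illustrative fragment; the real proxy also normalizes the block format):

```typescript
// Strip invalid signature fields from thinking blocks in a z.ai response.
type ContentBlock = { type: string; thinking?: string; signature?: string; text?: string };

function sanitizeThinkingBlocks(blocks: ContentBlock[]): ContentBlock[] {
  return blocks.map((block) => {
    if (block.type !== "thinking") return block;
    // Drop the signature field, which is not a valid Anthropic signature.
    const { signature, ...rest } = block;
    return rest;
  });
}
```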
### Request Transformation (Claude Code → Anthropic)

When sending a request to Anthropic that contains thinking blocks from the conversation history, the proxy checks each block's signature against the signature store:

| Origin | Signature | Action |
|--------|-----------|--------|
| Anthropic-generated | Recorded in signature store | Passed through as-is |
| z.ai-generated | Not in signature store | Converted to text block |
The conversion wraps the thinking content in XML tags.

Before (z.ai thinking block):

```json
{
  "type": "thinking",
  "thinking": "This is my reasoning process...",
  "signature": "invalid_signature_xyz"
}
```

After (converted to text block):

```json
{
  "type": "text",
  "text": "<previous-glm-reasoning>\nThis is my reasoning process...\n</previous-glm-reasoning>"
}
```

This preserves the reasoning content while avoiding Anthropic's signature validation. The `<previous-glm-reasoning>` tags clearly mark the content as historical reasoning from z.ai.
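The transformation mirroring the before/after example above can be sketched as follows; `knownSignatures` stands in for the proxy's signature store, and the helper name is hypothetical:

```typescript
// Convert a z.ai-origin thinking block into a text block wrapped in XML tags.
type Block =
  | { type: "thinking"; thinking: string; signature?: string }
  | { type: "text"; text: string };

function transformForAnthropic(block: Block, knownSignatures: Set<string>): Block {
  if (block.type !== "thinking") return block;
  // Anthropic-origin blocks (signature recorded in the store) pass through as-is.
  if (block.signature && knownSignatures.has(block.signature)) return block;
  // z.ai-origin blocks become text, preserving the reasoning content.
  return {
    type: "text",
    text: `<previous-glm-reasoning>\n${block.thinking}\n</previous-glm-reasoning>`,
  };
}
```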
## Signature Store

The proxy maintains an in-memory signature store to track valid Anthropic signatures. Configure it via `signature_store` in the config:

```yaml
signature_store:
  maxSize: 1000 # Maximum signatures to store (default: 1000, max: 100000)
```
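A bounded in-memory store like the following could back that setting. This is an illustrative sketch: the class name and the evict-oldest policy are assumptions, not taken from the source:

```typescript
// Bounded in-memory signature store (illustrative sketch).
// When full, the oldest recorded signature is evicted; a JS Set
// iterates in insertion order, so the first value is the oldest.
class SignatureStore {
  private signatures = new Set<string>();

  constructor(private maxSize: number = 1000) {}

  record(signature: string): void {
    if (this.signatures.size >= this.maxSize) {
      const oldest = this.signatures.values().next().value;
      if (oldest !== undefined) this.signatures.delete(oldest);
    }
    this.signatures.add(signature);
  }

  has(signature: string): boolean {
    return this.signatures.has(signature);
  }
}
```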
## Development

```sh
npm install
npm run build    # Build with tsup
npm run dev      # Build in watch mode
npm run lint     # Type check (tsc --noEmit)
npm test         # Run tests (watch mode)
npm run test:run # Run tests once
```

## License
MIT
