benchmark-proxy
v0.1.2
LLM Benchmark Transparent Proxy — intercept LLM API requests and collect token usage metrics
Benchmark Proxy
Transparent LLM API proxy that intercepts requests between AI assistant platforms and LLM providers, collecting token usage metrics for benchmark analysis.
Why
Enterprise AI assistant platforms trigger multiple LLM calls per user request (tool-call loops, context injection, system prompts), making true token consumption invisible. Existing observability tools (Langfuse, Arize AI) require SDK integration — code-level instrumentation that couples your platform to the monitoring tool.
Benchmark Proxy takes a different approach: zero-intrusion collection. Change one environment variable, restart the platform, and usage data flows automatically. No SDK, no code changes.
Architecture
┌──────────────┐ ┌────────────────────────────────────┐ ┌─────────────┐
│ AI Platform │───>│ Proxy Layer (:9200) │───>│LLM Provider │
│ (EnClaws / │ │ │ │ (Any HTTPS │
│ OpenClaw) │<───│ /proxy/{domain}/{path} │<───│ endpoint) │
│ │ │ → https://{domain}/{path} │ └─────────────┘
└──────────────┘ └──────────────────┬─────────────────┘
│
┌──────────────────┴─────────────────┐
│ Benchmark API + Storage │
│ POST /benchmark/session/start │
│ POST /benchmark/session/end │
│ GET /benchmark/turns │
│ GET /benchmark/health │
│ SQLite (WAL) │
└────────────────────────────────────┘

Quick Start
Install
# Clone and build locally
git clone https://github.com/hashSTACS-Global/benchmark-proxy.git
cd benchmark-proxy
pnpm install
pnpm build

Start the proxy
node --env-file=.env.dev dist/index.js start --port 9200

Point your AI platform to the proxy
The proxy intercepts LLM traffic by replacing the original API base URL with the proxy address. The routing rule is:
Original: https://{domain}/{path}
Proxied: http://{proxy-host}/proxy/{domain}/{path}

Only the base_url (or baseUrl) needs to change; API keys, request bodies, and all other configuration remain untouched. The proxy forwards each request transparently to the original upstream.
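The rewrite rule is a pure URL transformation, so it can be sketched as a small helper. `toProxyUrl` below is a hypothetical illustration for deriving proxied base URLs, not a function exported by benchmark-proxy:

```typescript
// Rewrite an upstream HTTPS base URL into its proxied form:
//   https://{domain}/{path} -> http://{proxy-host}/proxy/{domain}/{path}
// toProxyUrl is a hypothetical helper for illustration only.
function toProxyUrl(originalBaseUrl: string, proxyHost: string): string {
  const url = new URL(originalBaseUrl);
  if (url.protocol !== "https:") {
    throw new Error(`expected an https upstream, got ${url.protocol}`);
  }
  // Keep the path (and any query) unchanged; replace only scheme + host.
  return `http://${proxyHost}/proxy/${url.host}${url.pathname}${url.search}`;
}

console.log(toProxyUrl("https://api.deepseek.com/v1", "localhost:9200"));
// -> http://localhost:9200/proxy/api.deepseek.com/v1
```

The same transformation applies whether the base URL lives in a JSON config file or a database column.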
OpenClaw — Update the model's baseUrl in openclaw.json:
{
"models": [
{
"name": "qwen-max",
// Before: "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1"
"baseUrl": "http://localhost:9200/proxy/dashscope.aliyuncs.com/compatible-mode/v1",
"apiKey": "sk-xxx"
},
{
"name": "deepseek-chat",
// Before: "baseUrl": "https://api.deepseek.com/v1"
"baseUrl": "http://localhost:9200/proxy/api.deepseek.com/v1",
"apiKey": "sk-xxx"
}
]
}

EnClaws — Update the base_url field in the tenant_models table:
-- Before
-- base_url = 'https://coding.dashscope.aliyuncs.com/v1'
-- After: replace the scheme + domain with the proxy address, keep the path unchanged
UPDATE tenant_models
SET base_url = 'http://localhost:9200/proxy/coding.dashscope.aliyuncs.com/v1'
WHERE base_url = 'https://coding.dashscope.aliyuncs.com/v1';

Any HTTPS LLM endpoint can be proxied this way — no code changes or proxy-side configuration required to add new providers.
Collect benchmark data
Benchmark Proxy exposes a standard REST API for session management and data collection. You can integrate it directly via API calls, or use benchmark-cli for automated test execution and report generation.
Option A: Use benchmark-cli (recommended)
benchmark-cli automates the full workflow — send test cases to AI platforms, manage proxy sessions, collect metrics, and generate reports.
# In benchmark-cli's .env
BENCH_COLLECTOR_ENDPOINT=http://localhost:9200
# Run benchmark
bench run cases/my-test.json --tag baseline
bench report --latest --format html

benchmark-cli will automatically call session/start before each test case, associate LLM calls during execution, call session/end when done, and retrieve turn-level metrics for reporting.
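The lifecycle benchmark-cli automates can be sketched against the proxy's four /benchmark routes. This is a hedged sketch: `runSession`, the injected `fetchFn`, and the `work` callback are hypothetical names; only the routes themselves come from the proxy's API:

```typescript
// Minimal sketch of the session lifecycle benchmark-cli automates.
// runSession and fetchFn are hypothetical; the /benchmark routes are real.
type Fetch = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<{ json(): Promise<any> }>;

async function runSession(
  fetchFn: Fetch,
  base: string,
  work: () => Promise<void>,
): Promise<any> {
  // 1. Start a session; it becomes the global active session.
  const started = await fetchFn(`${base}/benchmark/session/start`, { method: "POST" });
  const { sessionId } = await started.json();
  try {
    // 2. Trigger the AI platform; proxied LLM calls are associated automatically.
    await work();
  } finally {
    // 3. Always end the session so later calls are not attributed to it.
    await fetchFn(`${base}/benchmark/session/end`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessionId }),
    });
  }
  // 4. Retrieve turn-level aggregated metrics.
  const turns = await fetchFn(`${base}/benchmark/turns?sessionId=${sessionId}`);
  return turns.json();
}
```

Injecting `fetchFn` keeps the sketch testable with a stub; in production you would pass the global fetch.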
Option B: Direct API integration
For custom integrations or ad-hoc debugging, call the Benchmark API directly:
# 1. Start a session (becomes the global active session)
curl -X POST http://localhost:9200/benchmark/session/start
# Returns: { "sessionId": "abc-123", "active": true }
# 2. All proxied LLM calls are now automatically associated with the active session.
# Trigger your AI platform to process a request — the proxy records every LLM call.
# 3. End the session
curl -X POST http://localhost:9200/benchmark/session/end \
-H "Content-Type: application/json" \
-d '{"sessionId": "abc-123"}'
# 4. Query turn-level aggregated results
curl "http://localhost:9200/benchmark/turns?sessionId=abc-123"

Proxy Routing
The proxy uses flexible URL-based routing — no hardcoded provider mapping:
/proxy/{domain}/{path} → https://{domain}/{path}

| Example Proxy URL | Upstream |
|-------------------|----------|
| /proxy/api.anthropic.com/v1/messages | https://api.anthropic.com/v1/messages |
| /proxy/api.openai.com/v1/chat/completions | https://api.openai.com/v1/chat/completions |
| /proxy/openrouter.ai/api/v1/chat/completions | https://openrouter.ai/api/v1/chat/completions |
| /proxy/api.deepseek.com/v1/chat/completions | https://api.deepseek.com/v1/chat/completions |
| /proxy/coding.dashscope.aliyuncs.com/v1/... | https://coding.dashscope.aliyuncs.com/v1/... |
Session Management
- POST /benchmark/session/start — Creates a session and sets it as the global active session
- POST /benchmark/session/end — Ends the session and clears the global active session
- LLM calls are automatically associated with the active session (no x-benchmark-session header needed)
- LLM calls are recorded even without an active session (session_id will be null)
- Turn detection: Automatically groups LLM calls into turns by tracking user message changes
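The turn-detection rule above (a new turn whenever the tracked user message changes) can be sketched as a pure grouping function. `CallRecord`, `Turn`, and `groupIntoTurns` are hypothetical names for illustration, not the proxy's internal types:

```typescript
// Hypothetical sketch of turn detection: consecutive LLM calls that share
// the same latest user message belong to one turn.
interface CallRecord {
  userMessage: string; // latest user message seen in the request
  totalTokens: number; // token usage reported by the provider
}

interface Turn {
  turn: number;
  userMessage: string;
  calls: number;
  totalTokens: number;
}

function groupIntoTurns(records: CallRecord[]): Turn[] {
  const turns: Turn[] = [];
  for (const rec of records) {
    const last = turns[turns.length - 1];
    if (last && last.userMessage === rec.userMessage) {
      // Same user message: another call inside the current turn
      // (e.g. a tool-call loop or context re-injection).
      last.calls += 1;
      last.totalTokens += rec.totalTokens;
    } else {
      // User message changed: start a new turn.
      turns.push({
        turn: turns.length + 1,
        userMessage: rec.userMessage,
        calls: 1,
        totalTokens: rec.totalTokens,
      });
    }
  }
  return turns;
}
```

Aggregating per turn rather than per call is what makes the hidden multi-call cost of a single user request visible.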
Development
pnpm install
pnpm dev # Start with tsx (hot reload)
pnpm build # Compile TypeScript
pnpm start   # Run compiled output

License
MIT
