@opencompress/openclaw
v3.0.13
OpenCompress for OpenClaw — save tokens and sharpen quality on any LLM
We don't sell tokens. We don't resell API access.
You use your own keys, your own models, your own account, billed directly by Anthropic, OpenAI, or whichever provider you choose. We compress the traffic so you're charged less and your agent thinks more clearly.
Compression doesn't just save money. It removes the noise. Leaner prompts mean the model focuses on what matters. Shorter context, better answers, better code.
No vendor lock-in. Uninstall anytime. Everything goes back to exactly how it was.
How it works
┌──────────────────────────────┐
│ Your OpenClaw Agent │
│ │
│ model: opencompress/auto │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ Local Proxy (:8401) │
│ │
│ reads your provider key │
│ from OpenClaw config │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ opencompress.ai │
│ │
│ compress → forward │
│ your key in header │
│ never stored │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────┐
│ Your LLM Provider │
│ (Anthropic / OpenAI) │
│ │
│ sees fewer tokens │
│ charges you less │
└──────────────────────────────┘

Install
openclaw plugins install @opencompress/openclaw
openclaw onboard opencompress
openclaw gateway restart

Select opencompress/auto as your model. Done.
Models
Every provider you already have gets a compressed mirror:
opencompress/auto → your default, compressed
opencompress/anthropic/claude-sonnet-4 → Claude Sonnet, compressed
opencompress/anthropic/claude-opus-4-6 → Claude Opus, compressed
opencompress/openai/gpt-5.4 → GPT-5.4, compressed

Switch back to the original model anytime to disable compression.
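The mirror naming is mechanical: prefix the provider/model id with opencompress/. A sketch of that mapping (the helper names are ours for illustration, not part of the plugin's API):

```typescript
// Map a provider model id to its compressed mirror, and back.
// Hypothetical helpers; the plugin does this routing internally.
const PREFIX = "opencompress/";

function toMirror(modelId: string): string {
  // "anthropic/claude-sonnet-4" -> "opencompress/anthropic/claude-sonnet-4"
  return modelId.startsWith(PREFIX) ? modelId : PREFIX + modelId;
}

function fromMirror(modelId: string): string {
  // Dropping the prefix returns you to the original, uncompressed model.
  return modelId.startsWith(PREFIX) ? modelId.slice(PREFIX.length) : modelId;
}
```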
Commands
/compress-stats view savings, balance, token metrics
/compress show status and available models

What we believe
Your keys are yours.
We read your API key from OpenClaw's config at runtime, pass it in a per-request header, and discard it immediately. We never store, log, or cache your provider credentials. Ever.
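That flow amounts to a tiny pass-through: the key is attached to one outgoing request's headers and is referenced nowhere else. A minimal sketch, with names of our own choosing and assuming Anthropic's x-api-key header convention:

```typescript
// Sketch: build one outbound request with the caller's key attached,
// without retaining the key in any longer-lived structure.
type OutboundRequest = {
  url: string;
  headers: Record<string, string>;
  body: string;
};

function buildForwardRequest(providerUrl: string, apiKey: string, body: string): OutboundRequest {
  return {
    url: providerUrl,
    // The key lives only in this per-request header object...
    headers: { "x-api-key": apiKey, "content-type": "application/json" },
    body,
  };
}
// ...and once the request completes, nothing references it:
// no store, no log, no cache.
```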
Your prompts are yours.
Prompts are compressed in-memory and forwarded. Nothing is stored, logged, or used for training. The only thing we record is token counts for billing, original vs compressed. That's it.
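That billing record reduces to two integers per request. A hypothetical shape (field names are illustrative, not the service's actual schema):

```typescript
// The only per-request data retained: token counts, for billing.
interface UsageRecord {
  originalTokens: number;
  compressedTokens: number;
}

// Savings is simply the difference between the two counts.
function tokensSaved(u: UsageRecord): number {
  return u.originalTokens - u.compressedTokens;
}
```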
Zero lock-in.
We don't replace your provider. We don't wrap your billing. If you uninstall, your agents keep working exactly as before. Same keys, same models, same everything.
Failure is invisible.
If our service goes down, your requests fall back directly to your provider. No errors, no downtime, no interruption. You just temporarily lose the compression savings.
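That fallback behavior can be sketched as a generic try-compressed-else-direct wrapper (the names are ours; the plugin's internal implementation may differ):

```typescript
// Sketch: try the compression path first; on any failure, send the
// same request straight to the provider, so the caller never sees an error.
async function withFallback<T>(
  compressed: () => Promise<T>,
  direct: () => Promise<T>,
): Promise<T> {
  try {
    return await compressed();
  } catch {
    // Compression service unreachable: lose the savings, keep the request.
    return await direct();
  }
}
```

Usage: `withFallback(() => viaOpenCompress(req), () => viaProvider(req))` — if the compression path rejects, the request still completes through the provider.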
Supported providers
Anthropic → Claude Sonnet, Opus, Haiku (anthropic-messages)
OpenAI → GPT-5.x, o-series (openai-completions)
Google → Gemini (openai-compat)
OpenRouter → 400+ models (openai-completions)
Any OpenAI-compatible endpoint (openai-completions)

Pricing
Free credit on signup. No credit card. Pay only for the tokens you save.
