@opencompress/opencompress · v2.0.1
# OpenCompress Plugin for OpenClaw
Compress every LLM call automatically — keep your existing provider, same models, same quality, 40-70% cheaper.
## How it works

```
Your Agent → OpenCompress (compress) → Your LLM Provider (OpenAI / Anthropic / OpenRouter / Google)
```

You already pay for an LLM provider. OpenCompress adds a compression layer on top — your prompts are compressed through a 5-layer pipeline before reaching your provider. You pay your provider at their normal rates, and we charge only 20% of what you save.
- 53% average input token reduction
- 62% latency improvement
- 96% quality preservation (SQuALITY benchmark)
## Install

```
openclaw plugins install @opencompress/opencompress
```

## Setup
### 1. Connect your LLM key

After installing, run the onboarding wizard and connect your existing provider key:

```
openclaw onboard opencompress
```

The wizard auto-provisions your account ($1.00 free credit) and asks for your upstream LLM key. Supported providers:
| Key prefix | Provider |
|---|---|
| `sk-proj-` or `sk-` | OpenAI |
| `sk-ant-` | Anthropic |
| `sk-or-` | OpenRouter |
| `AIza...` | Google AI |
Once connected, every LLM call is compressed automatically — you pay your provider directly, we only charge the compression fee.
Don't have an LLM key? No problem — we can route through OpenRouter for you. Just skip the key step during onboard.
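As the table above suggests, the provider can be inferred from the key's prefix alone. Here is a minimal sketch of that idea in Python — the function name and matching order are illustrative assumptions, not OpenCompress internals:

```python
# Hypothetical sketch: infer the upstream provider from an API key's
# prefix, mirroring the "Key prefix | Provider" table above.
# NOTE: detect_provider is an illustrative helper, not part of the plugin.

def detect_provider(key: str) -> str:
    """Map an API key to its provider by prefix (most specific first)."""
    prefixes = [
        ("sk-ant-", "Anthropic"),
        ("sk-or-", "OpenRouter"),
        ("sk-proj-", "OpenAI"),
        ("sk-", "OpenAI"),      # generic OpenAI keys; checked last among sk-*
        ("AIza", "Google AI"),
    ]
    for prefix, provider in prefixes:
        if key.startswith(prefix):
            return provider
    return "unknown"

print(detect_provider("sk-ant-abc123"))  # Anthropic
print(detect_provider("AIzaSyExample"))  # Google AI
```

Note that the more specific `sk-ant-` and `sk-or-` prefixes must be checked before the generic `sk-`, or every key would be classified as OpenAI.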
### 2. Use it

Switch to the OpenCompress provider:

```
/model opencompress/gpt-4o-mini
```

That's it. Same model IDs as your current provider — no config changes needed.
### 3. Connect or switch your key anytime

```
/compress-byok sk-proj-your-openai-key    # Connect OpenAI
/compress-byok sk-ant-your-anthropic-key  # Connect Anthropic
/compress-byok sk-or-your-openrouter-key  # Connect OpenRouter
/compress-byok off                        # Switch back to router mode
```

## Commands
| Command | Description |
|---------|-------------|
| `/compress-stats` | Show compression savings (calls, tokens saved, cost saved) |
| `/compress-byok <key>` | Connect or switch your LLM provider key |
| `/compress-byok off` | Disconnect your key (switch to router fallback) |
## Supported models (20)
Works with all major providers — use whichever models you already use:
| Provider | Models |
|----------|--------|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, o3, o4-mini |
| Anthropic | claude-sonnet-4-6, claude-opus-4-6, claude-haiku-4-5-20251001 |
| Google | gemini-2.5-pro, gemini-2.5-flash, google/gemini-2.5-pro-preview |
| DeepSeek | deepseek/deepseek-chat-v3-0324, deepseek/deepseek-reasoner |
| Meta | meta-llama/llama-4-maverick, meta-llama/llama-4-scout |
| Qwen | qwen/qwen3-235b-a22b, qwen/qwen3-32b |
| Mistral | mistralai/mistral-large-2411 |
## Pricing

You pay your LLM provider directly at their normal rates. OpenCompress charges 20% of the token cost you save — if compression saves you $1.00 in tokens, you pay us $0.20, for a net saving of $0.80.
$1.00 free credit on sign-up covers ~50-100 compressed calls.
## Configuration

| Key | Default | Description |
|-----|---------|-------------|
| `apiKey` | — | Your `sk-occ-...` key (set during onboard) |
| `baseUrl` | `https://www.opencompress.ai/api` | Custom API endpoint |
## Uninstall

```
openclaw plugins uninstall opencompress
```

## License
MIT
