# @cognisos/liminal (v2.5.1)
Transparent LLM context compression proxy. Liminal sits between your AI coding tools and the LLM API, compressing context to save tokens, reduce costs, and extend effective context windows — all without changing your workflow.
## Install

```sh
npm install -g @cognisos/liminal
```

## Quick Start

```sh
liminal init   # Guided setup — auth, tool detection, config
liminal start  # Start the compression proxy
liminal        # Launch the TUI dashboard
```

## Features
- Zero-config compression — Routes through Claude Code, Codex, Cursor, and OpenAI-compatible tools automatically
- TUI dashboard — Run `liminal` to launch a full-screen live dashboard with stats, config, and logs
- Setup wizard — 5-step guided setup with verification and error recovery
- Stats tracking — Session and all-time metrics with token savings, context extension, and cost estimates
- Cursor hooks — Transparent file compression via `preToolUse` hooks (no sudo, no TLS hacks)
- Multi-session — Concurrent session management with circuit breakers and graceful degradation
- Zero UI dependencies — All terminal rendering uses raw ANSI codes
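For context, "raw ANSI codes" means the dashboard writes terminal escape sequences directly instead of depending on a rendering library. A minimal illustration of the technique (not Liminal's actual code):

```shell
# Render a bold header and a green status line using raw ANSI escape
# sequences -- the same zero-dependency approach the TUI relies on.
printf '\033[1m%s\033[0m\n' 'Session stats'     # bold
printf '\033[32m%s\033[0m\n' 'compression: on'  # green
```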
## Commands

| Command | Description |
|---------|-------------|
| `liminal` | Launch TUI dashboard |
| `liminal init` | Guided setup wizard |
| `liminal start [-d] [--port PORT]` | Start the compression proxy |
| `liminal stop` | Stop the proxy |
| `liminal status` | Quick health check |
| `liminal stats [--json]` | Compression metrics & savings |
| `liminal config [--set k=v] [--get k]` | View or edit configuration |
| `liminal logs [--follow] [--lines N]` | View proxy logs |
| `liminal setup cursor [--teardown]` | Install Cursor compression hooks |
| `liminal login` | Log in or create an account |
| `liminal logout` | Log out |
| `liminal trust-ca` | Install CA cert (TLS intercept) |
| `liminal untrust-ca` | Remove CA cert |
| `liminal uninstall` | Remove all Liminal configuration |

## TUI Dashboard
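A typical session might look like the following, assuming the CLI is installed; the flags are those listed above, and the comments on `-d` reflect an assumption (detached/background mode) rather than documented behavior:

```sh
liminal start -d --port 3141   # start the proxy (presumably detached)
liminal status                 # quick health check
liminal stats --json           # machine-readable savings metrics
liminal logs --follow          # tail the daemon logs
liminal stop
```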
Run `liminal` with no arguments to launch the interactive dashboard:
- Dashboard — Live daemon health, tool routing status, session metrics, recent activity
- Stats — Token savings, cost impact, context extension (session + all-time)
- Config — Current configuration at a glance
- Logs — Colorized live tail of daemon logs
Navigate with arrow keys or `Tab`. Press `q` to exit.
## How It Works

1. Proxy — Liminal runs a local HTTP proxy (default port 3141)
2. Intercept — Your AI tool sends API requests through the proxy
3. Compress — RSC (Recursive Semiotic Computation) normalizes and compresses the context
4. Forward — The compressed request goes to the upstream LLM API
5. Learn — Patterns are learned over time to improve compression
Supported protocols: Anthropic Messages API, OpenAI Chat Completions, OpenAI Responses API.
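As a sketch of what the intercept step amounts to: tools that honor a base-URL override can be pointed at the local proxy instead of the vendor endpoint, while others go through TLS interception (`liminal trust-ca`). `liminal init` wires this up automatically; the snippet below is an illustration only, and the environment variable and mechanism are assumptions, not Liminal's documented interface:

```sh
# Hypothetical manual routing for an OpenAI-compatible client:
# send requests to the local Liminal proxy on its default port.
export OPENAI_BASE_URL="http://localhost:3141"
```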
## Configuration

Config is stored at `~/.liminal/config.json`. Key settings:
| Key | Default | Description |
|-----|---------|-------------|
| port | 3141 | Proxy listen port |
| compressionThreshold | 100 | Min tokens to compress |
| learnFromResponses | true | Learn patterns from LLM responses |
| latencyBudgetMs | 10000 | Max compression time before fallback |
| enabled | true | Global compression toggle |
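Putting the defaults above together, `~/.liminal/config.json` would look something like this; the flat shape is a sketch based on the table, and Liminal may nest or add keys:

```json
{
  "port": 3141,
  "compressionThreshold": 100,
  "learnFromResponses": true,
  "latencyBudgetMs": 10000,
  "enabled": true
}
```

Individual keys can also be changed without editing the file, e.g. `liminal config --set port=3142`.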
## Requirements

- Node.js >= 18.0.0
- A Cognisos account (created during `liminal init`)
## License
MIT
