# @langcost/db

v0.1.3 · Database schema and repositories for langcost
## Why LangCost?
You run OpenClaw and your LLM bill keeps climbing — but you can't tell why.
The provider dashboard shows total tokens. It doesn't show that your agent looped 12 times on the same tool call, or that 30% of output tokens were unnecessarily verbose, or that prompt caching wasn't working.
LangCost reads your session logs and tells you exactly what's wasting money:
```
Scanned 12 sessions from openclaw
├── 12 traces, 1,812 spans, 1,948 messages
├── Total cost: $73.24
├── Estimated waste: $8.60 (11.7%)
└── Top waste: high_output (72.7%), tool_failure_waste (26.7%), low_cache (0.6%)
```

No API keys. No cloud. Everything runs locally. Your data never leaves your machine.
## Quick Start
Three commands. That's it.
```sh
# Install the CLI + the OpenClaw adapter
npm install -g langcost @langcost/adapter-openclaw
# or: bun add -g langcost @langcost/adapter-openclaw

# Scan your sessions
langcost scan --source openclaw

# Open the dashboard
langcost dashboard
```

**Why two packages?**
`langcost` is the core CLI — analysis engine, dashboard, and reports. Adapters are plugins that read data from specific sources, so you install only the adapters you need. Right now OpenClaw is the only adapter — more are coming.
LangCost auto-detects your OpenClaw installation at `~/.openclaw`, ingests your sessions, runs waste analysis, and serves a local dashboard at `http://localhost:3737`.
```sh
# Point to a custom OpenClaw directory
langcost scan --source openclaw --path /path/to/openclaw

# Analyze a single session file
langcost scan --source openclaw --file /path/to/session.jsonl

# Scan older sessions (default is last 30 days)
langcost scan --source openclaw --since 90d

# Force re-analysis of everything
langcost scan --source openclaw --force
```

## Features
### 🔍 Waste Detection
Six rules that automatically find wasted spend in every session:
| | Rule | What it finds |
|:---:|------|-------------|
| 🔴 | Tool Failures | Failed tool calls that burned tokens for nothing — bash: command not found, 12 calls failed, $1.65 wasted |
| 🟡 | Agent Loops | Agent stuck calling the same tools in a cycle — read → bash → read → bash repeated 8 times |
| 🟡 | Retry Patterns | User re-prompting because the agent failed — 3 similar messages in a row, agent struggling |
| 🟠 | High Output | Spans with output 3x+ the session average — one response used 4,200 tokens when peers averaged 380 |
| 🟢 | Low Cache | Prompt caching disabled or underused — paying full input price on every call |
| 🔵 | Model Insight | Flags expensive model usage — 100% Opus usage, helps you decide when cheaper models suffice |
Every finding includes the dollar amount wasted and a specific recommendation to fix it.
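For intuition, a rule in this style can be sketched as a pure function over spans. Everything below (the `Span` shape, the 3x threshold, the waste estimate) is a simplified assumption for illustration, not LangCost's actual implementation:

```typescript
// Sketch of a "high output" waste rule over a simplified Span shape.
// LangCost's real rules live in packages/analyzers and may use
// different types and thresholds.
interface Span {
  id: string;
  outputTokens: number;
  outputCostUsd: number;
}

interface Finding {
  spanId: string;
  rule: "high_output";
  wastedUsd: number;
  recommendation: string;
}

function detectHighOutput(spans: Span[], threshold = 3): Finding[] {
  if (spans.length === 0) return [];
  const avg =
    spans.reduce((sum, s) => sum + s.outputTokens, 0) / spans.length;
  return spans
    .filter((s) => s.outputTokens > threshold * avg)
    .map((s) => ({
      spanId: s.id,
      rule: "high_output" as const,
      // Rough estimate: the cost of the tokens beyond the session average.
      wastedUsd: s.outputCostUsd * (1 - avg / s.outputTokens),
      recommendation:
        "Constrain the response (max_tokens, terser system prompt).",
    }));
}
```

A real rule would also need session context (model, cache state) to price the waste precisely.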
### 🔬 Trace Explorer
Expand any session to see the full execution timeline — every LLM call and tool call in order:
```
#1 LLM opus-4-5   in:2.8K  out:141  $0.017  ok
   ├── read  README.md    ok
   └── read  src/main.ts  ok
#2 LLM opus-4-5   in:5.2K  out:380  $0.083  ok
   ├── bash  ls -la       ok
   ├── write src/fix.ts   ok
   └── bash  bun test     ✗ error
#3 LLM opus-4-5   in:8.1K  out:520  $0.130  ok
   └── edit  src/fix.ts   ok
```

Read exactly what the agent did, which tools it called, what failed, and what each step cost.
### 📊 Dashboard
A local web UI at localhost:3737:
- Trace table — all sessions with cost, waste, status. Sortable and filterable.
- Expandable rows — click to see waste findings + execution timeline inline
- Cost overview — total spend, waste percentage, cost-over-time chart
- Model insights — which models you're using and what they cost
- Recommendations — prioritized list of what to fix first
### 💻 CLI Reports
```sh
# All sessions sorted by cost
langcost report --sort cost
```

```
Trace                    │ Model    │ Cost   │ Waste  │ Status
─────────────────────────┼──────────┼────────┼────────┼────────
before-compaction        │ opus-4-5 │ $42.60 │ $6.03  │ error
expensive-session        │ opus-4   │ $0.28  │ $0.11  │ ok
simple-session           │ sonnet-4 │ $0.002 │ $0.001 │ ok
```

```sh
# Deep dive into one session
langcost report --trace <trace-id>

# Only sessions with tool failures
langcost report --category tool_failure_waste

# JSON for scripting
langcost report --format json
```

### 💰 22 Models Supported
Built-in pricing for Anthropic, OpenAI, Google, DeepSeek, and Mistral:
| Provider | Models |
|----------|--------|
| Anthropic | Opus 4, Sonnet 4, Haiku 4.5, Haiku 3.5 |
| OpenAI | GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, GPT-4o, GPT-4o-mini, o3, o3-mini, o4-mini |
| Google | Gemini 2.5 Pro, 2.5 Flash, 2.0 Flash, 2.0 Flash Lite |
| DeepSeek | V3 (chat), R1 (reasoner) |
| Mistral | Large, Small, Codestral |
Using a self-hosted or unlisted model? Costs show as $0 but all token counts and waste detection still work. Custom pricing support is coming soon.
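For reference, per-model cost accounting of this kind reduces to per-million-token rates. A minimal sketch with placeholder prices (these are illustrative numbers, not LangCost's built-in table):

```typescript
// Sketch of per-model cost computation. Prices are illustrative
// placeholders in USD per million tokens, NOT LangCost's real table.
interface ModelPrice {
  inputPerMTok: number;
  outputPerMTok: number;
}

const prices: Record<string, ModelPrice> = {
  "example-large": { inputPerMTok: 15, outputPerMTok: 75 },
  "example-small": { inputPerMTok: 0.25, outputPerMTok: 1.25 },
};

function spanCostUsd(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = prices[model];
  if (!p) return 0; // unlisted/self-hosted models show as $0
  return (
    (inputTokens / 1_000_000) * p.inputPerMTok +
    (outputTokens / 1_000_000) * p.outputPerMTok
  );
}
```

Unknown models fall through to $0, matching the behavior described above.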
## How It Works
```
~/.openclaw/sessions/*.jsonl
         │
         ▼
    ┌─────────┐
    │ ingest  │  Read JSONL → normalize to traces, spans, messages
    └────┬────┘
         ▼
    ┌─────────┐
    │ analyze │  Run 6 waste detection rules
    └────┬────┘
         ▼
    ┌─────────┐
    │  store  │  SQLite at ~/.langcost/langcost.db
    └────┬────┘
         │
    ┌────┴────┐
    ▼         ▼
   CLI     Dashboard
  report   localhost:3737
```

- Everything runs locally — no cloud, no API keys, no tracking
- Data stays in a single SQLite file on your machine
- Keeps the 500 most recent sessions to manage disk space
- Plugin architecture — adapters handle data ingestion, analyzers handle intelligence
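The ingest stage above is essentially "read JSONL, normalize each line". A minimal sketch, assuming a hypothetical event shape (real OpenClaw session records are richer and adapter-specific):

```typescript
// Sketch of the JSONL ingest step: parse one session file's contents
// into normalized event records. The RawEvent fields are hypothetical;
// the real session format is defined by each adapter.
interface RawEvent {
  type: "llm_call" | "tool_call";
  model?: string;
  tool?: string;
  ok: boolean;
}

function parseSessionJsonl(contents: string): RawEvent[] {
  return contents
    .split("\n")
    .filter((line) => line.trim().length > 0) // skip blank lines
    .map((line) => JSON.parse(line) as RawEvent);
}
```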
## CLI Reference
```
langcost scan --source <adapter> [options]

  --source <adapter>   Required. "openclaw"
  --path <path>        Override data source path
  --file <path>        Analyze a single session file
  --since <duration>   Default: 30d. Accepts: 7d, 30d, 90d, all
  --force              Re-ingest and re-analyze everything
  --db <path>          Override database path
```

```
langcost report [options]

  --format <fmt>       table (default) | json | markdown
  --sort <field>       cost | waste | date
  --limit <n>          Number of traces (default: 20)
  --trace <id>         Detailed single-trace report
  --category <cat>     Filter by waste category
  --db <path>          Override database path
```

```
langcost dashboard [options]

  --port <port>        Default: 3737
  --db <path>          Override database path
```

```
langcost status

  --db <path>          Override database path
```

## Upcoming
| | Feature | Description |
|:---:|---------|-------------|
| 🧭 | Fault Attribution | Trace failures backwards to find the root cause — not just which step errored, but which upstream agent caused it |
| 🧩 | More Waste Rules | Unused tool schemas, duplicate RAG chunks, unbounded conversation history, uncached system prompts |
| 🔌 | More Adapters | Pluggable data sources beyond OpenClaw — bring your own traces |
| 🏷️ | Custom Model Pricing | Set input/output/cache prices for self-hosted and unlisted models |
## Contributing
LangCost has a plugin architecture. Three easy ways to contribute:
- 🧩 **Add a waste rule** — a standalone function in `packages/analyzers/src/rules/`. Copy an existing rule as a starting point.
- 💲 **Update model pricing** — edit `packages/core/src/pricing/providers.ts`. Add new models or fix outdated prices.
- 🔌 **Build an adapter** — an npm package implementing `IAdapter` from `@langcost/core`. The CLI discovers it automatically.
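To make the adapter idea concrete, here is a sketch. Note that the `IAdapterSketch` shape below is an assumption for illustration only; consult `@langcost/core` for the real `IAdapter` interface:

```typescript
// Hypothetical adapter sketch. This interface shape is an ASSUMPTION
// made for illustration; the real IAdapter is exported from
// @langcost/core and will differ.
interface IAdapterSketch {
  name: string;
  detect(): boolean;        // can this adapter find data locally?
  listSessions(): string[]; // paths to session files to ingest
}

class FileAdapter implements IAdapterSketch {
  name = "my-source";

  constructor(private sessions: string[]) {}

  detect(): boolean {
    return this.sessions.length > 0;
  }

  listSessions(): string[] {
    return [...this.sessions];
  }
}
```

The CLI would call `detect()` to decide whether the source is present, then feed each path from `listSessions()` into the ingest pipeline.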
