# llm-usage-metrics

v0.1.11

CLI for aggregating local LLM usage metrics from pi, codex, and opencode sessions.
CLI to aggregate local LLM usage from:

- `~/.pi/agent/sessions/**/*.jsonl`
- `~/.codex/sessions/**/*.jsonl`
- OpenCode SQLite DB (auto-discovered or provided via `--opencode-db`)
Reports are available for daily, weekly (Monday-start), and monthly periods.
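The Monday-start weekly bucketing can be sketched as follows (a hypothetical helper, UTC-only for brevity; the real reports honor the configured timezone):

```typescript
// Hypothetical sketch of Monday-start week bucketing (UTC only).
// The actual CLI applies the report timezone before bucketing.
function weekKey(date: Date): string {
  const day = date.getUTCDay();          // 0 = Sunday … 6 = Saturday
  const daysSinceMonday = (day + 6) % 7; // Monday -> 0, Sunday -> 6
  const monday = new Date(date);
  monday.setUTCDate(date.getUTCDate() - daysSinceMonday);
  return monday.toISOString().slice(0, 10); // e.g. "2026-01-26"
}
```

Every event in the same Monday-through-Sunday span maps to the same key, which makes the key usable as a weekly grouping bucket.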
Project documentation is available in docs/.
Built-in adapters currently support three sources: pi (`~/.pi`), codex (`~/.codex`), and OpenCode SQLite. The codebase is structured to add more sources (for example, Claude/Gemini exports) through the SourceAdapter pattern. See CONTRIBUTING.md.
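As a rough illustration, a SourceAdapter-style contract might look like this (names and shapes here are hypothetical; see the codebase for the real interface):

```typescript
// Illustrative sketch of a SourceAdapter-style contract; names are
// hypothetical and may differ from the actual interface in this repo.
interface UsageEvent {
  timestamp: string; // ISO-8601
  source: string;    // "pi" | "codex" | "opencode" | ...
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  costUsd?: number;
}

interface SourceAdapter {
  id: string;                                   // stable source id, e.g. "pi"
  discover(): Promise<string[]>;                // locate session files / DBs
  parse(paths: string[]): Promise<UsageEvent[]>; // normalize to UsageEvent
}

// Minimal in-memory adapter showing the shape of the contract.
const demoAdapter: SourceAdapter = {
  id: "demo",
  discover: async () => ["/tmp/demo.jsonl"],
  parse: async () => [],
};
```

A new source then only needs to implement discovery and parsing; aggregation and reporting stay shared.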
## Install

```bash
npm install -g llm-usage-metrics
```

Or run without a global install:

```bash
npx --yes llm-usage-metrics daily
```

(`npx llm-usage daily` works when the project is already installed locally.)
Runtime notes:

- OpenCode parsing requires Node.js 24+ (`node:sqlite`).
- Bun is supported for the dependency/scripts workflow, but OpenCode report runs should use Node-based CLI execution.
- Example local execution against the built `dist`:

  ```bash
  node dist/index.js daily --source opencode --opencode-db /path/to/opencode.db
  ```
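A guard for the Node.js 24+ requirement could look like this sketch (hypothetical; the real CLI may detect `node:sqlite` support differently):

```typescript
// Hypothetical guard: node:sqlite-backed OpenCode parsing needs Node.js 24+.
function supportsNodeSqlite(versionString: string): boolean {
  const major = Number.parseInt(versionString.split(".")[0], 10);
  return major >= 24;
}

// e.g. supportsNodeSqlite(process.versions.node)
```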
## Update checks

When installed globally, the CLI performs a lightweight npm update check on startup.

Behavior:

- uses a local cache (`<platform-cache-root>/llm-usage-metrics/update-check.json`; defaults to `~/.cache/llm-usage-metrics/update-check.json` on Linux when `XDG_CACHE_HOME` is unset) with a 1-hour default TTL
- optional session-scoped cache mode via `LLM_USAGE_UPDATE_CACHE_SCOPE=session`
- skips checks for `--help`/`--version` invocations
- skips checks when run through `npx`
- prompts for install + restart only in interactive TTY sessions
- prints a one-line notice in non-interactive sessions
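The TTL decision behind the cache can be sketched like this (a hypothetical shape; the actual cache file layout may differ):

```typescript
// Hypothetical sketch of the TTL decision for the update-check cache.
interface UpdateCheckCache {
  checkedAt: number;     // epoch milliseconds of the last npm check
  latestVersion: string; // last version seen on the registry
}

function isCacheFresh(cache: UpdateCheckCache, ttlMs: number, now: number): boolean {
  return now - cache.checkedAt < ttlMs;
}
```

When the cache is fresh, no network request is made; otherwise the registry is queried and the cache is rewritten.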
To force-skip startup update checks:

```bash
LLM_USAGE_SKIP_UPDATE_CHECK=1 llm-usage daily
```

## Runtime environment overrides
You can tune runtime behavior with environment variables:

- `LLM_USAGE_SKIP_UPDATE_CHECK`: skip the startup update check when set to `1`
- `LLM_USAGE_UPDATE_CACHE_SCOPE`: cache scope for update checks (`global` default, `session` to scope by terminal shell session)
- `LLM_USAGE_UPDATE_CACHE_SESSION_KEY`: optional custom session key when `LLM_USAGE_UPDATE_CACHE_SCOPE=session` (defaults to the parent shell PID)
- `LLM_USAGE_UPDATE_CACHE_TTL_MS`: update-check cache TTL in milliseconds (clamped: `0..2592000000`; use `0` to check on every CLI run)
- `LLM_USAGE_UPDATE_FETCH_TIMEOUT_MS`: update-check network timeout in milliseconds (clamped: `200..30000`)
- `LLM_USAGE_PRICING_CACHE_TTL_MS`: pricing cache TTL in milliseconds (clamped: `60000..2592000000`)
- `LLM_USAGE_PRICING_FETCH_TIMEOUT_MS`: pricing fetch timeout in milliseconds (clamped: `200..30000`)
- `LLM_USAGE_PARSE_MAX_PARALLEL`: max concurrent file parses per source adapter (clamped: `1..64`)
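The clamping described above could be implemented along these lines (a sketch; the fallback value `8` here is illustrative, not the CLI's documented default):

```typescript
// Hypothetical sketch of reading a clamped numeric env override.
function readClampedEnv(
  raw: string | undefined,
  fallback: number, // used when the variable is unset or not a number
  min: number,
  max: number,
): number {
  const parsed = raw === undefined ? NaN : Number.parseInt(raw, 10);
  if (Number.isNaN(parsed)) return fallback;
  return Math.min(max, Math.max(min, parsed));
}

// e.g. readClampedEnv(process.env.LLM_USAGE_PARSE_MAX_PARALLEL, 8, 1, 64)
```

Out-of-range values are clamped to the nearest bound rather than rejected, so a misconfigured override degrades gracefully.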
Example:

```bash
LLM_USAGE_PARSE_MAX_PARALLEL=16 LLM_USAGE_PRICING_FETCH_TIMEOUT_MS=8000 llm-usage monthly
```

## Usage
### Daily report (default terminal table)

```bash
llm-usage daily
```

### Weekly report with custom timezone

```bash
llm-usage weekly --timezone Europe/Paris
```

### Monthly report with date range

```bash
llm-usage monthly --since 2026-01-01 --until 2026-01-31
```

### Markdown output

```bash
llm-usage daily --markdown
```

### JSON output

```bash
llm-usage daily --json
```

### Offline pricing (use cached LiteLLM pricing only)

```bash
llm-usage monthly --pricing-offline
```

### Override pricing URL

```bash
llm-usage monthly --pricing-url https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
```

Pricing behavior notes:
- LiteLLM is the active pricing source.
- explicit `costUsd: 0` events are re-priced from LiteLLM when model pricing is available.
- when pricing cannot be loaded from LiteLLM (or from the cache in offline mode), report generation fails fast.
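The re-pricing step can be sketched against LiteLLM-style per-token price fields (`input_cost_per_token`/`output_cost_per_token` are field names from LiteLLM's pricing JSON; the surrounding logic is illustrative, not this CLI's actual code):

```typescript
// Illustrative re-pricing of an event from LiteLLM-style per-token prices.
interface ModelPricing {
  input_cost_per_token: number;
  output_cost_per_token: number;
}

function priceEvent(
  inputTokens: number,
  outputTokens: number,
  pricing: ModelPricing | undefined,
): number | undefined {
  if (!pricing) return undefined; // no pricing known: leave cost unset
  return (
    inputTokens * pricing.input_cost_per_token +
    outputTokens * pricing.output_cost_per_token
  );
}
```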
### Custom session directories

```bash
llm-usage daily --pi-dir /path/to/pi/sessions --codex-dir /path/to/codex/sessions
```

Or use generic source-id mapping (repeatable):

```bash
llm-usage daily --source-dir pi=/path/to/pi/sessions --source-dir codex=/path/to/codex/sessions
```

Directory override rules:

- `--source-dir` is directory-only (currently `pi` and `codex`).
- `--source-dir opencode=...` is invalid; the error points to `--opencode-db`.
- `--opencode-db <path>` sets an explicit OpenCode SQLite DB path.
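A parser for the `id=/path` mapping might look like this sketch (hypothetical; the real flag handling lives in the CLI's argument parser):

```typescript
// Hypothetical parser for repeatable --source-dir id=/path values.
function parseSourceDir(value: string): { id: string; dir: string } {
  const eq = value.indexOf("=");
  if (eq <= 0 || eq === value.length - 1) {
    throw new Error(`invalid --source-dir value: ${value} (expected id=/path)`);
  }
  const id = value.slice(0, eq).toLowerCase();
  if (id === "opencode") {
    // mirrors the rule above: opencode is DB-backed, not directory-backed
    throw new Error("--source-dir opencode=... is invalid; use --opencode-db");
  }
  return { id, dir: value.slice(eq + 1) };
}
```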
OpenCode DB override:

```bash
llm-usage daily --opencode-db /path/to/opencode.db
```

OpenCode path precedence:

1. explicit `--opencode-db`
2. deterministic OS-specific default path candidates

Backfill example from a historical DB snapshot:

```bash
llm-usage monthly --source opencode --opencode-db /archives/opencode-2026-01.db --since 2026-01-01 --until 2026-01-31
```

OpenCode safety notes:

- the OpenCode DB is opened in read-only mode
- unreadable or missing explicit paths fail fast with actionable errors
- the OpenCode CLI is optional for troubleshooting and not required for runtime parsing
### Filter by source (`--source`)

Use `--source` to limit reports to one or more source ids.

Supported source ids: `pi`, `codex`, `opencode`

Behavior:

- repeatable or comma-separated (`--source pi --source codex` or `--source pi,codex`)
- case-insensitive source id matching
- unknown ids fail fast with a validation error

Examples:

```bash
# only codex data
llm-usage monthly --source codex

# only pi data
llm-usage monthly --source pi

# only OpenCode data
llm-usage monthly --source opencode

# multiple sources
llm-usage monthly --source pi --source codex
llm-usage monthly --source pi,codex

# OpenCode source with explicit DB path
llm-usage monthly --source opencode --opencode-db /path/to/opencode.db
```

### Filter by provider (`--provider`)
Use `--provider` to keep only events whose provider contains the filter text.

Behavior:

- case-insensitive substring match
- optional flag (when omitted, all providers are included)
- works together with `--source` and `--model`

Examples:

```bash
# all OpenAI-family providers
llm-usage monthly --provider openai

# GitHub Models providers
llm-usage monthly --provider github

# source + provider together
llm-usage monthly --source codex --provider openai
```

### Filter by model (`--model`)
`--model` supports repeatable and comma-separated filters. Matching is case-insensitive.

Per filter value:

- if an exact model id exists in the currently selected event set (after source/provider/date filtering), exact matching is used
- otherwise, substring matching is used
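The exact-then-substring rule can be sketched as follows (a hypothetical helper, not the CLI's internal matcher):

```typescript
// Hypothetical sketch of the exact-then-substring model matching rule.
function matchModels(eventModels: string[], filter: string): string[] {
  const needle = filter.toLowerCase();
  const exact = eventModels.filter((m) => m.toLowerCase() === needle);
  if (exact.length > 0) return exact; // an exact id exists: exact match wins
  return eventModels.filter((m) => m.toLowerCase().includes(needle));
}
```

This keeps a precise filter like `claude-sonnet-4.5` from also matching sibling models, while a loose filter like `claude` still selects the whole family.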
Examples:

```bash
# substring match (all Claude-family models)
llm-usage monthly --model claude

# exact match when present
llm-usage monthly --model claude-sonnet-4.5

# multiple model filters
llm-usage monthly --model claude --model gpt-5
llm-usage monthly --model claude,gpt-5

# source + provider + model together
llm-usage monthly --source opencode --provider openai --model gpt-4.1
```

### Per-model columns (opt-in detailed table layout)
Default output is compact (model names only in the Models column).

Use `--per-model-columns` to render per-model multiline metrics in each numeric column:

```bash
llm-usage monthly --per-model-columns
llm-usage monthly --markdown --per-model-columns
```

## Output features
### Terminal UI

The CLI provides enhanced terminal output with:
- Boxed report header showing the report type and timezone
- Session summary displayed at startup (session files and event counts per source)
- Pricing source info indicating whether data was loaded from cache or fetched remotely
- Environment variable overrides displayed when active
- Models displayed as bullet points for better readability
- Rounded table borders and improved color scheme
Example output:

```text
ℹ Found 12 session file(s) with 45 event(s)
  • pi: 8 file(s), 32 events
  • codex: 4 file(s), 13 events
ℹ Loaded pricing from cache

┌──────────────────────────────────────────────────────────┐
│ Monthly Token Usage Report (Timezone: Africa/Casablanca) │
└──────────────────────────────────────────────────────────┘

╭────────────┬──────────┬──────────────────────╮
│ Period     │ Source   │ Models               │
├────────────┼──────────┼──────────────────────┤
│ Feb 2026   │ pi       │ • gpt-5.2            │
│            │          │ • gpt-5.2-codex      │
╰────────────┴──────────┴──────────────────────╯
```

### Report structure
Each report includes:
- source rows (`pi`, `codex`, `opencode`) for each period
- a per-period combined subtotal row (only when multiple sources exist in that period)
- a final grand total row across all periods
Columns:
- Period
- Source
- Models
- Input
- Output
- Reasoning
- Cache Read
- Cache Write
- Total
- Cost
## Development

```bash
bun install
bun run lint
bun run typecheck
bun run test
bun run format:check
```