codex-token-counter v1.0.4
# Codex Token Counter

Lightweight Node.js service that aggregates token usage emitted by any Codex CLI session on the same host. It consumes the JSONL log files that Codex tools append to (by default `~/.codex-token-tracker/token_usage.jsonl`) and exposes a small HTTP API you can deploy anywhere.
## Features

- Scans the standard Codex token log locations (legacy `token_usage.jsonl` plus Codex CLI rollouts under `~/.codex/sessions`) across every `/home/*` user directory, and any extra paths you provide.
- Computes totals, per-source, and per-model statistics (including cached prompt tokens, reasoning tokens, and USD cost estimates) on the fly.
- Polished, auto-refreshing dashboard with quick date presets (24h/7d/30d) plus custom ranges, rendering recent activity and aggregates in an enterprise glassmorphism layout.
- REST endpoints for `/api/metrics` and `/api/health` with permissive CORS so dashboards or automations can consume the data.
- Works as a CLI (`codex-token-counter`) that you can install globally or include in another project.
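The per-source/per-model rollup described above can be sketched in a few lines of Node. The event field names used here (`model`, `prompt_tokens`, `completion_tokens`) are assumptions for illustration; the real log schema may differ:

```javascript
// Sketch of per-model aggregation over token-usage events.
// Field names are hypothetical; the actual token_usage.jsonl schema may differ.
function aggregateByModel(events) {
  const byModel = {};
  for (const e of events) {
    const key = e.model || "unknown";
    const m = (byModel[key] ??= { prompt: 0, completion: 0, requests: 0 });
    m.prompt += e.prompt_tokens || 0;
    m.completion += e.completion_tokens || 0;
    m.requests += 1;
  }
  return byModel;
}

const sample = [
  { model: "gpt-5-codex", prompt_tokens: 1200, completion_tokens: 300 },
  { model: "gpt-5-codex", prompt_tokens: 800, completion_tokens: 200 },
  { model: "gpt-4o-mini", prompt_tokens: 500, completion_tokens: 100 },
];
console.log(aggregateByModel(sample));
```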
## Installation

```sh
npm install -g codex-token-counter
codex-token-counter --open
```

Prefer a one-off run? Skip the global install:

```sh
npx codex-token-counter@latest --open
```

To remove a previously installed global binary, run:

```sh
npm uninstall -g codex-token-counter
```

Yarn, pnpm, and Corepack users can substitute their preferred package manager’s global uninstall command (`yarn global remove`, `pnpm remove -g`, etc.).

Publishing internally? Replace `codex-token-counter` with your private registry scope or install from a local path as before. The CLI opens your browser automatically when run from an interactive shell; pass `--no-open` (or set `CODEX_TOKEN_COUNTER_NO_AUTO_OPEN=1`) if you prefer to skip that behaviour.
## Docker

Build the image locally:

```sh
npm run docker:build
```

Then run it:

```sh
docker run --rm -p 9101:9101 \
  -v ~/.codex-token-tracker:/logs \
  codex-token-counter \
  --scan-dir /logs
```

Mount whichever host directories hold your `token_usage.jsonl` files (including Codex rollout sessions) so the container can read them. You can also set environment variables (`-e CODEX_TOKEN_COUNTER_SCAN_DIRS=/logs/.codex/sessions`) exactly as you would when running the CLI directly.
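If you prefer Compose, an equivalent sketch might look like this (the service name and mount paths are illustrative; adjust them to wherever your logs live):

```yaml
services:
  codex-token-counter:
    image: codex-token-counter
    command: ["--scan-dir", "/logs"]
    ports:
      - "9101:9101"
    volumes:
      - ~/.codex-token-tracker:/logs:ro
    environment:
      - CODEX_TOKEN_COUNTER_SCAN_DIRS=/logs
```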
## CLI Usage

```sh
codex-token-counter [--port <port>] [--host <host>]
                    [--file <file>]... [--scan-dir <dir>]...
                    [--recent-limit <count>]
                    [--open | --no-open]
```

- `--port` / `-p`: Port for the HTTP server (defaults to `9101`, honours `$PORT`).
- `--host`: Bind address (defaults to `0.0.0.0`; use `127.0.0.1` to keep it local).
- `--file` / `-f`: Explicit `token_usage.jsonl` file(s) to aggregate (repeatable).
- `--scan-dir`: Directory to probe for `token_usage.jsonl` (repeatable, shallow scan).
- `--recent-limit`: Override the number of recent events returned by `/api/metrics` (default `100`).
- `--open`: Launch the dashboard in your default browser after the server starts (enabled by default when running in an interactive terminal).
- `--no-open`: Suppress the automatic browser launch (or set `CODEX_TOKEN_COUNTER_NO_AUTO_OPEN=1`).
- Environment variables `CODEX_TOKEN_TRACKER_FILE`, `CODEX_TOKEN_TRACKER_DIR`, `CODEX_TOKEN_COUNTER_EXTRA_FILES`, and `CODEX_TOKEN_COUNTER_SCAN_DIRS` (colon-separated) are honoured automatically.
- Set `CODEX_TOKEN_COUNTER_DISABLE_AUTO_DISCOVERY=1` to restrict the service to explicit paths (skips automatic scanning of `.codex/sessions` and legacy tracker folders).
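The colon-separated list variables behave like `PATH`; splitting one can be sketched as follows (the function name is illustrative, not the service's actual code):

```javascript
// Split a PATH-style, colon-separated variable into directories,
// dropping empty segments. Illustrative sketch, not the service's code.
function splitScanDirs(value) {
  return (value || "").split(":").filter(Boolean);
}

console.log(splitScanDirs("/logs/.codex/sessions:/var/log/codex"));
// e.g. ["/logs/.codex/sessions", "/var/log/codex"]
```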
Example:

```sh
codex-token-counter --host 127.0.0.1 --port 9101 \
  --file ~/.codex-token-tracker/token_usage.jsonl \
  --scan-dir /var/log/codex
```

## Dashboard & API
Open http://localhost:9101/ (or whichever host/port you specify) to view the live dashboard. The page auto-refreshes every few seconds, presenting high-level totals, cached vs non-cached prompt consumption, reasoning tokens, a “Top model” snapshot, and a quick “Adjust colors” palette button for tweaking the theme.
Jump to http://localhost:9101/detailed for the full leaderboards and recent activity feed.
On startup the CLI prints the exact URL and the log files being tracked. If you see a warning about “no token usage logs detected”, either run a Codex CLI session to generate rollout logs or point the counter at a file with --file / --scan-dir (or the matching environment variables).
Need to post numbers to Reddit or Slack? Hit the Copy summary (Markdown) button in the header—this generates a ready-to-share snippet with the latest totals and top models.
Heads-up for Codex CLI ≤ 0.46.0: the legacy `token_usage.jsonl` file doesn’t record model names, so costs fall back to the conservative default. Pass `--scan-dir ~/.codex/sessions` (or set `CODEX_TOKEN_COUNTER_SCAN_DIRS`) so the service ingests rollout JSONL logs; the `session_configured` events inside those files include the model id, letting the counter price each model accurately.
- `GET /api/health` → `{ "status": "ok", "tracked_files": [...] }`
- `GET /api/metrics?limit=50` → `{ "summary": {...}, "recent": [...], "diagnostics": {...} }`
- `GET /api/metrics?limit=100&since=2025-01-01T00:00:00Z&until=2025-01-07T00:00:00Z` filters the aggregation window server-side.
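Building a filtered metrics URL for a client or automation can be sketched with the standard `URL` API (the helper name is illustrative):

```javascript
// Build a /api/metrics URL with an optional server-side time window.
function metricsUrl(base, { limit, since, until } = {}) {
  const url = new URL("/api/metrics", base);
  if (limit) url.searchParams.set("limit", String(limit));
  if (since) url.searchParams.set("since", since);
  if (until) url.searchParams.set("until", until);
  return url.toString();
}

console.log(metricsUrl("http://localhost:9101", {
  limit: 100,
  since: "2025-01-01T00:00:00Z",
  until: "2025-01-07T00:00:00Z",
}));
```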
## Metrics Overview

The summary payload now includes:

- `totals.*` counts (prompt, completion, reasoning, cached prompt, total, total_including_cached, requests)
- `averages.*` per-request breakdowns (prompt/completion/reasoning/total)
- `window.*` metadata (ISO since/until, duration in seconds and human-readable form, rolling requests-per-hour)
- `totals.prompt_cost_usd`, `totals.completion_cost_usd`, and `totals.total_cost_usd` derived from the October 2025 OpenAI pricing table (see `diagnostics.pricing_reference`)
Both `by_source` and `by_model` mirror these fields for easy leaderboard rendering. The `recent` array contains most-recent-first events with the same per-event fields plus a pointer to the originating file and the resolved timestamp in milliseconds.

Any models that fall back to the generic pricing table are listed under `diagnostics.pricing_reference.fallback_models` so you can spot unrecognised names quickly. For static environments you can override pricing entirely via `CODEX_TOKEN_COUNTER_PRICING_JSON` (see “Advanced pricing overrides” below).
## Advanced pricing overrides

Set the `CODEX_TOKEN_COUNTER_PRICING_JSON` environment variable to a JSON object of `model -> { prompt, completion }` (USD per million tokens). Example:

```sh
export CODEX_TOKEN_COUNTER_PRICING_JSON='{
  "deepseek-coder-6.7b": { "prompt": 2.5, "completion": 7.5 },
  "llama3.1:8b-instruct": { "prompt": 1.2, "completion": 3.6 }
}'
```

Names are matched case-insensitively and by substring (e.g. “deepseek-coder-6.7b” in the override applies to `DeepSeek-Coder-6.7B-Instruct-Q4_K_M` events). Overrides take precedence over the built-in table; anything else still uses the fallback.
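The matching rule described above (case-insensitive, by substring) can be sketched like this; the function is illustrative, not the service's internal implementation:

```javascript
// Resolve an override entry for a model name, matching
// case-insensitively and by substring (illustrative sketch).
function resolvePricing(overrides, modelName) {
  const name = modelName.toLowerCase();
  for (const [key, rates] of Object.entries(overrides)) {
    if (name.includes(key.toLowerCase())) return rates;
  }
  return null; // caller falls back to the built-in table
}

const overrides = { "deepseek-coder-6.7b": { prompt: 2.5, completion: 7.5 } };
console.log(resolvePricing(overrides, "DeepSeek-Coder-6.7B-Instruct-Q4_K_M"));
// → { prompt: 2.5, completion: 7.5 }
```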
If your legacy logs lack model names entirely, either:

- Provide specific hints via `CODEX_TOKEN_COUNTER_MODEL_HINTS_JSON` (same format as above, but mapping source/file substrings to canonical model names), or
- Set `CODEX_TOKEN_COUNTER_DEFAULT_MODEL="gpt-5-codex"` (or whichever model you want) to force unlabeled events to bill against that model.
## Cost Calculation

Pricing defaults follow OpenAI's pricing as of October 2025. Rates are embedded for common Codex models (e.g. `gpt-5-codex`, `gpt-5-high`, `gpt-4o`, `gpt-4o-mini`), and a conservative fallback ($2 prompt / $4 completion per million tokens) is used when the model name is unrecognised. Diagnostics returned by `/api/metrics` include the applied pricing map so you can override or verify it.
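The per-million arithmetic, using the fallback rates above, works out like this:

```javascript
// USD cost at per-million-token rates; defaults to the
// conservative fallback of $2 prompt / $4 completion.
function costUsd(promptTokens, completionTokens,
                 rates = { prompt: 2, completion: 4 }) {
  return (promptTokens / 1e6) * rates.prompt +
         (completionTokens / 1e6) * rates.completion;
}

// 1.5M prompt + 250k completion tokens at fallback rates → $4.00
console.log(costUsd(1_500_000, 250_000)); // 4
```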
## Running as a Service

Keep the counter running in the background with systemd, pm2, Docker, or your platform’s native service manager. A templated user service lives at `deploy/systemd/codex-token-counter.service`; copy it to `~/.config/systemd/user/`, tweak any `Environment=` lines you need (extra log paths, custom port), and run:

```sh
systemctl --user daemon-reload
systemctl --user enable --now codex-token-counter.service
```

The sample unit calls the globally installed `codex-token-counter` binary directly, so you can drop it onto any host once the npm package is installed.
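For orientation, a user unit of this shape might look like the sketch below; the binary path and environment lines are illustrative, and the shipped template may differ:

```ini
[Unit]
Description=Codex token usage counter

[Service]
ExecStart=/usr/local/bin/codex-token-counter --host 127.0.0.1 --port 9101
Environment=CODEX_TOKEN_COUNTER_SCAN_DIRS=%h/.codex/sessions
Restart=on-failure

[Install]
WantedBy=default.target
```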
On macOS or other Unix-like hosts without systemd, a process manager such as pm2, forever, or launchd (via a plist) can wrap the same CLI invocation. On Windows, pair the global install with the built-in `sc.exe create` command or a lightweight helper such as NSSM to register `codex-token-counter --host 0.0.0.0 --port 9101` as a service. In containerized environments, use the Docker example above or add the CLI to an existing sidecar image; no extra dependencies are required beyond Node 18+.
## Development

Install dependencies (none beyond Node), then run the smoke test against the bundled sample data:

```sh
node tests/smoke.js
```

Or via npm:

```sh
npm test
```

This spins up the HTTP server on a random port, calls both endpoints, validates the HTML dashboard, and verifies the totals.
## Updating & Publishing

When you make changes and want to ship an update:

- Ensure the tree is clean (`git status`) and all tests pass (`npm test`).
- Bump the version (e.g. `npm version patch`) so consumers know a new release is available.
- Review npm's package metadata (`npm pkg fix` will address any formatting warnings).
- Publish from the package root: `npm publish`.
- Push commits and tags to GitHub (`git push origin main --tags`) so others can install via `npm install -g <github-url>#<tag>` if they prefer git sources.
## Repository Layout

- `bin/` – CLI entry point used by npm/yarn.
- `src/` – Fast HTTP server and metrics collector.
- `tests/` – Lightweight smoke test + sample JSONL fixtures.
- `deploy/systemd/` – Example unit file for long-running deployments.
- `README.md`, `LICENSE`, `.gitignore` – ready for publishing to GitHub, Reddit, etc.
