# AI Cost Tracker

`@reals3bi/ai-cost` v1.1.1

AI cost tracker with hosted backend, CLI, and simple web dashboard.
Hosted backend + CLI + simple web UI to track:
- OpenAI Codex limits from local session logs, with a server-side fallback cache
- OpenAI API spend from the official organization costs endpoint, with optional monthly budget math
- OpenRouter balance (official credits endpoint)
- Cursor billing usage from the Cursor dashboard API (session cookie required)
## Quick start

- Install dependencies:

  ```bash
  pnpm install
  ```

- Create secrets and config:

  ```bash
  pnpm hash-password "your-password"
  cp .env.example .env
  ```

- Fill `.env` values and start the server:

  ```bash
  pnpm dev
  ```

- Open the web UI: http://localhost:3000/login

- Initialize the CLI on the machine where you want to use it:

  ```bash
  pnpm build
  node dist/src/cli.js init --url http://localhost:3000
  ```

`ai-cost init` writes the machine-local CLI config to `~/.ai-cost/config.env` by default. That is where local Codex/Cursor settings live for the current PC.
## Server hosting

Two production paths are prepared in this repo:

- `compose.yaml` for Docker Compose
- `deploy/ai-cost.service` for a plain Node.js + systemd deployment
### Option 1: Docker Compose

- Create the production env file and generate secrets:

  ```bash
  cp .env.prod.example .env.prod
  pnpm hash-password "your-password"
  ```

- Fill `.env.prod` with the real values.

- Start the service. The Docker build validates the envs passed to Compose, so missing secrets fail during image build instead of only at container startup:

  ```bash
  docker compose --env-file .env.prod up -d --build
  ```

- Verify health:

  ```bash
  curl http://127.0.0.1:3000/api/health
  ```

The Compose stack persists fallback cache data in the named volume `ai-cost-data`, mounted at `/data` inside the container. The Compose file also sets `APP_DATA_DIR=/data` for the service.
The image pre-creates /data for the unprivileged node user so fresh deployments can write the snapshot cache without permission errors. If an older deployment already created the volume with root ownership, reset that volume's ownership once or recreate it after upgrading.
Exact envs for `.env.prod` / Coolify:

```env
NODE_ENV=production
HOST=0.0.0.0
PORT=3000
APP_BIND_PORT=3000
APP_PASSWORD_HASH=...
APP_SESSION_SECRET=...
APP_TOKEN_SECRET=...
APP_SECURE_COOKIE=true
CLI_TOKEN_TTL_SECONDS=2592000
PROVIDER_TIMEOUT_MS=10000
OPENAI_API_KEY=...
OPENAI_ORG_ID=...
OPENAI_MONTHLY_BUDGET_USD=100    # optional
OPENROUTER_API_KEY=...
CURSOR_DASHBOARD_COOKIE=...
CURSOR_TEAM_ID=-1
CODEX_HOME=...                   # only if the backend container itself should read Codex logs
```
Optional for a backend-side Codex bind mount:

```env
CODEX_HOST_PATH=/absolute/path/to/.codex
```

Coolify note: use the same keys from `.env.prod` in the Coolify environment UI. The prepared Compose stack no longer depends on a checked-in `.env.prod` file inside the cloned repo; Coolify can inject the values directly during build and runtime. Keep the service data path fixed at `/data`; the persistent storage is provided by the `ai-cost-data` volume mapping in `compose.yaml`. The Dockerfile intentionally uses a Debian-based Node image instead of Alpine so native modules such as the session crypto dependency can load reliably in production.
If you want Codex usage from inside the backend container, add a bind mount from CODEX_HOST_PATH to a container path like /codex and set CODEX_HOME=/codex. In the common hosted setup you usually leave CODEX_HOME empty and let each CLI read Codex locally on its own machine.
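Put together, the env pairing for such a mount could look like the sketch below. The host path is a placeholder, and `/codex` is just the example container path from above:

```env
# .env.prod additions for a backend-side Codex mount (illustrative)
CODEX_HOST_PATH=/home/me/.codex   # host directory with the Codex session logs
CODEX_HOME=/codex                 # path where the container sees the mount
```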
### Option 2: Node.js + systemd

- Install Node.js 22+ and `pnpm` on the server.

- Build the app:

  ```bash
  pnpm install --frozen-lockfile
  pnpm build
  ```

  `pnpm build` now runs the same required-env validation as server startup, using `.env` in the repo root by default via dotenv.

- Copy `deploy/ai-cost.service` to `/etc/systemd/system/ai-cost.service` and adjust paths/user if needed.

- Enable and start the service:

  ```bash
  sudo systemctl daemon-reload
  sudo systemctl enable --now ai-cost
  sudo systemctl status ai-cost
  ```

## Reverse proxy
Run the app behind HTTPS via Nginx, Caddy, or Traefik and proxy requests to 127.0.0.1:3000.
Recommended production setup:

- Keep `APP_SECURE_COOKIE=true`
- Leave `HOST=0.0.0.0`
- Expose only the reverse proxy publicly
- Use `/api/health` for uptime checks
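As one concrete illustration, a minimal Nginx server block for this proxy setup might look like the following. The domain, output filename, and TLS handling are placeholders; this file is not shipped with the repo:

```shell
# Write a sketch of an Nginx site config that proxies HTTPS traffic
# to the app on 127.0.0.1:3000. Adapt names and paths to your host.
cat > ai-cost.nginx.conf <<'EOF'
server {
    listen 443 ssl;
    server_name ai-cost.example.com;
    # ssl_certificate / ssl_certificate_key: supplied by certbot or your CA

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
grep -q proxy_pass ai-cost.nginx.conf && echo "nginx sketch written"
```

Copy the result to your Nginx config directory and reload; any equivalent Caddy or Traefik route to `127.0.0.1:3000` works the same way.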
## CLI usage

Build once, then run:

```bash
pnpm build
node dist/src/cli.js init --url http://localhost:3000
node dist/src/cli.js
node dist/src/cli.js --json
node dist/src/cli.js --version
node dist/src/cli.js env download
node dist/src/cli.js env upload
node dist/src/cli.js update
node dist/src/cli.js cursor
node dist/src/cli.js codex
node dist/src/cli.js openai
node dist/src/cli.js openrouter
node dist/src/cli.js cursor-cookie --value "WorkosCursorSessionToken=..."
Get-Clipboard | node dist/src/cli.js cursor-cookie
node dist/src/cli.js login --url http://localhost:3000
node dist/src/cli.js logout
```

After a global install (`pnpm link --global`), use `ai-cost` directly.
ai-cost init stores the backend URL plus a machine-local env file path and writes provider values such as CODEX_HOME, CURSOR_DASHBOARD_COOKIE, OPENAI_API_KEY, and OPENROUTER_API_KEY into ~/.ai-cost/config.env by default.
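Put together, the machine-local file might look like the sketch below. Only the keys come from this README; every value is a placeholder, and the backend URL is stored as well under a key not documented here:

```env
# ~/.ai-cost/config.env (illustrative; values are placeholders)
CODEX_HOME=/home/me/.codex
CURSOR_DASHBOARD_COOKIE=WorkosCursorSessionToken=...
OPENAI_API_KEY=sk-...
OPENROUTER_API_KEY=sk-or-...
```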
When you run ai-cost, local provider values override the hosted backend for that machine. This matters especially for Codex: the CLI reads CODEX_HOME on the current PC, so Codex limits still work correctly after installing the CLI on another computer.
ai-cost cursor reads the local CURSOR_DASHBOARD_COOKIE config directly and prints detailed Cursor billing data, including the current usage mix and top models.
ai-cost codex reads the local Codex session files and fallback cache and prints the detailed rate-limit windows plus technical source metadata.
ai-cost openai reads the local OpenAI API settings and prints detailed current-month billing data for the openai-api provider.
ai-cost openrouter reads the local OpenRouter API key and prints detailed credits data, including per-key limit metadata when the endpoint returns it.
ai-cost env download pulls the syncable provider env values from the backend into the local CLI env file, and ai-cost env upload pushes the local syncable values back to the backend.
ai-cost update installs the latest published npm version globally, and ai-cost --version prints the installed CLI version.
The web dashboard mirrors these provider-specific details in dedicated detail cards below the overview table.
cursor-cookie extracts WorkosCursorSessionToken from a pasted cookie header, a copied curl command, or a raw token value and writes CURSOR_DASHBOARD_COOKIE into the local CLI env file. Use --stdout if you only want the extracted token printed.
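The extraction itself amounts to pulling one cookie value out of a `Cookie` header. A plain-shell approximation (a sketch, not the CLI's actual implementation; the header below is fake) looks like:

```shell
# Approximate what `ai-cost cursor-cookie` does with a pasted Cookie header:
# split on ';' and keep only the WorkosCursorSessionToken value.
cookie_header='intercom-id=abc123; WorkosCursorSessionToken=eyJfake_token; theme=dark'
token=$(printf '%s' "$cookie_header" | tr ';' '\n' | sed -n 's/^[[:space:]]*WorkosCursorSessionToken=//p')
echo "$token"   # prints the bare token value
```

The real command additionally handles copied `curl` commands and raw token values, and writes the result into the local CLI env file.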
## Install from npm

After publish, install globally:

```bash
npm install -g @reals3bi/ai-cost
```

Update later:

```bash
ai-cost update
```

Then use:

```bash
ai-cost init --url https://your-backend.example.com
ai-cost
ai-cost --version
ai-cost env download
ai-cost codex
ai-cost openai
ai-cost openrouter
```

## Publish to npm
- Login once:

  ```bash
  npm login
  ```

- Deploy the current CLI build:

  ```bash
  pnpm deploy
  ```

`pnpm deploy` runs:

```bash
pnpm test
pnpm build
npm whoami
npm pack --dry-run
npm publish --access public
```
If the version already exists on npm, bump it first with `npm version patch|minor|major`.
## Environment variables

See `.env.example`. Required core values:

- `APP_PASSWORD_HASH`
- `APP_SESSION_SECRET`
- `APP_TOKEN_SECRET`

Provider values:

- `OPENAI_API_KEY`
- `OPENAI_MONTHLY_BUDGET_USD` (optional, only needed to derive `remaining = budget - current month costs`)
- `OPENROUTER_API_KEY`
- `CODEX_HOME` (optional on the backend; the CLI reads `~/.codex` locally by default)
- `APP_DATA_DIR` (optional, defaults to `~/.ai-cost` on the current machine for Codex fallback cache files)
- `CURSOR_DASHBOARD_COOKIE` (full `Cookie` header value or just `WorkosCursorSessionToken`)
- `CURSOR_TEAM_ID` (defaults to `-1` for personal usage)
For production hosting, use .env.prod.example as the basis for .env.prod.
CODEX_HOME only works on the machine that can actually read the Codex session files for the account whose limits you want to display.
For OpenAI API, the documented endpoint used here reports organization costs for the current month. If OPENAI_MONTHLY_BUDGET_USD is set, the tracker also derives a remaining value and monthly reset; if it is empty, the tracker still shows spend but leaves limit/remaining/reset blank.
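The derivation is plain subtraction; a hedged sketch with made-up numbers (the real inputs come from `OPENAI_MONTHLY_BUDGET_USD` and the costs endpoint):

```shell
# Illustrative budget math; 100 and 37.25 are example values.
budget=100      # OPENAI_MONTHLY_BUDGET_USD
spend=37.25     # current-month org cost reported by the costs endpoint
remaining=$(awk -v b="$budget" -v s="$spend" 'BEGIN { printf "%.2f", b - s }')
echo "remaining: $remaining"   # → remaining: 62.75
```

With `OPENAI_MONTHLY_BUDGET_USD` unset, there is no `budget` term, so only `spend` can be shown.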
If no fresh local Codex session is found, the app falls back to the last snapshot stored in APP_DATA_DIR/codex-cache.json. Fresh local Codex snapshots replace that cache only when the value actually changes.
CURSOR_DASHBOARD_COOKIE is taken from an authenticated browser session on https://cursor.com/dashboard?tab=billing. The local Cursor app's cached accessToken and refreshToken are not enough for the dashboard endpoints by themselves; the tracker uses the web session cookie instead.
## Security notes

- Provider API keys stay on the backend only.
- `CURSOR_DASHBOARD_COOKIE` is a sensitive web session value and should be treated like a secret.
- Web auth uses a secure session cookie.
- CLI auth uses a signed backend token.
- CLI token storage prefers the OS keychain via `keytar`; an encrypted local fallback is used if the keychain is unavailable.
