
llmeter

v0.2.2

Local live token-usage monitor for Claude Code and Codex.

llmeter reads the JSONL session logs that Claude Code and Codex already write, stores usage in SQLite, and serves a local dashboard at http://127.0.0.1:4001.
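The core idea, reading token usage straight out of JSONL session logs, can be sketched in a few lines. This is an illustration only: the `usage`, `input_tokens`, and `output_tokens` field names are assumptions modeled loosely on Claude Code's log shape, and llmeter's actual parser is more involved.

```python
import json

def sum_tokens(jsonl_text):
    """Sum input + output tokens across JSONL session lines.

    Field names here (`usage.input_tokens`, `usage.output_tokens`) are
    illustrative assumptions; real log schemas vary by tool.
    """
    total = 0
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partially written lines
        if not isinstance(record, dict):
            continue
        usage = record.get("usage") or {}
        total += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return total

log = ('{"usage": {"input_tokens": 100, "output_tokens": 25}}\n'
       '{"usage": {"input_tokens": 40, "output_tokens": 10}}')
print(sum_tokens(log))  # 175
```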

It does not require API key changes, shell aliases, or wrapping your editor.

(Screenshot: llmeter dashboard)

(Screenshot: llmeter session detail)

What Works Today

llmeter currently supports:

  • Claude Code: ~/.claude/projects/**/*.jsonl
  • Codex: ~/.codex/sessions/**/*.jsonl
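These two locations are plain recursive glob patterns, so the watch set can be resolved with the standard library. A sketch of that resolution, not llmeter's actual scanner:

```python
import glob
import os

# Default patterns llmeter watches (overridable; see Configuration below).
CLAUDE_GLOB = os.path.expanduser("~/.claude/projects/**/*.jsonl")
CODEX_GLOB = os.path.expanduser("~/.codex/sessions/**/*.jsonl")

def find_session_files(patterns):
    """Expand recursive glob patterns into a sorted, de-duplicated file list."""
    files = []
    for pattern in patterns:
        files.extend(glob.glob(pattern, recursive=True))
    return sorted(set(files))
```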

It can be expanded to other LLM tools later. The intended expansion path is a thin monitoring layer that can sit above provider-normalization systems like LiteLLM, while keeping the dashboard and storage model local.

Install

Requirements: macOS, Python 3.11 or newer, and either Claude Code or Codex.

npx llmeter install

Then open:

http://127.0.0.1:4001

That is it. The installer:

  • copies llmeter into ~/.llmeter/app
  • creates ~/.llmeter/app/.venv
  • installs pinned Python dependencies from requirements.txt (and requirements-menubar.txt on macOS)
  • writes a launchd service for the dashboard and a LaunchAgent for the menu bar app
  • starts both now and on future logins
  • opens the dashboard

Logs are written to:

~/.llmeter/logs/llmeter.log

The SQLite database is written to:

~/.llmeter/app/data/llmeter.db

To install the bleeding-edge main branch instead of the latest npm release:

npx github:jawnty/llmeter install

Using llmeter

The dashboard shows:

  • today's total tokens, fresh input, cache-read tokens, turns, and reference API cost
  • Claude Code vs. Codex token split
  • hourly token bars in your local timezone
  • session list with project, opening prompt, models, turns, total tokens, fresh input, and cache-read tokens
  • per-turn details with input, output, cache-read, cache-create, and total tokens
  • live updates through server-sent events

The cost number is a reference estimate only. It uses approximate published API prices so you can spot expensive sessions. It is not your real bill, especially if you use subscription products.
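The shape of such a reference estimate is simple: fresh input, cache reads, and output are priced separately, with cache reads much cheaper than fresh input on most APIs. The prices below are hypothetical placeholders for illustration, not llmeter's actual price table.

```python
def reference_cost(model_prices, fresh_input, cache_read, output, model):
    """Rough reference cost in USD from per-million-token prices.

    `model_prices` values here are illustrative, not real API pricing.
    """
    p = model_prices[model]
    return (fresh_input * p["input"]
            + cache_read * p["cache_read"]
            + output * p["output"]) / 1_000_000

# Hypothetical per-million-token prices, for illustration only.
PRICES = {"example-model": {"input": 3.0, "cache_read": 0.3, "output": 15.0}}

cost = reference_cost(PRICES, fresh_input=200_000, cache_read=1_000_000,
                      output=50_000, model="example-model")
print(round(cost, 2))  # 1.65
```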

Menu Bar App (macOS)

npx llmeter install now installs both the dashboard and the menu bar app. There is nothing extra to do: after installing, you should see an icon in the macOS menu bar and the dashboard at http://127.0.0.1:4001.

The menu bar app shows:

  • compact token count in the menu bar (e.g. ⚡ 1.2M)
  • today's total tokens, fresh input, cache-read tokens, Claude vs Codex split, reference cost
  • last session summary (project · turns · tokens)
  • "Open dashboard" → http://127.0.0.1:4001
  • "Refresh now", "Quit"

It is a read-only client of the same SQLite database the dashboard writes to. Ingest stays in the dashboard's launchd service. The menu bar app polls the database every 5 seconds (override with LLMETER_MENUBAR_REFRESH_SEC).
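The compact count shown in the menu bar (e.g. ⚡ 1.2M) is a standard abbreviated-number format. A sketch of how such formatting might work; the real logic in llmeter.menubar may differ:

```python
def compact_tokens(n):
    """Abbreviate a token count for a narrow display, e.g. 1_234_567 -> '1.2M'."""
    for threshold, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")):
        if n >= threshold:
            return f"{n / threshold:.1f}{suffix}"
    return str(n)

print(compact_tokens(1_234_567))  # 1.2M
```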

The installer:

  • creates a separate venv at ~/.llmeter/menubar-venv
  • installs requirements-menubar.txt (rumps)
  • writes a launchd LaunchAgent at ~/Library/LaunchAgents/com.llmeter.menubar.plist that runs python -m llmeter.menubar from the installed app tree
  • removes any older /Applications/Llmeter.app test bundle so stale py2app launch errors cannot survive an upgrade
  • launches it now via launchctl kickstart

Install flags:

  • --no-menubar — install only the dashboard.
  • --menubar-only — install only the menu bar app. The dashboard service currently owns ingest, so without it the menu bar will show stale data. Tradeoff documented in SPEC.md.

npx llmeter uninstall removes the menu bar LaunchAgent plist, menubar venv, any older test bundle, dashboard service, and ~/.llmeter together.

npx llmeter status reports both the dashboard service and the menu bar app.

Development (manual)

If you are hacking on the menu bar code without going through the npm installer:

cd /path/to/llmeter
python3 -m venv .venv-menubar
. .venv-menubar/bin/activate
pip install -r requirements.txt -r requirements-menubar.txt

python -m llmeter.menubar

See SPEC.md for the full design rationale.

Stop Or Restart

Stop:

npx llmeter stop

Start:

npx llmeter start

Restart:

npx llmeter stop
npx llmeter start

Check status:

npx llmeter status

Remove the installed app:

npx llmeter uninstall

Configuration

Most users do not need any configuration. llmeter infers paths from the checkout location and from the standard Claude Code and Codex log directories.

Advanced overrides:

| Variable | Default | Purpose |
| --- | --- | --- |
| LLMETER_HOST | 127.0.0.1 | bind address |
| LLMETER_PORT | 4001 | dashboard port |
| LLMETER_DB_PATH | data/llmeter.db | SQLite database path |
| LLMETER_DATA_DIR | data | database directory when LLMETER_DB_PATH is unset |
| LLMETER_LOG_DIR | ~/.llmeter/logs | launchd log directory used by the installer |
| LLMETER_CLAUDE_GLOB | ~/.claude/projects/**/*.jsonl | Claude Code log glob |
| LLMETER_CODEX_GLOB | ~/.codex/sessions/**/*.jsonl | Codex log glob |

Example:

LLMETER_PORT=4010 bash scripts/install.sh
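The override mechanism is ordinary environment-variable resolution with a documented fallback. A minimal sketch of that pattern, not llmeter's actual configuration code:

```python
import os

def setting(name, default):
    """Resolve a setting from the environment, falling back to the
    documented default (see the table above)."""
    return os.environ.get(name, default)

host = setting("LLMETER_HOST", "127.0.0.1")
port = int(setting("LLMETER_PORT", "4001"))
```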

LiteLLM And Security

llmeter's v1 ingestion path for Claude Code and Codex reads local log files. It does not proxy those tools through LiteLLM.

The broader design treats llmeter as the dashboard/storage layer above local LLM tooling. For tools that need a proxy or provider-normalization layer, the expected expansion path is a LiteLLM-backed ingestion source. That LiteLLM path should use exact pinned versions, not floating installs.

LiteLLM has had security-sensitive issues, so be conservative: pin versions, watch upstream advisories, do not expose proxies publicly, and run it at your own risk. Pinning reduces supply-chain drift, but it does not make any proxy automatically safe.

Development

Set up dependencies:

git clone https://github.com/jawnty/llmeter.git
cd llmeter
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt

Run locally:

python -m llmeter

Run tests:

pytest

Test the npm wrapper from the checkout:

node bin/llmeter.js --help
npm pack --dry-run

Data Model

llmeter stores:

  • sessions: source, project, working directory, opening prompt, models
  • turns: timestamp, token counts including fresh input and cache tokens, local day/hour bucket, reference cost
  • file_offsets: last ingested byte offset for each JSONL file

The database is local SQLite. No usage data is sent anywhere by llmeter.
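The file_offsets idea, remembering the last ingested byte so each poll only reads appended data, can be sketched as below. This is an illustration of the mechanism; llmeter's actual ingest differs in detail.

```python
def read_new_lines(path, offset):
    """Return (complete_lines, new_offset) for data appended since `offset`.

    Only consumes up to the last newline, so a partially written final
    line is left in place and retried on the next poll.
    """
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    end = data.rfind(b"\n")
    if end == -1:
        return [], offset  # no complete line yet
    complete = data[: end + 1]
    return complete.decode("utf-8").splitlines(), offset + len(complete)
```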

Roadmap

  • Gemini CLI ingestion
  • multi-day comparison views
  • project-level rollups
  • richer cache/fresh-input trend dashboard
  • model-tier suggestions
  • optional LiteLLM-backed ingestion for tools that need a proxy layer