# tokenu
v1.0.0
A Unix `du`-like command-line tool that counts token usage per file and directory
## Quick Start
No install needed; run it directly with npx:

```sh
npx tokenu .
```

Or install it globally:

```sh
npm install -g tokenu
```

## Usage

```sh
tokenu [options] [path...]
```

## Options
| Flag | Description |
|---|---|
| -s, --summarize | Display only a total for each argument |
| -h, --human-readable | Print token counts in human-readable format (1K, 1M) |
| -a, --all | Show counts for all files, not just directories |
| -d, --max-depth <N> | Print totals only for directories N levels deep |
| -c, --total | Produce a grand total |
| --json | Output as JSON (for AI agent consumption) |
| --encoding <enc> | Tokenizer encoding (default: o200k_base) |
| --model <name> | Model name (e.g. gpt-4o, gpt-3.5-turbo) |
| --exclude <pat> | Glob pattern to exclude (repeatable) |
Supported encodings: `o200k_base`, `o200k_harmony`, `cl100k_base`, `p50k_base`, `p50k_edit`, `r50k_base`
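The `--exclude` flag is repeatable and can be combined with the other options. For instance, a minimal sketch of summarizing a project while skipping generated content (the glob patterns here are only illustrations; adjust them for your project):

```sh
# Human-readable summary of the current directory, excluding
# dependencies and build output (example patterns):
tokenu -hs --exclude "node_modules/**" --exclude "dist/**" .
```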
## Examples
Recursive token counts per directory:
```sh
$ tokenu -a src/
1127 src/bin/cli.ts
1127 src/bin
783 src/formatter.ts
256 src/main.ts
324 src/tokenizer.ts
165 src/types.ts
735 src/walker.ts
3390 src
```

Human-readable summary:
```sh
$ tokenu -hs src/
3.4K src
```

Depth-limited with grand total:
```sh
$ tokenu -d 1 --total myproject/
4143 myproject/tests
2019 myproject/docs
2263 myproject/src
12241 myproject/dist
20666 myproject
20666 total
```

JSON output (useful for piping to other tools or AI agents):
```sh
$ tokenu -a --json src/
```

Use a specific encoding for older models:

```sh
$ tokenu --encoding cl100k_base .
```

## Contributing
Please consult CONTRIBUTING for guidelines on contributing to this project.
## Author

tokenu © Liran Tal, released under the Apache-2.0 License.
