tokmeter
Ultra-fast token counter for files and text - supports GPT models, Claude, and more!
tokmeter is the simplest, fastest, and most beautiful CLI tool for counting tokens in your files. Perfect for developers working with AI models, managing token budgets, and optimizing prompts.
Features
- Ultra-simple - No subcommands! Just `tokmeter file.txt` or `tokmeter "text"`
- Ultra-fast - Built on the fastest JavaScript tokenizer
- Accurate - Uses OpenAI's official tokenization (via gpt-tokenizer)
- Beautiful output - Colorful, clear, and informative displays
- Batch processing - Count tokens in files, directories, or entire projects
- Flexible - Support for multiple AI models (GPT-4o, Claude, etc.)
- Cost estimation - See estimated API costs for your tokens
- Customizable - Ignore patterns, file extensions, recursive scanning
- Multiple formats - Human-readable or JSON output
- Lightweight - Minimal dependencies, fast installation
Quick Start
# Install globally
npm install -g tokmeter
# Count tokens in files
tokmeter file1.txt file2.js
# Count tokens in directories
tokmeter ./src --recursive
# Count tokens in text
tokmeter "Hello, world!"
# Count tokens from stdin
echo "Hello, world!" | tokmeter
# See all options
tokmeter --help
Installation
# Global installation (recommended)
npm install -g tokmeter
# Local installation
npm install tokmeter
# Use without installation
npx tokmeter myfile.txt
Usage Examples
Count tokens in files
# Count tokens in a single file
tokmeter README.md
# Multiple files
tokmeter src/app.js src/utils.js
# Entire directory (non-recursive)
tokmeter ./src
# Recursive directory scanning
tokmeter ./src --recursive
# Specific file extensions only
tokmeter ./src --recursive --extensions ".js,.ts,.jsx"
Count tokens in text
# Direct text input
tokmeter "Hello, world! How are you today?"
# From stdin
echo "Your text here" | tokmeter
# From file content
cat myfile.txt | tokmeter
Advanced usage
# Use different AI model
tokmeter ./docs --model gpt-4
# JSON output for automation
tokmeter ./src --json > tokens.json
# Summary only (no file details)
tokmeter ./large-project --summary
# Custom ignore patterns
tokmeter ./ --ignore "*.min.js,dist,build"
# Verbose output
tokmeter ./src --verbose
# List supported models
tokmeter --models
Supported Models
# See all supported models
tokmeter --models
Currently supported:
- GPT models: gpt-4o, gpt-4o-mini, gpt-4, gpt-4-turbo, gpt-3.5-turbo
- Legacy models: text-davinci-003, text-davinci-002
- Claude models: claude-3-opus, claude-3-sonnet, claude-3-haiku
Sample Output
$ tokmeter README.md src/app.js src/utils.js
TOKMETER - File Token Counter
Using model: gpt-4o
──────────────────────────────────────────────────
File Details:
/home/user/project/README.md
Tokens: 1,247 | Size: 4.2 KB
/home/user/project/src/app.js
Tokens: 892 | Size: 3.1 KB
/home/user/project/src/utils.js
Tokens: 445 | Size: 1.8 KB
Summary:
Files processed: 3
Total tokens: 2,584
Estimated cost: $0.0065
CLI Reference
Usage
tokmeter [inputs...] [options]
inputs: Files, directories, or text to count tokens in (reads from stdin if empty)
Options
| Option | Description | Default |
| ------------------------- | ---------------------------- | --------------------------------------- |
| -m, --model <model> | Model for tokenization | gpt-4o |
| -r, --recursive | Scan directories recursively | false |
| -e, --extensions <exts> | File extensions to include | Auto-detect |
| -i, --ignore <patterns> | Patterns to ignore | node_modules,*.min.js,.git,dist,build |
| -s, --summary | Show summary only | false |
| -j, --json | Output as JSON | false |
| -v, --verbose | Verbose output | false |
| --models | List supported models | - |
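The flags above compose well for automation. Below is a minimal Node.js sketch (not part of tokmeter itself) that runs the CLI with --json and parses its output; the JSON field names are an assumption based on the sample output above, so check them against what your installed version actually prints.
// Automation sketch: run tokmeter with --json and read the totals.
// Assumes tokmeter is on PATH and that the JSON mirrors the summary
// shown in the sample output (summary.totalTokens is a guess).
const { execFile } = require('child_process')
execFile('tokmeter', ['./src', '--recursive', '--json'], (error, stdout) => {
  if (error) throw error
  const report = JSON.parse(stdout)
  console.log('Total tokens:', report.summary && report.summary.totalTokens)
})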
Programmatic Usage
You can also use tokmeter as a library in your Node.js projects:
const { countTokensInText, countTokensInFiles } = require('tokmeter')
// Count tokens in text
const result = countTokensInText('Hello, world!', 'gpt-4o')
console.log(`Tokens: ${result.tokens}`)
// Count tokens in files (async, so wrap the await in an async function)
async function main() {
  const fileResults = await countTokensInFiles(['./src'], {
    model: 'gpt-4o',
    recursive: true,
    extensions: ['.js', '.ts']
  })
  console.log(`Total tokens: ${fileResults.summary.totalTokens}`)
}
main()
API Reference
countTokensInText(text, model)
- text (string) - Text to count tokens in
- model (string) - Model to use (default: 'gpt-4o')
- Returns: object - Result with token count, characters, and cost estimate
countTokensInFiles(paths, options)
- paths (string[]) - Array of file or directory paths
- options (object) - Configuration options:
  - model (string) - Model to use
  - recursive (boolean) - Scan directories recursively
  - extensions (string[]) - File extensions to include
  - ignore (string[]) - Patterns to ignore
  - verbose (boolean) - Enable verbose output
- Returns: Promise<object> - Results with file details and summary
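To illustrate the text API, here is a hedged budget-check sketch. It relies only on the result.tokens field used in the example above; the 8,000-token limit and the fitsBudget helper are placeholders, not part of tokmeter.
// Budget-check sketch: refuse to send a prompt that exceeds a token limit.
const { countTokensInText } = require('tokmeter')
const TOKEN_LIMIT = 8000 // arbitrary placeholder limit
function fitsBudget (prompt, model = 'gpt-4o') {
  const { tokens } = countTokensInText(prompt, model)
  return { ok: tokens <= TOKEN_LIMIT, tokens }
}
const check = fitsBudget('Hello, world! How are you today?')
console.log(check.ok ? `Within budget (${check.tokens} tokens)` : `Too long (${check.tokens} tokens)`)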
Use Cases
- AI Development - Count tokens before sending to APIs
- Cost Management - Estimate API costs for large documents (see the sketch after this list)
- Content Analysis - Analyze token distribution in codebases
- Prompt Engineering - Optimize prompts within token limits
- Documentation - Track documentation size and complexity
- Code Review - Understand token impact of changes
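As a sketch of the cost-management use case, the snippet below totals tokens for a documentation folder with countTokensInFiles and multiplies by a price you supply yourself; the rate of $2.50 per million input tokens is only an example and is not provided by tokmeter.
// Cost-management sketch: estimate spend for a folder of documents.
// summary.totalTokens matches the programmatic example above; the price is user-supplied.
const { countTokensInFiles } = require('tokmeter')
const USD_PER_MILLION_TOKENS = 2.5 // example rate, adjust for your model
async function estimateCost (paths) {
  const { summary } = await countTokensInFiles(paths, { model: 'gpt-4o', recursive: true })
  return { tokens: summary.totalTokens, cost: (summary.totalTokens / 1e6) * USD_PER_MILLION_TOKENS }
}
estimateCost(['./docs']).then(({ tokens, cost }) => {
  console.log(`${tokens} tokens ~ $${cost.toFixed(4)}`)
})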
Why tokmeter?
| Feature | tokmeter | Others |
| --------------- | ---------- | ---------- |
| Speed | Ultra-fast | Slow |
| Output | Beautiful | Plain text |
| Models | 10+ models | Limited |
| Cost estimation | Built-in | Missing |
| File handling | Advanced | Basic |
| CLI experience | Modern | Basic |
File Type Support
tokmeter automatically detects and processes these file types:
Programming: .js, .ts, .jsx, .tsx, .py, .java, .cpp, .c, .h, .go, .rs, .php, .rb, .swift, .kt, .scala, .cs, .vb
Web: .html, .css, .json, .xml
Data: .yaml, .yml, .sql
Documentation: .md, .txt
Scripts: .sh, .bash
Others: .r, .m
Contributing
We love contributions! Here's how you can help:
- Report bugs - Found an issue? Open an issue
- Suggest features - Have an idea? Start a discussion
- Submit PRs - Fix bugs or add features
- Improve docs - Help make our documentation better
- Star us - Show your support!
License
MIT © Sandy Mount
Credits
- Built with gpt-tokenizer - The fastest JavaScript tokenizer
- Inspired by the need for simple, beautiful token counting tools
- Made with ❤️ for the AI developer community
If tokmeter helped you, please star the repo!
