

🔒 tokmeter

Ultra-fast token counter for files and text - supports GPT models, Claude, and more!


tokmeter is the simplest, fastest, and most beautiful CLI tool for counting tokens in your files. Perfect for developers working with AI models, managing token budgets, and optimizing prompts.

✨ Features

  • ⚡ Ultra-simple - No subcommands! Just tokmeter file.txt or tokmeter "text"
  • 🚀 Ultra-fast - Built on the fastest JavaScript tokenizer
  • 🎯 Accurate - Uses OpenAI's official tokenization (via gpt-tokenizer)
  • 🌈 Beautiful output - Colorful, clear, and informative displays
  • 📁 Batch processing - Count tokens in files, directories, or entire projects
  • 🎛️ Flexible - Support for multiple AI models (GPT-4o, Claude, etc.)
  • 💰 Cost estimation - See estimated API costs for your tokens
  • 🔧 Customizable - Ignore patterns, file extensions, recursive scanning
  • 📊 Multiple formats - Human-readable or JSON output
  • 🪶 Lightweight - Minimal dependencies, fast installation

🚀 Quick Start

# Install globally
npm install -g tokmeter

# Count tokens in files
tokmeter file1.txt file2.js

# Count tokens in directories
tokmeter ./src --recursive

# Count tokens in text
tokmeter "Hello, world!"

# Count tokens from stdin
echo "Hello, world!" | tokmeter

# See all options
tokmeter --help

📦 Installation

# Global installation (recommended)
npm install -g tokmeter

# Local installation
npm install tokmeter

# Use without installation
npx tokmeter myfile.txt

🎯 Usage Examples

Count tokens in files

# Count tokens in a single file
tokmeter README.md

# Multiple files
tokmeter src/app.js src/utils.js

# Entire directory (non-recursive)
tokmeter ./src

# Recursive directory scanning
tokmeter ./src --recursive

# Specific file extensions only
tokmeter ./src --recursive --extensions ".js,.ts,.jsx"

Count tokens in text

# Direct text input
tokmeter "Hello, world! How are you today?"

# From stdin
echo "Your text here" | tokmeter

# From file content
cat myfile.txt | tokmeter

Advanced usage

# Use different AI model
tokmeter ./docs --model gpt-4

# JSON output for automation
tokmeter ./src --json > tokens.json

# Summary only (no file details)
tokmeter ./large-project --summary

# Custom ignore patterns
tokmeter ./ --ignore "*.min.js,dist,build"

# Verbose output
tokmeter ./src --verbose

# List supported models
tokmeter --models
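
The --json flag makes tokmeter easy to wire into scripts or CI. Below is a minimal sketch of a token-budget gate that consumes such output; the schema (a summary.totalTokens field) is an assumption mirroring the programmatic API's result shape, so inspect your actual tokmeter --json output before relying on it:

```javascript
// Sketch: fail a CI step when a project's token count exceeds a budget.
// The JSON schema (summary.totalTokens) is assumed, not confirmed -
// check it against your own `tokmeter --json` output.
const raw = '{"summary":{"filesProcessed":3,"totalTokens":2584}}' // e.g. contents of tokens.json
const report = JSON.parse(raw)

const limit = 100000
const withinBudget = report.summary.totalTokens <= limit
console.log(withinBudget ? 'within budget' : 'over budget')
```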

🤖 Supported Models

# See all supported models
tokmeter --models

Currently supported:

  • GPT models: gpt-4o, gpt-4o-mini, gpt-4, gpt-4-turbo, gpt-3.5-turbo
  • Legacy models: text-davinci-003, text-davinci-002
  • Claude models: claude-3-opus, claude-3-sonnet, claude-3-haiku

📊 Sample Output

$ tokmeter README.md src/app.js src/utils.js

🔒 TOKMETER - File Token Counter
Using model: gpt-4o
──────────────────────────────────────────────────

📄 File Details:
  /home/user/project/README.md
    Tokens: 1,247 | Size: 4.2 KB
  /home/user/project/src/app.js
    Tokens: 892 | Size: 3.1 KB
  /home/user/project/src/utils.js
    Tokens: 445 | Size: 1.8 KB

📊 Summary:
  Files processed: 3
  Total tokens: 2,584
  Estimated cost: $0.0065
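
As a sanity check, the estimated cost above is consistent with gpt-4o input pricing of $2.50 per million tokens (an assumed rate; verify against current OpenAI pricing):

```javascript
// Reproduce the summary's "Estimated cost" line. The $2.50/1M-token
// input rate for gpt-4o is an assumption - check current pricing.
const totalTokens = 2584
const pricePerMillionUsd = 2.5
const cost = (totalTokens / 1_000_000) * pricePerMillionUsd
console.log(`$${cost.toFixed(4)}`) // $0.0065
```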

🛠️ CLI Reference

Usage

tokmeter [inputs...] [options]

inputs: Files, directories, or text to count tokens in (reads from stdin if empty)

Options

| Option | Description | Default |
| ------------------------- | ---------------------------- | --------------------------------------- |
| -m, --model <model> | Model for tokenization | gpt-4o |
| -r, --recursive | Scan directories recursively | false |
| -e, --extensions <exts> | File extensions to include | Auto-detect |
| -i, --ignore <patterns> | Patterns to ignore | node_modules,*.min.js,.git,dist,build |
| -s, --summary | Show summary only | false |
| -j, --json | Output as JSON | false |
| -v, --verbose | Verbose output | false |
| --models | List supported models | - |

🔧 Programmatic Usage

You can also use tokmeter as a library in your Node.js projects:

const { countTokensInText, countTokensInFiles } = require('tokmeter')

// Count tokens in text (synchronous)
const result = countTokensInText('Hello, world!', 'gpt-4o')
console.log(`Tokens: ${result.tokens}`)

// Count tokens in files - countTokensInFiles returns a Promise, so
// await it inside an async function (top-level await is not available
// in CommonJS modules)
async function main() {
  const fileResults = await countTokensInFiles(['./src'], {
    model: 'gpt-4o',
    recursive: true,
    extensions: ['.js', '.ts']
  })
  console.log(`Total tokens: ${fileResults.summary.totalTokens}`)
}

main()

API Reference

countTokensInText(text, model)

  • text string - Text to count tokens in
  • model string - Model to use (default: 'gpt-4o')
  • Returns object - Result with token count, characters, and cost estimate

countTokensInFiles(paths, options)

  • paths string[] - Array of file or directory paths
  • options object - Configuration options
    • model string - Model to use
    • recursive boolean - Scan directories recursively
    • extensions string[] - File extensions to include
    • ignore string[] - Patterns to ignore
    • verbose boolean - Enable verbose output
  • Returns Promise<object> - Results with file details and summary
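
Putting the two signatures together, a result from countTokensInFiles could look roughly like this; property names other than summary are illustrative assumptions inferred from the fields documented above:

```javascript
// Illustrative result shape only - field names besides `summary` are
// assumptions based on the API reference, not confirmed output.
const exampleResult = {
  files: [
    { path: 'README.md', tokens: 1247 },
    { path: 'src/app.js', tokens: 892 }
  ],
  summary: { filesProcessed: 2, totalTokens: 2139 }
}

// The summary should agree with the per-file details:
const computed = exampleResult.files.reduce((sum, f) => sum + f.tokens, 0)
console.log(computed === exampleResult.summary.totalTokens) // true
```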

💡 Use Cases

  • AI Development - Count tokens before sending to APIs
  • Cost Management - Estimate API costs for large documents
  • Content Analysis - Analyze token distribution in codebases
  • Prompt Engineering - Optimize prompts within token limits
  • Documentation - Track documentation size and complexity
  • Code Review - Understand token impact of changes
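
For the prompt-engineering case, a guard like the one below keeps a prompt inside a model's token limit. The local countTokensInText here is a crude stand-in (the common ~4-characters-per-token rule of thumb) so the sketch runs without tokmeter installed; in a real project, use require('tokmeter') and its export of the same name:

```javascript
// Prompt-budget guard sketch. countTokensInText below is a crude
// stand-in (~4 chars/token); swap in require('tokmeter') for real use.
function countTokensInText(text, model = 'gpt-4o') {
  return { tokens: Math.ceil(text.length / 4) }
}

function fitsBudget(prompt, maxTokens) {
  return countTokensInText(prompt).tokens <= maxTokens
}

console.log(fitsBudget('Hello, world! How are you today?', 128000)) // true
```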

🎨 Why tokmeter?

| Feature | tokmeter | Others |
| --------------- | ------------- | ------------- |
| Speed | ⚡ Ultra-fast | 🐌 Slow |
| Output | 🌈 Beautiful | 📝 Plain text |
| Models | 🤖 10+ models | 🤖 Limited |
| Cost estimation | 💰 Built-in | ❌ Missing |
| File handling | 📁 Advanced | 📄 Basic |
| CLI experience | ✨ Modern | 🔧 Basic |

πŸ” File Type Support

tokmeter automatically detects and processes these file types:

Programming: .js, .ts, .jsx, .tsx, .py, .java, .cpp, .c, .h, .go, .rs, .php, .rb, .swift, .kt, .scala, .cs, .vb

Web: .html, .css, .json, .xml

Data: .yaml, .yml, .sql

Documentation: .md, .txt

Scripts: .sh, .bash

Others: .r, .m

🤝 Contributing

We love contributions! Here's how you can help:

  1. πŸ› Report bugs - Found an issue? Open an issue
  2. πŸ’‘ Suggest features - Have an idea? Start a discussion
  3. πŸ”§ Submit PRs - Fix bugs or add features
  4. πŸ“– Improve docs - Help make our documentation better
  5. ⭐ Star us - Show your support!

📄 License

MIT Β© Sandy Mount

πŸ™ Credits

  • Built with gpt-tokenizer - The fastest JavaScript tokenizer
  • Inspired by the need for simple, beautiful token counting tools
  • Made with ❤️ for the AI developer community

⭐ If tokmeter helped you, please star the repo!
