token-limit v1.6.0

Monitor how many tokens your code and configs consume in AI tools. Set budgets and get alerts when limits are hit.

Token Limit


Token Limit helps you monitor how many tokens your AI context files consume. Set token budgets for your prompts, documentation, and configs, then get alerts when limits are exceeded.

Keep your AI costs predictable and avoid hitting context window limits that break your applications.

Why

AI context files are becoming a standard part of modern development workflows. Projects now commonly include .context/, CLAUDE.md, .clinerules, .cursorrules, and other AI instruction files directly in their repositories.

As these files grow in size and complexity, it becomes crucial to monitor their token consumption to avoid unexpected API costs and context window limitations.

Key Features

  • Multi-model support for OpenAI GPT and Anthropic Claude
  • CI integration to catch budget overruns in pull requests
  • Flexible configuration for different AI use cases
  • Real token costs instead of inaccurate file sizes
  • Cost budgets in dollars and cents, not just tokens
  • Up-to-date pricing from OpenRouter API instead of hardcoded values

How It Works

  1. Configure your token budgets in token-limit.config.ts, package.json, or other supported formats
  2. Analyze files using official tokenizers for each AI model (tiktoken for OpenAI models, Anthropic's tokenizer for Claude)
  3. Report which files exceed limits with detailed breakdowns
  4. Prevent costly overruns by failing CI builds when budgets are exceeded

Usage

Quick Start

  1. Install Token Limit:
npm install --save-dev token-limit
  2. Create a configuration file (e.g., token-limit.config.ts or .token-limit.json):
// token-limit.config.ts

import { defineConfig } from 'token-limit'

export default defineConfig([
  {
    name: 'AI Context',
    path: '.context/**/*.md',
    limit: '100k',
    model: 'gpt-4',
  },
  {
    name: 'Documentation',
    path: ['docs/**/*.md', 'docs/**/*.txt'],
    limit: '$0.05',
    model: 'claude-sonnet-4',
  },
])
  3. Add a script to your package.json:
{
  "scripts": {
    "token-limit": "token-limit"
  }
}
  4. Run the analysis:
npm run token-limit

Command Line Usage

You can also run Token Limit directly from the command line:

# Check specific files
npx token-limit README.md docs/guide.md

# Set limits and models
npx token-limit --limit 10k --model gpt-4 docs/**/*.md

# Set cost limits
npx token-limit --limit '$0.25' --model gpt-4 expensive-prompts/**/*.md

# Name your check
npx token-limit --name "API Docs" --limit 50k api-docs/**/*.md

# Multiple examples
npx token-limit .context/**/*.md
npx token-limit --limit 1000 claude.md
npx token-limit --limit '5c' --model gpt-3.5-turbo quick-prompts/*.txt
npx token-limit --json --hide-passed

Configuration

Token Limit supports multiple configuration formats, so you can define token limits, models, and file paths in whatever form best fits your project:

Configuration Formats

  • token-limit.config.{ts,js,mjs,cjs}
  • .token-limit.{ts,js,mjs,cjs,json}
  • .token-limit
  • package.json (token-limit field; see the sketch after this list)
  • Command line arguments
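
For example, the package.json form nests checks under a token-limit field. A minimal sketch, assuming the field accepts the same array of entries that defineConfig does:

{
  "token-limit": [
    {
      "name": "AI Context",
      "path": ".context/**/*.md",
      "limit": "100k",
      "model": "gpt-4"
    }
  ]
}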

Supported Models

OpenAI Models

  • gpt-5
  • gpt-4.1
  • gpt-4.1-mini
  • gpt-4.1-nano
  • gpt-4o
  • gpt-4o-mini
  • gpt-4-turbo
  • gpt-4
  • gpt-3.5-turbo
  • o1
  • o3-mini

Anthropic Models

  • claude-opus-4
  • claude-opus-4.1
  • claude-opus-4.5
  • claude-sonnet-4.5
  • claude-haiku-4.5
  • claude-sonnet-4
  • claude-3.7-sonnet
  • claude-3.5-sonnet
  • claude-3.5-haiku
  • claude-3-opus

Limit Formats

Token Limits

  • Numbers: 1000, 50000
  • Human-readable: "10k", "1.5M", "500K"

Cost Limits

  • Dollar amounts: "$0.05", "$1.50"
  • Cents: "5c", "10c"
  • Plain numbers: 0.05, 1.5 (interpreted as dollars)
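
Token and cost budgets can be mixed across checks in a single config. A minimal sketch using the formats above (check names and paths are illustrative; models come from the supported lists):

// token-limit.config.ts

import { defineConfig } from 'token-limit'

export default defineConfig([
  {
    // Human-readable token budget: fail once these files exceed 500K tokens
    name: 'Rules',
    path: '.cursorrules',
    limit: '500K',
    model: 'gpt-4o',
  },
  {
    // Cost budget in cents: fail once sending these files would cost over 10c
    name: 'Quick prompts',
    path: 'quick-prompts/*.txt',
    limit: '10c',
    model: 'claude-3.5-haiku',
  },
])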

CI Integration

GitHub Actions Integration

Add Token Limit to your CI pipeline:

# .github/workflows/token-limit.yml

name: Token Limit
on: [push, pull_request]

jobs:
  token-limit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci
      - run: npx token-limit
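
If any check exceeds its budget, the run fails (step 4 in How It Works), so the workflow above blocks pull requests that push a file past its token or cost budget.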

Why Token Limits Matter

Unlike traditional bundle size limits, token limits directly impact:

  • API Costs: More tokens = higher bills (GPT-4 is priced at roughly $0.03 per 1K input tokens; see the arithmetic after this list)
  • Response Quality: Exceeding a model's context window truncates input (e.g., GPT-4 Turbo's 128K-token window)
  • Performance: Larger contexts mean slower API responses
  • Reliability: Context overflow can cause API errors
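
To make the cost point concrete, a back-of-the-envelope calculation at the input rate quoted above:

// Rough input cost of a single request at GPT-4's ~$0.03 per 1K input tokens
const pricePer1kTokens = 0.03   // dollars per 1,000 input tokens
const contextTokens = 100_000   // e.g. a large set of AI context files
const dollarsPerRequest = (contextTokens / 1_000) * pricePer1kTokens
console.log(dollarsPerRequest)  // 3 — about $3.00 of input tokens on every request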

Token Limit helps you catch these issues before they reach production.

Contributing

See Contributing Guide.

License

MIT © Azat S.