ctxstuff

Pack codebases into LLM-ready context. Token counting, optimization, cost estimation. Built different.

v2.0.1 · MIT License

Why ctxstuff?

When working with LLMs like GPT-5, Claude, or Llama, you need to pack your codebase into a format they can understand. ctxstuff does this perfectly:

  • 📦 Smart packing - Respects .gitignore, skips binaries, prioritizes important files
  • 🔢 Token counting - Estimate or accurate counts (PRO with tiktoken)
  • 💰 Cost estimation - Know API costs before sending (PRO)
  • ✂️ Context splitting - Split large codebases into chunks (PRO)
  • 🔧 Optimization - Fit context to any token limit (PRO)
  • 👁️ Watch mode - Auto-repack on file changes (PRO)
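To get a feel for what estimate-mode token counting involves, here is a minimal sketch using the common ~4-characters-per-token heuristic. This is an assumption about how a FREE-tier estimate could work, not ctxstuff's documented internals; PRO uses tiktoken for exact counts.

```javascript
// Rough token estimate: BPE tokenizers average roughly 4 characters per
// token for English text and code. A heuristic, not an exact count.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('const x = 42;')); // 13 chars -> 4 tokens
```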

Installation

npm install -g ctxstuff

Quick Start

# Pack current directory
ctxstuff pack .

# Pack and save to file
ctxstuff pack ./my-project -o context.md

# Copy to clipboard
ctxstuff pack . -c

# Count tokens
ctxstuff count ./src

# Compare models
ctxstuff compare ./src

Commands

FREE Commands

| Command | Description |
|---------|-------------|
| pack [dir] | Pack directory into LLM context |
| count [target] | Count tokens in file/directory |
| compare [target] | Compare token counts across models |
| license | Manage PRO license |

PRO Commands ⚡

| Command | Description |
|---------|-------------|
| optimize [dir] | Optimize context to fit token limit |
| split [dir] | Split context into manageable chunks |
| cost [target] | Estimate API costs |
| watch [dir] | Watch and repack on changes |
| profile [action] | Manage custom model profiles |

Usage Examples

Pack a Project

# Basic pack
ctxstuff pack ./my-project

# Only JavaScript/TypeScript files
ctxstuff pack ./src -e js,ts,jsx,tsx

# Ignore test files
ctxstuff pack . -i test,spec,mock

# Output as XML
ctxstuff pack . -f xml -o context.xml

# Show detailed stats
ctxstuff pack . -s

Count Tokens

# Count tokens in a file
ctxstuff count ./src/index.js

# Count directory with breakdown
ctxstuff count ./src -b

# List all supported models
ctxstuff count --models

Compare Models

# Compare against all models
ctxstuff compare ./src

# Compare specific models
ctxstuff compare ./src -m gpt-5-turbo,claude-5,claude-4.5-haiku
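Conceptually, comparing models comes down to checking a token count against each model's context window. A sketch using context sizes from the Supported Models table below; the `fitsInto` helper is illustrative, not part of ctxstuff's API:

```javascript
// Context windows (tokens) for a few models from the Supported Models table.
const CONTEXT = {
  'gpt-5': 256_000,
  'gpt-5-turbo': 128_000,
  'claude-5': 500_000,
  'claude-4.5-haiku': 300_000,
  'gemini-2.0-flash': 1_000_000,
};

// List the models whose context window can hold a packed context.
function fitsInto(tokenCount) {
  return Object.entries(CONTEXT)
    .filter(([, limit]) => tokenCount <= limit)
    .map(([model]) => model);
}

console.log(fitsInto(200_000)); // everything here except gpt-5-turbo
```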

PRO: Optimize Context

# Fit to 50K tokens
ctxstuff optimize ./src --tokens 50000

# Target specific model's context
ctxstuff optimize . --model gpt-5

# Keep comments
ctxstuff optimize . --keep-comments
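One simple way an optimizer can fit a file set into a budget is a greedy pass. The sketch below keeps the smallest files until the budget is exhausted; ctxstuff's actual strategy (which also prioritizes important files) is not documented here, so treat this as a conceptual illustration only.

```javascript
// Greedy budget fit: take files smallest-first until adding the next one
// would exceed maxTokens. A real optimizer would also weigh file importance.
function fitToBudget(files, maxTokens) {
  const sorted = [...files].sort((a, b) => a.tokens - b.tokens);
  const kept = [];
  let total = 0;
  for (const file of sorted) {
    if (total + file.tokens > maxTokens) break;
    kept.push(file);
    total += file.tokens;
  }
  return { kept, total };
}

const files = [
  { path: 'src/index.js', tokens: 30_000 },
  { path: 'src/utils.js', tokens: 15_000 },
  { path: 'README.md', tokens: 10_000 },
];
console.log(fitToBudget(files, 50_000).total); // 25000 (two smallest files fit)
```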

PRO: Split Large Codebases

# Auto-split by tokens
ctxstuff split ./large-project

# Split by directory
ctxstuff split . --strategy by_directory

# Save chunks to files
ctxstuff split . -o ./chunks

# Get split suggestions
ctxstuff split . --suggest
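The token-based strategy can be pictured as first-fit bin packing: walk the files in order and start a new chunk whenever the next file would push the current chunk over the cap. A sketch, assuming this simple strategy; the by_directory strategy and ctxstuff's actual chunking may differ:

```javascript
// Partition files into chunks, each at or under maxPerChunk tokens
// (first-fit: a new chunk starts when the next file would overflow).
function splitByTokens(files, maxPerChunk) {
  const chunks = [];
  let current = [];
  let total = 0;
  for (const file of files) {
    if (total + file.tokens > maxPerChunk && current.length > 0) {
      chunks.push(current);
      current = [];
      total = 0;
    }
    current.push(file);
    total += file.tokens;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}

const files = [
  { path: 'a.js', tokens: 40_000 },
  { path: 'b.js', tokens: 35_000 },
  { path: 'c.js', tokens: 20_000 },
];
console.log(splitByTokens(files, 60_000).length); // 2 chunks: [a], [b, c]
```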

PRO: Cost Estimation

# Estimate cost for a model
ctxstuff cost ./src --model gpt-5-turbo

# Compare costs across models
ctxstuff cost ./src --compare

# Estimate with expected output
ctxstuff cost ./src --output 2000

PRO: Watch Mode

# Watch and auto-repack
ctxstuff watch ./src -o context.md

# Copy to clipboard on change
ctxstuff watch ./src -c

Output Formats

  • markdown (default) - GitHub-flavored markdown with syntax highlighting
  • xml - XML with CDATA sections for content
  • plain - Simple text format
  • json - Structured JSON output
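To illustrate why the xml format uses CDATA sections, here is a minimal sketch of that style of output. The `<context>`/`<file>` element names are assumptions for illustration, not ctxstuff's actual schema:

```javascript
// Wrap each file's content in CDATA so code needs no XML entity escaping.
// Element names here are illustrative only.
function toXml(files) {
  const items = files
    .map((f) => `  <file path="${f.path}"><![CDATA[${f.content}]]></file>`)
    .join('\n');
  return `<context>\n${items}\n</context>`;
}

console.log(toXml([{ path: 'src/add.js', content: 'const add = (a, b) => a + b;' }]));
```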

Supported Models

| Model | Context | Input $/1M | Output $/1M |
|-------|---------|------------|-------------|
| gpt-5 | 256K | $20.00 | $60.00 |
| gpt-5-turbo | 128K | $5.00 | $15.00 |
| gpt-4.5-turbo | 64K | $1.00 | $3.00 |
| o3 | 200K | $10.00 | $40.00 |
| claude-5 | 500K | $20.00 | $80.00 |
| claude-4.5-opus | 300K | $15.00 | $75.00 |
| claude-4.5-sonnet | 300K | $3.00 | $15.00 |
| claude-4.5-haiku | 300K | $0.25 | $1.25 |
| gemini-2.0-pro | 2M | $2.50 | $10.00 |
| gemini-2.0-flash | 1M | $0.10 | $0.40 |
| llama-4 | 128K | $0.50 | $1.50 |
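The arithmetic behind a cost estimate is simple: tokens divided by one million, times the per-million price, summed for input and output. For example, 100K input tokens plus 2K expected output on gpt-5-turbo ($5.00 in / $15.00 out per 1M, per the table above):

```javascript
// cost = inputTokens/1M * inputPricePerM + outputTokens/1M * outputPricePerM
function estimateCost(inputTokens, outputTokens, inPerM, outPerM) {
  return (inputTokens / 1e6) * inPerM + (outputTokens / 1e6) * outPerM;
}

// 100K input + 2K output on gpt-5-turbo: $0.50 + $0.03
console.log(estimateCost(100_000, 2_000, 5.0, 15.0).toFixed(2)); // "0.53"
```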

FREE vs PRO

| Feature | FREE | PRO |
|---------|------|-----|
| pack, count, compare | ✓ | ✓ |
| Operations per day | 10 | ∞ |
| Files per pack | 20 | ∞ |
| Max size | 500KB | ∞ |
| Token counting | estimate | accurate (tiktoken) |
| optimize command | ✗ | ✓ |
| split command | ✗ | ✓ |
| cost command | ✗ | ✓ |
| watch command | ✗ | ✓ |
| profile command | ✗ | ✓ |
| .ctxignore support | ✗ | ✓ |

Get PRO

$14.99 one-time payment. No subscription.

# Purchase at
https://pnkd.dev/ctxstuff#pro

# Activate
ctxstuff activate CTX-XXXX-XXXX-XXXX-XXXX

Programmatic API

const { pack, count, format, cost } = require('ctxstuff');

// Pack a directory
const result = await pack('./my-project', {
  extensions: ['js', 'ts'],
  ignore: ['test'],
});

// Count tokens
const tokens = count(result.files, 'gpt-5-turbo');
console.log(`Total tokens: ${tokens.totalTokens}`);

// Format output
const markdown = format(result, 'markdown');

// Calculate cost
const pricing = cost(tokens.totalTokens, 1000, 'gpt-5-turbo');
console.log(`Estimated cost: $${pricing.totalCost.toFixed(4)}`);

Configuration

.ctxignore (PRO)

Create a .ctxignore file to exclude paths:

# Comments start with #
*.test.js
*.spec.ts
__mocks__
fixtures/
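A minimal sketch of how patterns like these can be matched: `*` matches any run of characters and a trailing `/` matches a directory prefix. ctxstuff's real matcher presumably supports fuller gitignore semantics (negation, `**`, anchoring), which this deliberately omits.

```javascript
// Escape regex metacharacters so only '*' acts as a wildcard.
const escapeRe = (s) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

// True if path matches any pattern: trailing '/' = directory prefix,
// otherwise match the full path or any individual path segment.
function ignored(path, patterns) {
  return patterns.some((p) => {
    if (p.endsWith('/')) return path.startsWith(p);
    const re = new RegExp('^' + p.split('*').map(escapeRe).join('.*') + '$');
    return re.test(path) || path.split('/').some((seg) => re.test(seg));
  });
}

console.log(ignored('src/app.test.js', ['*.test.js'])); // true
console.log(ignored('src/index.js', ['*.test.js', 'fixtures/'])); // false
```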

Custom Model Profiles (PRO)

# Add custom model
ctxstuff profile add --name my-model --context 32000 --input 5.00 --output 15.00

# List profiles
ctxstuff profile list

# Remove profile
ctxstuff profile remove --name my-model

Tips

  1. Start with pack - See what your codebase looks like to an LLM
  2. Use compare - Find the cheapest model that fits your context
  3. Check token counts - Don't exceed model limits
  4. Use the -s stats flag - Identify large files that might need trimming

Links

  • Homepage: https://pnkd.dev/ctxstuff
  • Issues: https://github.com/pnkd-dev/ctxstuff/issues
  • PRO: https://pnkd.dev/ctxstuff#pro

More PRO Tools from pnkd.dev

  • llmcache PRO - Cache LLM responses, cut costs by 90% ($18.99)
  • aiproxy PRO - One API for GPT, Claude, Llama & more ($18.99)

pnkd.dev - built different.

License

MIT © pnkd.dev