# ctxstuff

v2.0.1

Pack codebases into LLM-ready context. Token counting, optimization, cost estimation. Built different.
## Why ctxstuff?

When working with LLMs like GPT-5, Claude, or Llama, you need to pack your codebase into a format they can understand. ctxstuff does this for you:

- 📦 **Smart packing** - Respects `.gitignore`, skips binaries, prioritizes important files
- 🔢 **Token counting** - Estimated counts for free, accurate counts with tiktoken (PRO)
- 💰 **Cost estimation** - Know API costs before sending (PRO)
- ✂️ **Context splitting** - Split large codebases into chunks (PRO)
- 🔧 **Optimization** - Fit context to any token limit (PRO)
- 👁️ **Watch mode** - Auto-repack on file changes (PRO)
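The FREE tier's token counts are estimates. A common rule of thumb (an assumption here, not ctxstuff's documented method) is roughly four characters per token for English text and code:

```javascript
// Rough token estimate: ~4 characters per token.
// Illustrative heuristic only; ctxstuff FREE may use a different estimator,
// and PRO uses tiktoken for exact counts.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('const x = 42;')); // 13 chars ≈ 4 tokens
```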
## Installation

```bash
npm install -g ctxstuff
```

## Quick Start
```bash
# Pack current directory
ctxstuff pack .

# Pack and save to file
ctxstuff pack ./my-project -o context.md

# Copy to clipboard
ctxstuff pack . -c

# Count tokens
ctxstuff count ./src

# Compare models
ctxstuff compare ./src
```

## Commands
### FREE Commands
| Command | Description |
|---------|-------------|
| pack [dir] | Pack directory into LLM context |
| count [target] | Count tokens in file/directory |
| compare [target] | Compare token counts across models |
| license | Manage PRO license |
### PRO Commands ⚡
| Command | Description |
|---------|-------------|
| optimize [dir] | Optimize context to fit token limit |
| split [dir] | Split context into manageable chunks |
| cost [target] | Estimate API costs |
| watch [dir] | Watch and repack on changes |
| profile [action] | Manage custom model profiles |
## Usage Examples

### Pack a Project
```bash
# Basic pack
ctxstuff pack ./my-project

# Only JavaScript/TypeScript files
ctxstuff pack ./src -e js,ts,jsx,tsx

# Ignore test files
ctxstuff pack . -i test,spec,mock

# Output as XML
ctxstuff pack . -f xml -o context.xml

# Show detailed stats
ctxstuff pack . -s
```

### Count Tokens
```bash
# Count tokens in a file
ctxstuff count ./src/index.js

# Count directory with breakdown
ctxstuff count ./src -b

# List all supported models
ctxstuff count --models
```

### Compare Models
```bash
# Compare against all models
ctxstuff compare ./src

# Compare specific models
ctxstuff compare ./src -m gpt-5-turbo,claude-5,claude-4.5-haiku
```

### PRO: Optimize Context
```bash
# Fit to 50K tokens
ctxstuff optimize ./src --tokens 50000

# Target specific model's context
ctxstuff optimize . --model gpt-5

# Keep comments
ctxstuff optimize . --keep-comments
```

### PRO: Split Large Codebases
```bash
# Auto-split by tokens
ctxstuff split ./large-project

# Split by directory
ctxstuff split . --strategy by_directory

# Save chunks to files
ctxstuff split . -o ./chunks

# Get split suggestions
ctxstuff split . --suggest
```

### PRO: Cost Estimation
```bash
# Estimate cost for a model
ctxstuff cost ./src --model gpt-5-turbo

# Compare costs across models
ctxstuff cost ./src --compare

# Estimate with expected output
ctxstuff cost ./src --output 2000
```

### PRO: Watch Mode
```bash
# Watch and auto-repack
ctxstuff watch ./src -o context.md

# Copy to clipboard on change
ctxstuff watch ./src -c
```

## Output Formats
- `markdown` (default) - GitHub-flavored markdown with syntax highlighting
- `xml` - XML with CDATA sections for content
- `plain` - Simple text format
- `json` - Structured JSON output
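As a rough sketch of what the markdown format might contain (illustrative only; ctxstuff's actual layout may differ, and the `path`/`lang`/`content` field names are assumed), each packed file typically becomes a path heading plus a fenced, syntax-highlighted block:

```javascript
// Hypothetical shape of markdown packing: one heading per file, fenced content.
// Not ctxstuff's exact output format.
function toMarkdown(files) {
  return files
    .map((f) => `## ${f.path}\n\n\`\`\`${f.lang}\n${f.content}\n\`\`\``)
    .join('\n\n');
}

const md = toMarkdown([{ path: 'src/index.js', lang: 'js', content: 'const x = 1;' }]);
console.log(md);
```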
## Supported Models

| Model | Context | Input $/1M | Output $/1M |
|-------|---------|------------|-------------|
| gpt-5 | 256K | $20.00 | $60.00 |
| gpt-5-turbo | 128K | $5.00 | $15.00 |
| gpt-4.5-turbo | 64K | $1.00 | $3.00 |
| o3 | 200K | $10.00 | $40.00 |
| claude-5 | 500K | $20.00 | $80.00 |
| claude-4.5-opus | 300K | $15.00 | $75.00 |
| claude-4.5-sonnet | 300K | $3.00 | $15.00 |
| claude-4.5-haiku | 300K | $0.25 | $1.25 |
| gemini-2.0-pro | 2M | $2.50 | $10.00 |
| gemini-2.0-flash | 1M | $0.10 | $0.40 |
| llama-4 | 128K | $0.50 | $1.50 |
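Cost estimation from the table above is simple arithmetic: tokens divided by one million, times the per-million rate, summed over input and output. A minimal sketch, using the claude-4.5-haiku rates as an example:

```javascript
// Cost = (tokens / 1e6) * price-per-1M, input and output summed.
// Rates from the claude-4.5-haiku row: $0.25 input, $1.25 output per 1M tokens.
function apiCost(inputTokens, outputTokens, inputPerM, outputPerM) {
  return (inputTokens / 1e6) * inputPerM + (outputTokens / 1e6) * outputPerM;
}

// 100K input + 2K output tokens:
console.log(apiCost(100_000, 2_000, 0.25, 1.25).toFixed(4)); // "0.0275"
```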
## FREE vs PRO

| Feature | FREE | PRO |
|---------|------|-----|
| pack, count, compare | ✓ | ✓ |
| Operations per day | 10 | ∞ |
| Files per pack | 20 | ∞ |
| Max size | 500KB | ∞ |
| Token counting | estimate | accurate (tiktoken) |
| optimize command | ✗ | ✓ |
| split command | ✗ | ✓ |
| cost command | ✗ | ✓ |
| watch command | ✗ | ✓ |
| profile command | ✗ | ✓ |
| .ctxignore support | ✗ | ✓ |
## Get PRO

$14.99 one-time payment. No subscription.

```bash
# Purchase at https://pnkd.dev/ctxstuff#pro

# Activate
ctxstuff activate CTX-XXXX-XXXX-XXXX-XXXX
```

## Programmatic API
```javascript
const { pack, count, format, cost } = require('ctxstuff');

// Wrap in an async function: top-level await is not available in CommonJS.
(async () => {
  // Pack a directory
  const result = await pack('./my-project', {
    extensions: ['js', 'ts'],
    ignore: ['test'],
  });

  // Count tokens
  const tokens = count(result.files, 'gpt-5-turbo');
  console.log(`Total tokens: ${tokens.totalTokens}`);

  // Format output
  const markdown = format(result, 'markdown');

  // Calculate cost
  const pricing = cost(tokens.totalTokens, 1000, 'gpt-5-turbo');
  console.log(`Estimated cost: $${pricing.totalCost.toFixed(4)}`);
})();
```

## Configuration
### .ctxignore (PRO)

Create a `.ctxignore` file to exclude paths:

```
# Comments start with #
*.test.js
*.spec.ts
__mocks__
fixtures/
```

### Custom Model Profiles (PRO)
```bash
# Add custom model
ctxstuff profile add --name my-model --context 32000 --input 5.00 --output 15.00

# List profiles
ctxstuff profile list

# Remove profile
ctxstuff profile remove --name my-model
```

## Tips
- **Start with `pack`** - See what your codebase looks like to an LLM
- **Use `compare`** - Find the cheapest model that fits your context
- **Check token counts** - Don't exceed model limits
- **Use the stats flag** (`-s`) - Identify large files that might need trimming
## Links
- Homepage: https://pnkd.dev/ctxstuff
- Issues: https://github.com/pnkd-dev/ctxstuff/issues
- PRO: https://pnkd.dev/ctxstuff#pro
## More PRO Tools from pnkd.dev
- llmcache PRO - Cache LLM responses, cut costs by 90% ($18.99)
- aiproxy PRO - One API for GPT, Claude, Llama & more ($18.99)
pnkd.dev - built different.
## License
MIT © pnkd.dev
