@binux/llm-cost
v1.0.0
Beautiful CLI to count tokens and estimate costs for any LLM model (Claude, GPT, Gemini, Mistral, and 300+ more). Auto-updates prices from OpenRouter.
llm-cost
Count tokens and estimate costs for any LLM model – right in your terminal.
300+ models · Auto-updated prices · Zero dependencies · Beautiful CLI
Features
- Accurate token counting – BPE approximation, ~95% accuracy
- Real-time pricing – auto-fetches the latest prices from OpenRouter on every run
- Live benchmarks – pulls daily Elo ratings directly from LMSYS Chatbot Arena
- Model recommendations – smart matching by category (code, chat, agents) and cost efficiency
- Beautiful interactive UI – colorized TUI with a budget calculator and filtering
- Zero dependencies – pure Node.js, no extra packages to install
- Pipe-friendly – works with `cat`, `echo`, and stdin
- Library API – use it programmatically in your own projects
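Because the counter is a BPE approximation rather than an exact per-model tokenizer, a rough mental model is the common four-characters-per-token rule of thumb for English text. The helper below is purely illustrative – it is not llm-cost's actual algorithm:

```javascript
// Rough heuristic only: ~4 characters per token for typical English text.
// llm-cost's real counter is a BPE approximation and will give different
// (more accurate) numbers; this just shows the ballpark.
function approxTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(approxTokens('Hello, world!')); // 13 chars -> 4
```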
Supported Providers
| Provider | Models |
|----------|--------|
| Anthropic | Claude Opus 4.5, Sonnet 4.5, Haiku 4.5, 3.7 Sonnet, 3.5 Sonnet/Haiku, 3 Opus |
| OpenAI | GPT-4.1, GPT-4.1 Mini/Nano, GPT-4o, GPT-4o Mini, o3, o4-mini |
| Google | Gemini 2.5 Pro/Flash/Flash-Lite, 2.0 Flash |
| Meta | Llama 3.3 70B, 3.1 405B, 3.1 8B |
| Mistral | Mistral Large, Small, Codestral |
| xAI | Grok 3, Grok 3 Mini |
| DeepSeek | DeepSeek R1, Chat V3 |
| Cohere | Command R+, Command R |
| +300 more | auto-updated from OpenRouter |
Install

```bash
npm install -g llm-cost
```

Or use without installing:

```bash
npx llm-cost
```

Requires Node.js 18+.
Usage
Interactive Mode (default)
Just run `llm-cost` with no arguments – perfect for beginners:

```bash
llm-cost
```

It will:
- Show a welcome screen with model count
- Let you type/paste your prompt
- Count tokens in real time
- Ask for expected output length
- Show a filterable comparison table
Quick Mode
```bash
# Count tokens and show costs for top models
llm-cost "Explain quantum computing in simple terms"

# Specific model
llm-cost "Hello world" -m claude-haiku

# Full comparison across ALL models
llm-cost "Write a 1000-word essay about AI" --compare

# Set expected output tokens
llm-cost "Translate this article" -o 2000
```

Pipe Mode
```bash
# From a file
cat my-prompt.txt | llm-cost

# From echo
echo "Hello, how are you?" | llm-cost

# From clipboard (macOS)
pbpaste | llm-cost
```

List Models
```bash
# All models with prices
llm-cost --list

# Filter by provider
llm-cost --list anthropic
llm-cost --list openai
llm-cost --list google
```

Programmatic API
```js
const { countTokens, getModels, estimateCost, estimateAll } = require('llm-cost');

// Count tokens
const tokens = countTokens('Hello, world!');
console.log(tokens); // ~4

// Get all models (auto-fetches latest prices)
const models = await getModels();

// Estimate cost for one model
const cost = estimateCost({
  inputTokens: 1000,
  outputTokens: 500,
  model: models[0],
});
console.log(cost); // { inputCost, outputCost, totalCost }

// Compare all models at once
const { results } = await estimateAll({
  text: 'Your prompt here',
  outputTokens: 500,
});
// results sorted cheapest-first
```

Auto-Update
Prices are automatically fetched from OpenRouter's public API on every run and cached for 1 hour. If you're offline, built-in fallback prices are used.
No API key required. No account needed.
CLI Reference
| Option | Description |
|--------|-------------|
| (no args) | Interactive mode |
| "text" | Quick calc for top models |
| -m, --model <name> | Calculate for a specific model (partial name OK) |
| -o, --output <n> | Expected output tokens (default: auto) |
| -c, --compare | Compare ALL models |
| -l, --list [provider] | List models with prices |
| -h, --help | Show help |
| -v, --version | Show version |
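For intuition about the dollar figures in these outputs: a cost estimate boils down to per-million-token arithmetic. The helper and field names below are assumptions for illustration, not llm-cost's API:

```javascript
// Illustrative only: prices here are quoted in dollars per million tokens,
// a common convention on provider pricing pages. Field names are invented.
function sketchEstimateCost({ inputTokens, outputTokens, promptPricePerMTok, completionPricePerMTok }) {
  const inputCost = (inputTokens / 1e6) * promptPricePerMTok;
  const outputCost = (outputTokens / 1e6) * completionPricePerMTok;
  return { inputCost, outputCost, totalCost: inputCost + outputCost };
}

// e.g. 1,000 input + 500 output tokens at $3 / $15 per million tokens:
const r = sketchEstimateCost({
  inputTokens: 1000,
  outputTokens: 500,
  promptPricePerMTok: 3,
  completionPricePerMTok: 15,
});
console.log(r); // totalCost ≈ $0.0105
```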
Contributing
Contributions are welcome! Feel free to:
- Add new model providers
- Improve token counting accuracy
- Suggest UI improvements
```bash
git clone https://github.com/YOUR_USERNAME/llm-cost.git
cd llm-cost
npm test
```

License
MIT – use it however you want.
