# llmcache

pnkd-llmcache v2.0.1

Cache LLM responses. Save tokens, save money. PRO features: cost tracking, semantic search, compression.
## Installation

```bash
npm install -g pnkd-llmcache
```

## Quick Start

```bash
# Initialize a cache
llmcache init

# Cache a response
llmcache set "What is AI?" "AI is artificial intelligence..."

# Get the cached response
llmcache get "What is AI?"

# View statistics
llmcache stats
```

## Commands
### FREE Commands

| Command | Description |
|---------|-------------|
| `init` | Initialize a new cache |
| `set <prompt> <response>` | Cache a prompt/response pair |
| `get <prompt>` | Get a cached response |
| `list` | List cached entries |
| `stats` | Show cache statistics |
| `clear` | Clear cache entries |
| `search <query>` | Search cached prompts |
| `export [file]` | Export the cache to JSON |
| `import <file>` | Import a cache from JSON |
### PRO Commands

| Command | Description |
|---------|-------------|
| `cost` | Show a cost-savings report |
| `similar <prompt>` | Find similar cached prompts |
| `sync <action>` | Sync the cache between machines |
| `serve` | Start the HTTP server |
## Options

### Global Options

- `-g, --global` - Use the global cache (`~/.llmcache/cache`)
- `-p, --path <path>` - Custom cache path

### Set Options

- `-m, --model <name>` - Model name (default: `"default"`)
- `-t, --ttl <duration>` - Time to live (PRO): `7d`, `24h`, `30m`
- `--tags <tags>` - Comma-separated tags (PRO)

### Get Options

- `-m, --model <name>` - Model name (default: `"default"`)
- `-r, --raw` - Output only the response
- `-o, --output <file>` - Write the response to a file
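The `--ttl` durations follow a common number-plus-unit shorthand (`7d`, `24h`, `30m`). A minimal sketch of how such strings can be converted to milliseconds; the helper name and unit table are assumptions for illustration, not llmcache internals:

```javascript
// Hypothetical parser for --ttl style durations (e.g. 7d, 24h, 30m).
// The supported units are an assumption; llmcache may accept others.
const TTL_UNITS = { s: 1000, m: 60_000, h: 3_600_000, d: 86_400_000 };

function parseTtl(str) {
  const match = /^(\d+)([smhd])$/.exec(str);
  if (!match) throw new Error(`Invalid TTL: ${str}`);
  return Number(match[1]) * TTL_UNITS[match[2]];
}

console.log(parseTtl('30m')); // 1800000
```

An entry cached with a TTL would then be treated as expired once `Date.now() - cachedAt` exceeds the parsed value.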
## Using Files

Use the `@` prefix to read arguments from files:

```bash
llmcache set @prompt.txt @response.txt
llmcache get @prompt.txt
```

## FREE vs PRO

| Feature | FREE | PRO |
|---------|------|-----|
| Cache entries | 50 max | Unlimited |
| Response size | 10KB | 10MB |
| Storage backend | JSON | JSON, SQLite, Redis |
| Cost tracking | - | Yes |
| Semantic search | - | Yes |
| Compression | - | Yes |
| HTTP server | - | Yes |
| Team sync | - | Yes |
| TTL expiration | - | Yes |
## PRO License

$18.99 one-time payment. Purchase at: https://pnkd.dev/llmcache

```bash
# Activate your license
llmcache license activate LMC-XXXX-XXXX-XXXX-XXXX

# Check license status
llmcache license status
```

## Programmatic API
```js
const llmcache = require('pnkd-llmcache');

// Initialize the cache
llmcache.init();

// Cache a response for a given model
llmcache.set('What is AI?', 'AI is...', 'gpt-5-turbo');

// Get the cached response
const result = llmcache.get('What is AI?', 'gpt-5-turbo');
if (result) {
  console.log(result.response);
}

// Read cache statistics
const stats = llmcache.stats();
console.log(`Entries: ${stats.entries}`);
```

## HTTP Server (PRO)
```bash
llmcache serve --port 3377
```

### Endpoints

- `GET /health` - Health check
- `GET /cache?prompt=...&model=...` - Get a cached response
- `POST /cache` - Set a cache entry (JSON body)
- `GET /cache/list` - List entries
- `GET /cache/search?q=...` - Search entries
- `GET /stats` - Get statistics
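A client can hit these endpoints with any HTTP library. The sketch below builds the `GET /cache` query URL and fetches it; the base URL matches the `--port 3377` example above, while the helper names and the assumption that a miss returns a non-2xx status are illustrative, not documented behavior:

```javascript
// Hypothetical client sketch for the llmcache HTTP server (PRO).
// Endpoint paths are from the list above; error handling and the
// response shape are assumptions.
const BASE = 'http://localhost:3377';

function cacheUrl(prompt, model = 'default') {
  // URLSearchParams handles encoding of spaces, '?', etc.
  const query = new URLSearchParams({ prompt, model });
  return `${BASE}/cache?${query}`;
}

async function getCached(prompt, model = 'default') {
  const res = await fetch(cacheUrl(prompt, model));
  if (!res.ok) return null; // assume a miss is a non-2xx status
  return res.json();
}

console.log(cacheUrl('What is AI?', 'gpt-4'));
```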
## More PRO Tools

- ctxstuff PRO - Pack code for LLMs ($14.99)
- aiproxy PRO - One API for all LLMs ($18.99)

## Support

- Issues: https://github.com/pnkd-dev/llmcache/issues
- Website: https://pnkd.dev/llmcache

## License

MIT - pnkd.dev
