cluster-shared-cache
v1.0.4
A shared in-memory cache for Node.js cluster applications
API Reference
Constructor
const cache = new ClusterSharedCache(options);
Options:
- maxSize (number): Maximum number of items in the cache (default: 1000)
- ttl (number): Default TTL in milliseconds (default: 300000 = 5 minutes)
- checkPeriod (number): Cleanup interval in milliseconds (default: 60000 = 1 minute)
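As a sketch, the defaults above can be spelled out explicitly (the require line is commented out since it assumes the package is installed):

```javascript
// Hedged sketch: an options object restating the documented defaults.
const options = {
  maxSize: 1000,          // evict (LRU-style) once the cache exceeds 1000 items
  ttl: 5 * 60 * 1000,     // default TTL: 300000 ms (5 minutes)
  checkPeriod: 60 * 1000, // sweep expired entries every 60000 ms (1 minute)
};

// const { ClusterSharedCache } = require('cluster-shared-cache');
// const cache = new ClusterSharedCache(options);
```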
Methods
set(key, value, ttl?)
Store a value in the cache.
await cache.set('user:123', userData, 300000); // 5 minutes TTL
await cache.set('config', configData); // Uses default TTL
get(key)
Retrieve a value from the cache.
const user = await cache.get('user:123');
if (user) {
console.log('Cache hit!', user);
} else {
console.log('Cache miss');
}
delete(key)
Remove a specific key from the cache.
await cache.delete('user:123');
clear()
Clear all items from the cache.
await cache.clear();
has(key)
Check if a key exists in the local cache.
if (cache.has('user:123')) {
console.log('Key exists locally');
}
size()
Get the number of items in the local cache.
console.log('Cache size:', cache.size());
keys(), values(), entries()
Get arrays of keys, values, or entries from the local cache.
console.log('All keys:', cache.keys());
console.log('All values:', cache.values());
console.log('All entries:', cache.entries());
How It Works
- Master Process: Maintains the authoritative cache state
- Worker Processes: Keep local copies and sync with master via IPC
- Cache Operations:
  - SET: Updates the master cache and broadcasts the change to all workers
  - GET: Checks the local cache first, then queries the master if needed
  - DELETE: Removes the key from the master and all workers
- TTL Management: Both master and workers handle expiration independently
- Memory Management: LRU-style eviction when maxSize is reached
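To illustrate what "LRU-style eviction" means here (this is a standalone sketch of the technique, not the package's actual internals), a JavaScript Map can serve as an LRU store because it preserves insertion order:

```javascript
// Illustrative sketch of LRU-style eviction on maxSize, using a Map's
// insertion order. Not the library's internals; a minimal stand-in.
class LruSketch {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key); // refresh position if the key already exists
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      const oldest = this.map.keys().next().value; // least recently used
      this.map.delete(oldest);
    }
  }
}
```

With maxSize 2, setting a third key evicts whichever of the first two was least recently touched.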
Performance Considerations
- Local Cache First: GET operations check the local worker cache first for maximum speed
- Async Operations: All cache operations are asynchronous to prevent blocking
- Efficient IPC: Minimal message passing between processes
- Memory Limits: The configurable maxSize caps the number of entries, preventing unbounded memory growth
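The "local cache first" path can be simulated in a single process with two Maps (in the real library the second lookup is an IPC round-trip to the master; the 'config' entry here is invented for the demo):

```javascript
// Simulation of the local-first GET path. masterCache stands in for the
// master process; in reality the fallback lookup is an IPC message.
const masterCache = new Map([['config', { theme: 'dark' }]]);
const localCache = new Map();

async function localFirstGet(key) {
  if (localCache.has(key)) return localCache.get(key); // fast path: no IPC
  const value = masterCache.get(key);                  // stands in for IPC
  if (value !== undefined) localCache.set(key, value); // keep a local copy
  return value;
}
```

After the first lookup of a key, subsequent reads in that worker never leave the process.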
Best Practices
- Initialize Early: Set up the cache before forking workers
- Reasonable TTL: Don't set TTL too low to avoid constant cache misses
- Monitor Memory: Keep track of cache size in production
- Error Handling: Always handle cache misses gracefully
- Cleanup: Call cache.destroy() when shutting down
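Handling misses gracefully usually means a cache-aside read: try the cache, fall back to the data source, then repopulate. A minimal sketch (the loader function and its data source are hypothetical; any object with the documented async get/set works as the cache argument):

```javascript
// Hedged sketch of a cache-aside read that treats misses as routine.
async function getOrLoad(cache, key, loader, ttl) {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached; // cache hit
  const value = await loader(key);         // cache miss: hit the source
  await cache.set(key, value, ttl);        // repopulate for later reads
  return value;
}
```

Wrapping reads this way keeps miss handling in one place instead of scattered if/else checks at every call site.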
Troubleshooting
Cache Not Syncing
Ensure the cache is initialized in the master process before forking workers.
High Memory Usage
Reduce maxSize or lower TTL values to limit memory consumption.
Slow Performance
Check whether the TTL is too short (causing constant misses and reloads) or whether too many operations are falling through to IPC round-trips with the master.
