@0xdoublesharp/lru-cache-clustered
v2.0.0
LRU cache that is safe for clusters, based on lru-cache. Save memory by caching items only on the primary process, accessed through a promisified interface.
Node's cluster module gives every worker its own heap, so an in-process cache duplicates across workers and every worker cold-starts alone. An 8-worker service with a 200 MB cache pays 1.6 GB to hold the same data eight times.
This package keeps a single lru-cache in the primary and lets every worker read and write it over cluster IPC. One copy of the data, shared warmth across workers, and atomic counters and single-flight fetches that stay correct cluster-wide. No Redis tier, no sidecar.
Highlights
| Capability | What it gives you |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------- |
| One cache, N workers | The primary owns the data. Memory cost stays flat as you scale workers, instead of multiplying. |
| No per-worker cold start | The first worker to load a value warms it for every other worker. |
| Atomic counters | incr / decr execute on the primary, so they stay race-safe under any worker count. |
| Cluster-wide single-flight | Concurrent misses for the same key collapse to one fetch via fetch() / memoize(). |
| Atomic claims | setIfAbsent() lets exactly one worker win a key — perfect for idempotent intake or once-only init. |
| Pluggable codecs | wrap() layers gzip, MessagePack, or any symmetric encoder over a cache without changing call sites. |
| Per-namespace stats | Hits, misses, sets, deletes, evictions, size — ready to scrape, no extra wiring. |
| Rate-limiter-friendly TTLs | incr keeps the original window ticking instead of resetting it on every bump. |
| Structured IPC errors | Worker-side rejections preserve name, code, cause, and stack from the primary. |
Install
lru-cache is a peer dependency — install it alongside this package so you control the version.
```sh
npm install @0xdoublesharp/lru-cache-clustered lru-cache
pnpm add @0xdoublesharp/lru-cache-clustered lru-cache
yarn add @0xdoublesharp/lru-cache-clustered lru-cache
```

TypeScript first. Dual ESM + CJS. Requires Node ≥ 22.
The legacy package name `lru-cache-for-clusters-as-promised` is published from the same build at the same version, so existing imports keep working during a phased migration.
Quick start
```ts
import cluster from 'node:cluster';
import { availableParallelism } from 'node:os';
import { LRUCacheClustered } from '@0xdoublesharp/lru-cache-clustered';

LRUCacheClustered.bootstrap();

const cache = new LRUCacheClustered<string, string>({
  namespace: 'sessions',
  max: 1000,
  ttl: 60_000,
});

if (cluster.isPrimary) {
  for (let i = 0; i < availableParallelism(); i++) cluster.fork();
} else {
  await cache.set('user:42', JSON.stringify({ name: 'ada' }));
  console.log(await cache.get('user:42'));
  // {"name":"ada"} - every worker sees the same value
}
```

A few things worth knowing up front:

- `LRUCacheClustered` is the short alias for `LRUCacheForClustersAsPromised`. The long name is still exported.
- Import in the primary before `cluster.fork()`. The primary-side IPC listener is installed at module import. Call `LRUCacheClustered.bootstrap()` if you want that setup to be explicit.
- This is a coordination layer, not a security boundary. Any code in any worker can use any namespace it knows; do not expose namespaces to untrusted callers.
When to use it
Reach for this package when you have a multi-worker Node service and want shared in-process caching without standing up a separate caching tier:
- Session and profile caches
- Rate limiters and quota counters
- Feature flag snapshots
- Deduplicating expensive API or database calls
- Any cache-aside pattern across workers
It is also a strong fit as the L1 in a multi-layer cache in front of Redis or Memcached. Hot keys are served in-process, the long tail falls through to the shared remote cache, and the origin only sees true cold misses.
Reach for something else when you need sharing across multiple machines (use Redis or Memcached, or layer this in front of one), or when your hottest path cannot tolerate an IPC hop on a miss. See Performance profile.
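The read path of such a layered setup can be sketched as follows. This is a minimal sketch, not this package's API: a plain `Map` stands in for the clustered L1, and `l2Get` / `l2Set` / `loadFromOrigin` are hypothetical stand-ins for a Redis client and the origin.

```typescript
type AsyncGet = (key: string) => Promise<string | undefined>;
type AsyncSet = (key: string, value: string) => Promise<void>;

// Cache-aside across two layers: in-process L1, shared remote L2, then origin.
async function layeredGet(
  key: string,
  l1: Map<string, string>,
  l2Get: AsyncGet,
  l2Set: AsyncSet,
  loadFromOrigin: (key: string) => Promise<string>,
): Promise<string> {
  const hot = l1.get(key);
  if (hot !== undefined) return hot; // L1 hit: served in-process

  const warm = await l2Get(key);
  if (warm !== undefined) {
    l1.set(key, warm); // promote to L1 so the next read is local
    return warm;
  }

  const cold = await loadFromOrigin(key); // true cold miss reaches the origin
  await l2Set(key, cold); // warm the shared remote tier
  l1.set(key, cold);
  return cold;
}
```

With the clustered cache as L1, the promotion step also warms every other worker, since they all read the same primary-side store.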
Examples
Runnable clustered server examples — see examples/README.md for run instructions and curl recipes.
- `clustered-users-server.ts` — shared read-through user cache via `memoize()` / `fetch()`
- `clustered-rate-limit-server.ts` — fixed-window rate limiting via atomic `incr()`
- `clustered-session-server.ts` — shared session storage via `set()` / `get()` / `delete()`
- `clustered-idempotency-server.ts` — idempotent job intake via `setIfAbsent()`
- `clustered-compressed-documents-server.ts` — compressed document caching via `wrap()`
- `clustered-multilayer-redis-server.ts` — clustered LRU as L1 in front of Redis as L2, with cluster-wide single-flight on cold keys
How it works
new LRUCacheClustered(...) branches at construction:
- In the primary (`cluster.isPrimary === true`), the instance owns and operates on the in-process `LRUCache` for its namespace directly — no IPC, no allocation per call.
- In a worker, every operation becomes a typed IPC request to the primary; the returned Promise resolves with the response.
Instances in different workers that share a namespace operate on the same primary-side cache. Those instances should agree on cache options (max, ttl, allowStale, ...): reusing a namespace with conflicting options throws rather than silently keeping whichever process initialized it first.
**Initialization semantics.** In a worker, `new LRUCacheClustered(...)` eagerly sends the `init` message, but `cache.ready` is ordering-only and intentionally swallows init failure. Use `await cache.healthCheck()` or `await LRUCacheClustered.getInstance(...)` when startup should fail fast if the primary cannot register the namespace.
Performance profile
- Primary mode — operations dispatch directly to the local `lru-cache` instance, bypassing the IPC machinery entirely (no message build, no request-ID allocation, no pending-response bookkeeping).
- Worker mode — every cache operation is an IPC round trip through the primary.
- Hot misses — `fetch()` and `memoize()` collapse concurrent misses for the same key across workers, so origin work scales with unique keys, not concurrent callers.
- Design tradeoff — pick this package when cross-worker sharing and single-copy memory matter more than per-call latency; pick plain per-process `lru-cache` when your hottest path cannot afford the IPC hop.
Options
The serializable subset of lru-cache constructor options passes through (max, maxSize, maxEntrySize, ttl, allowStale, updateAgeOnGet, updateAgeOnHas, noDeleteOnStaleGet, ttlAutopurge). Plus:
| Option | Type | Default | Description |
| ----------- | ----------------------- | ----------- | ------------------------------------------------------------------------------------------------------------- |
| namespace | string | 'default' | Logical name. Instances sharing a namespace share state on the primary. |
| timeout | number | 100 | Worker IPC timeout in ms. |
| failsafe | 'resolve' \| 'reject' | 'resolve' | On worker IPC timeout: 'resolve' resolves with undefined; 'reject' rejects with Error('IPC timeout'). |
Function-valued lru-cache options such as dispose, disposeAfter, sizeCalculation, or fetchMethod do not cross IPC and are not supported by this wrapper.
**`failsafe: 'resolve'` caveat.** On timeout, `'resolve'` returns `undefined` for every op, regardless of declared return type. For `get` / `peek` that is natural; for `has` / `set` / `delete` / `incr` / `decr` / `size` it can surprise callers (`undefined + 1 === NaN`). Use `'reject'` if typed-shape correctness on timeout matters.
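The hazard is plain-JavaScript arithmetic, so it can be shown without a cluster. This is a minimal illustration, not package code: `timedOut` models what a timed-out `incr()` resolves to under `failsafe: 'resolve'`.

```typescript
// What a timed-out call yields under failsafe: 'resolve'.
const timedOut: number | undefined = undefined;

// Naive arithmetic on it silently produces NaN at runtime.
const naive = (timedOut as unknown as number) + 1;

// Guard with an explicit fallback before using counter results.
const guarded = (timedOut ?? 0) + 1;

console.log(Number.isNaN(naive)); // true
console.log(guarded);             // 1
```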
**Size-bounded caches.** When you use `maxSize` or `maxEntrySize`, provide `size` on every write path (`set`, `setIfAbsent`, `mSet`, `fetch`, `memoize`, and the first `incr` / `decr` for a counter key). `sizeCalculation` does not cross IPC, so the primary cannot infer it for you.
**Fail-fast startup.** `LRUCacheClustered.getInstance()` and `cache.healthCheck()` always reject if the primary cannot answer, regardless of `failsafe`, so you can use them as hard startup checks.
**Key/value contract.** Like `lru-cache`, keys and values must be non-nullish. Passing `null` or `undefined` rejects instead of relying on ambiguous cache semantics.
API
Static
| Method | Description |
| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| LRUCacheClustered.bootstrap() | Installs the primary-side cluster listener immediately. Useful when you want an explicit bootstrap call instead of relying on module import side effects. |
| LRUCacheClustered.getInstance(options) | Async factory. In a worker, awaits the init message so the primary has registered the namespace before returning. Preferred when worker startup should fail fast on init errors. |
| LRUCacheClustered.getAllCaches() | Returns the Map<namespace, LRUCache> registry. Primary only — throws in workers. |
Core
| Method | Returns | Notes |
| ------------------------------------------ | ------------------------- | --------------------------------------------------------- |
| get(key) | Promise<V \| undefined> | |
| set(key, value, { ttl?, size? }) | Promise<boolean> | |
| setIfAbsent(key, value, { ttl?, size? }) | Promise<boolean> | Atomic on the primary. false if the key already exists. |
| delete(key) | Promise<boolean> | |
| has(key) | Promise<boolean> | |
| peek(key) | Promise<V \| undefined> | Does not update LRU position. |
| clear() | Promise<void> | |
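The claim pattern behind `setIfAbsent()` can be sketched with a plain `Map` standing in for the primary-side cache. In the real package the check-and-set runs atomically on the primary across all workers; `intakeJob` is a hypothetical caller:

```typescript
// A Map stands in for the primary-side cache in this sketch.
function setIfAbsent(store: Map<string, string>, key: string, value: string): boolean {
  if (store.has(key)) return false; // another caller already claimed the key
  store.set(key, value);
  return true; // this caller won the claim
}

const claims = new Map<string, string>();

// Idempotent intake: exactly one caller wins per jobId; replays are duplicates.
function intakeJob(jobId: string): 'accepted' | 'duplicate' {
  return setIfAbsent(claims, `job:${jobId}`, 'claimed') ? 'accepted' : 'duplicate';
}
```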
Multi
| Method | Returns | Notes |
| -------------------------------- | --------------------------------- | ------------------------------------------------------------------------------------- |
| mGet(keys) | Promise<Map<K, V \| undefined>> | |
| mSet(entries, { ttl?, size? }) | Promise<void> | entries: Iterable<[K, V] \| [K, V, { ttl?, size? }]>; outer opts apply as defaults. |
| mDelete(keys) | Promise<void> | |
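The `entries` shape lets plain pairs and per-entry option triples mix, with the outer options filling in as defaults. A sketch of the documented shape; the `cache.mSet` call is commented out because it needs a running cluster:

```typescript
type SetOpts = { ttl?: number; size?: number };

const entries: Array<[string, string] | [string, string, SetOpts]> = [
  ['a', '1'],                 // inherits the outer ttl passed to mSet
  ['b', '2', { ttl: 5_000 }], // per-entry ttl overrides the outer default
];

// await cache.mSet(entries, { ttl: 60_000 });
console.log(entries.length);
```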
Enumeration
| Method | Returns | Notes |
| -------------------------- | ------------------------------- | --------------------------------------------------------------- |
| keys() | Promise<K[]> | MRU first. |
| values() | Promise<V[]> | MRU first. |
| entries() | Promise<[K, V][]> | MRU first. |
| [Symbol.asyncIterator]() | AsyncIterableIterator<[K, V]> | for await (const [k, v] of cache). Materializes the full set. |
| dump() | Promise<[K, Entry][]> | Serializable snapshot. |
| load(entries) | Promise<void> | Restores from a dump(), preserving per-entry TTL metadata. |
| size() | Promise<number> | |
Counters and cache-aside
| Method | Returns | Notes |
| ---------------------------------------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------ |
| incr(key, amount?, { ttl?, size? }) | Promise<number> | Atomic on the primary. ttl is set on the first write only; later increments do not reset it (rate limiters). |
| decr(key, amount?, { ttl?, size? }) | Promise<number> | Same. |
| fetch(key, fetcher, { ttl?, size?, forceRefresh }) | Promise<V> | Cache-aside with cluster-wide single-flight semantics. See Single-flight semantics. |
| memoize(cache, fn, keyFn, opts?) | (args) => Promise<V> | Top-level helper. Single-flight via cache.fetch(). See memoize helper. |
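The rate-limiter note on `incr` is easiest to see in a sketch. Here a plain `Map` models the primary-side counter, including the documented fixed-window TTL behavior: expiry is set on the first write and later increments do not push it forward. `allowRequest` is a hypothetical caller:

```typescript
type Counter = { count: number; expiresAt: number };
const counters = new Map<string, Counter>();

// Models incr(): ttl is applied on the first write for a key only.
function incr(key: string, ttlMs: number, now = Date.now()): number {
  const existing = counters.get(key);
  if (existing && existing.expiresAt > now) {
    existing.count += 1; // note: expiresAt is deliberately untouched
    return existing.count;
  }
  counters.set(key, { count: 1, expiresAt: now + ttlMs }); // new window
  return 1;
}

// Fixed-window limiter: at most `limit` requests per windowMs per client.
function allowRequest(clientId: string, limit: number, windowMs: number): boolean {
  return incr(`rate:${clientId}`, windowMs) <= limit;
}
```

If the TTL reset on every bump, a steady trickle of requests would keep the window alive forever and the limit would never recover, which is why `incr` keeps the original window ticking.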
Lifecycle, metrics, tunables
| Method | Returns | Notes |
| ---------------------- | ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| getRemainingTTL(key) | Promise<number> | ms until expiry. Infinity for keys with no TTL; 0 for missing keys. |
| purgeStale() | Promise<boolean> | Removes expired entries. |
| healthCheck() | Promise<void> | Verifies that the primary can resolve the namespace and answer requests. |
| stats() | Promise<Stats> | { hits, misses, sets, deletes, evictions, size, namespace }. |
| destroy() | Promise<boolean> | Removes the namespace cache, stats, and primary-side coordination state. Later use of the same instance recreates it with the original options. |
| getCache() | LRUCache \| undefined | Underlying lru-cache for this namespace. Primary only. |
| ready | Promise<void> | Resolves once worker init has been dispatched. Useful for ordering only; use getInstance() if init failures should reject. |
| max(value?) | Promise<number> | Getter and setter. Setter preserves entries and remaining TTL metadata. |
| ttl(value?) | Promise<number> | Getter and setter. |
| allowStale(value?) | Promise<boolean> | Getter and setter. |
wrap — codec / compression
wrap(cache, codec) returns a typed view where values pass through an encode / decode pair on the way in and out. Use it for compression (gzip, brotli), serialization (MessagePack), or any custom symmetric transform. The library stays codec-agnostic — bring your own.
```ts
import { gzipSync, gunzipSync } from 'node:zlib';
import { LRUCacheClustered, wrap } from '@0xdoublesharp/lru-cache-clustered';

// Encode to a string (base64 here) so the wire format is Buffer-safe in workers.
// See the Buffer caveat below.
const inner = new LRUCacheClustered<string, string>({ namespace: 'big-blobs', max: 1000 });
const cache = wrap(inner, {
  encode: (v: unknown) => gzipSync(Buffer.from(JSON.stringify(v), 'utf8')).toString('base64'),
  decode: (raw: string) => JSON.parse(gunzipSync(Buffer.from(raw, 'base64')).toString('utf8')),
});

await cache.set('user:42', { id: 42, name: 'ada' });
await cache.get('user:42'); // decoded back to { id: 42, name: 'ada' }
```

`encode` and `decode` may be sync or async. The wrapped surface covers value-touching ops (`get`, `set`, `setIfAbsent`, `peek`, `mGet`, `mSet`, `values`, `entries`, async iteration, `fetch`) plus the lifecycle and metric pass-throughs (`has`, `delete`, `keys`, `size`, `clear`, `destroy`, `healthCheck`, `purgeStale`, `getRemainingTTL`, `stats`).
incr / decr and dump / load are not wrapped — they speak in numbers or the raw stored form. Reach them via wrapped.cache if you need them.
**Buffer-typed values.** Cluster IPC serializes through JSON, which does not preserve `Buffer`. If a codec stores `Buffer` directly, in worker mode the decoded side will receive `{ type: 'Buffer', data: number[] }` and most binary APIs will reject it. Encode to a string (base64, hex) — or rehydrate inside `decode` — when the wrapped cache is read from workers. Primary-only use is unaffected.
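The caveat is easy to reproduce with a plain JSON round trip, which is effectively what worker-mode IPC does to the value:

```typescript
// JSON round trip: what a Buffer value looks like after crossing IPC.
const original = Buffer.from('hello', 'utf8');
const overTheWire = JSON.parse(JSON.stringify(original));

console.log(overTheWire.type);                // 'Buffer'
console.log(Array.isArray(overTheWire.data)); // true (a plain number[])

// Rehydrate on the way out, e.g. inside a codec's decode():
const rehydrated = Buffer.from(overTheWire.data);
console.log(rehydrated.toString('utf8'));     // 'hello'
```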
memoize helper
Cache-aside in one line. Concurrent calls for the same key coordinate through cache.fetch() so only one caller does the underlying work at a time.
```ts
import { LRUCacheClustered, memoize } from '@0xdoublesharp/lru-cache-clustered';

const cache = new LRUCacheClustered<string, User>({ namespace: 'users', ttl: 60_000 });

const getUser = memoize(
  cache,
  (id: string) => fetchUserFromDB(id),
  (id) => `user:${id}`,
  { ttl: 60_000 },
);

await getUser('42'); // first call: hits DB
await getUser('42'); // second call: cached
```

Single-flight semantics
Both memoize() and cache.fetch() coordinate through the primary so concurrent misses for the same key collapse to one in-flight fetch across instances and workers.
Passing forceRefresh: true skips both the cache lookup and any in-flight claim and starts a fresh leader fetch. Concurrent callers without forceRefresh still wait on whichever fetch is in flight and reuse its result.
The cache timeout option only bounds each worker IPC request. It does not cancel user fetcher work after a worker owns the primary-side single-flight lock, so production fetchers should enforce their own upstream timeout or abort policy.
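One way a fetcher can bound its own work is `AbortSignal.timeout()`. A sketch under stated assumptions: `slowLookup` is a hypothetical upstream call that honors the signal, standing in for a real database or HTTP request.

```typescript
// Hypothetical upstream call that respects cancellation.
function slowLookup(key: string, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(`value:${key}`), 5_000); // pretend-slow upstream
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(signal.reason); // a DOMException named 'TimeoutError'
    });
  });
}

// The cache's `timeout` option only bounds the IPC request, so the
// fetcher enforces its own upstream deadline.
async function boundedFetcher(key: string): Promise<string> {
  const signal = AbortSignal.timeout(100); // hard 100 ms upstream deadline
  return slowLookup(key, signal);
}

// Usage with the cache would be: await cache.fetch('k', boundedFetcher);
```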
Errors
Worker mode. When a primary-side handler throws, the worker's promise rejects with a reconstructed Error carrying the original name, message, code, stack, and cause chain. The rejected value is always a plain Error (subclass identity is not crossed over IPC), but .name, .code, and .cause are intact, so logging and cause-chain walking work. Errors travel as { name, message, code?, stack?, cause? } on the wire.
Primary mode. No IPC: a thrown Error rejects as-is (subclass identity preserved); a thrown non-Error value is wrapped in new Error(String(value)). For Error throws the two modes are observably equivalent.
Debugging
```sh
DEBUG=lru-cache-clustered-* node app.js
```

Available namespaces:

- `lru-cache-clustered-primary` — cache creation, registry events
- `lru-cache-clustered-messages` — every request/response over IPC
Upgrading from 1.x
The 2.x line is a TypeScript rewrite on top of lru-cache@11 with renamed methods and options. See docs/migration.md for the full method, option, and package mapping, and CHANGELOG.md for the complete 2.0 release notes.
License
MIT — see LICENSE.
