@nxtedition/cache
A two-tier async cache with SQLite persistence, in-memory LRU, stale-while-revalidate, cross-process deduplication, and automatic request deduplication.
Features
- Two-tier storage — In-memory LRU cache backed by SQLite on disk
- Stale-while-revalidate — Serve stale data synchronously while refreshing in the background
- Request deduplication — Concurrent fetches for the same key share a single in-flight request
- Cross-process locking — SQLite-based distributed locks prevent redundant work across processes/threads
- Async value resolution — Transparently fetches missing values via a user-defined `valueSelector`
- Binary support — Store and retrieve `Buffer`/`Uint8Array` alongside JSON values
- Size-bounded storage — Configurable max database size with automatic eviction of oldest entries
- Custom serialization — Pluggable `serialize`/`deserialize` for non-JSON value types
Usage
```ts
import { Cache } from '@nxtedition/cache'

const cache = new Cache(
  './my-cache.db', // SQLite file path, or ':memory:'
  async (id: string) => {
    const res = await fetch(`https://api.example.com/items/${id}`)
    return res.json()
  },
  (id: string) => id, // keySelector: derive cache key from arguments
  {
    ttl: 60_000, // 60 s before value is considered stale
    stale: 30_000, // serve stale for 30 s while revalidating
  },
)

const result = cache.get('item-123')
if (result.async) {
  // Cache miss — value is being fetched
  const value = await result.value
} else {
  // Cache hit — value returned synchronously
  const value = result.value
}
```

API
new Cache(location, valueSelector?, keySelector?, opts?)
| Parameter | Type | Description |
| --------------- | ---------------------------------- | --------------------------------------------- |
| location | string | SQLite database path, or ':memory:' |
| valueSelector | (...args) => V \| PromiseLike<V> | Function to fetch a value on cache miss |
| keySelector | (...args) => string | Function to derive a cache key from arguments |
| opts | CacheOptions<V> | Optional configuration |
CacheOptions
| Option | Type | Default | Description |
| ------------ | ---------------------------------- | ------------------------------------- | ------------------------------------------------------------------------------ |
| ttl | number \| (value, key) => number | MAX_SAFE_INTEGER | Time-to-live in milliseconds. After this, the entry is stale. |
| stale | number \| (value, key) => number | MAX_SAFE_INTEGER | Stale-while-revalidate window in ms. After ttl + stale, the entry is purged. |
| memory | MemoryOptions \| false \| null | { maxSize: 16MB, maxCount: 16384 } | In-memory cache config, or false/null to disable. |
| database | DatabaseOptions \| false \| null | { timeout: 20, maxSize: 128MB } | SQLite config, or false/null to disable persistence. |
| lock | LockOptions \| false \| null | { minTimeout: 1, maxTimeout: 1000 } | Cross-process lock config, or false/null to disable. |
| serializer | Serializer<V> | JSON + ArrayBufferView passthrough | Custom { serialize, deserialize } for value encoding. |
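Both `ttl` and `stale` accept a function form, which lets freshness depend on the value or key. A hypothetical policy sketch (the size threshold and durations below are illustrative, not defaults):

```typescript
// Hypothetical policy: small payloads stay fresh for 5 minutes,
// larger ones for only 1 minute; the stale window is a flat 30 s.
const opts = {
  ttl: (value: unknown, _key: string) =>
    JSON.stringify(value).length < 1024 ? 300_000 : 60_000,
  stale: 30_000,
}
```

The function is evaluated per entry, so a single cache can hold values with different lifetimes.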
MemoryOptions
| Option | Type | Default | Description |
| ---------- | -------- | -------------------------- | ---------------------------------------------- |
| maxSize | number | 16 * 1024 * 1024 (16 MB) | Maximum total size in bytes of cached entries. |
| maxCount | number | 16 * 1024 (16384) | Maximum number of entries in memory. |
DatabaseOptions
| Option | Type | Default | Description |
| --------- | -------- | ---------------------------- | ----------------------------------------------------------------- |
| timeout | number | 20 | SQLite busy timeout in milliseconds. |
| maxSize | number | 128 * 1024 * 1024 (128 MB) | Maximum database file size. Oldest entries are evicted when full. |
LockOptions
Cross-process locking prevents multiple processes from computing the same value simultaneously. The lock timeout is adaptive — it uses an exponential moving average (EMA) of valueSelector durations to estimate how long to wait before taking over a lock.
| Option | Type | Default | Description |
| ------------ | -------- | ------- | -------------------------------------------------------------------------- |
| minTimeout | number | 1 | Minimum lock timeout in ms. Also the starting timeout before EMA warms up. |
| maxTimeout | number | 1000 | Maximum lock timeout in ms. Caps the EMA-derived timeout. |
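The adaptive-timeout idea can be sketched in a few lines. This is an illustration of how an EMA clamped to `[minTimeout, maxTimeout]` behaves, not the library's actual implementation, and the smoothing factor `alpha` is an assumption:

```typescript
// Each observed valueSelector duration updates a running mean;
// the lock timeout is that mean clamped to [minTimeout, maxTimeout].
function makeAdaptiveTimeout(minTimeout = 1, maxTimeout = 1000, alpha = 0.2) {
  let mean = minTimeout // starting timeout before the EMA warms up
  return {
    observe(durationMs: number) {
      mean = alpha * durationMs + (1 - alpha) * mean
    },
    timeout() {
      return Math.min(maxTimeout, Math.max(minTimeout, Math.round(mean)))
    },
  }
}

const t = makeAdaptiveTimeout()
t.observe(50)
t.observe(50)
console.log(t.timeout()) // converges toward ~50 ms as observations accumulate
```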
Serializer<V>
| Method | Signature | Description |
| ------------- | ---------------------------------------------- | --------------------------- |
| serialize | (value: V) => Buffer \| Uint8Array \| string | Encode a value for storage. |
| deserialize | (data: Buffer \| string) => V | Decode a stored value. |
The default serializer passes ArrayBufferView values through as-is and uses JSON.stringify/JSON.parse for everything else.
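For values that JSON does not round-trip well, a custom serializer can be supplied via `opts.serializer`. A minimal sketch for `Date` values (the `Serializer` interface below is restated from the table above, not imported from the package):

```typescript
interface Serializer<V> {
  serialize(value: V): Buffer | Uint8Array | string
  deserialize(data: Buffer | string): V
}

// Store Dates as ISO strings so they survive the round trip intact.
const dateSerializer: Serializer<Date> = {
  serialize: (value) => value.toISOString(),
  deserialize: (data) => new Date(data.toString()),
}
```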
CacheResult<V>
Both get() and peek() return a CacheResult<V>, a discriminated union on the async property:
| async | value | Meaning |
| ------- | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| false | V \| undefined | Cache hit — the value is available synchronously. Also returned for stale entries (a background refresh is triggered). undefined when peek() has no cached entry. |
| true | Promise<V> | Cache miss — value is a Promise that resolves when the valueSelector completes. |
```ts
const result = cache.get('key')
if (result.async) {
  const value = await result.value // miss — await the fetch
} else {
  const value = result.value // hit (fresh or stale) — use directly
}
```

Methods
cache.get(...args): CacheResult<V>
Returns a cached value or triggers a fetch on cache miss. If the entry is stale and the valueSelector is async, returns the stale value synchronously while a background refresh runs.
cache.peek(...args): CacheResult<V>
Same as get() but does not trigger a refresh on cache miss or stale entry. Returns { value: undefined, async: false } for missing/expired entries, or the stale value if within the stale window.
cache.refresh(...args): CacheResult<V>
Forces a new fetch via valueSelector regardless of cache state. Unlike get(), concurrent refresh() calls for the same key do not deduplicate — each call invokes the valueSelector. However, get() calls during a pending refresh() will return the in-flight promise.
cache.delete(...args): void
Remove an entry from both memory and SQLite. Also cancels any in-flight deduplication for that key — a pending fetch will still resolve for its callers, but the result is not written to the cache.
cache.purgeStale(): void
Remove all expired entries (past ttl + stale) from both the in-memory cache and SQLite. Also cleans up stale lock rows older than 1 hour and runs PRAGMA wal_checkpoint(TRUNCATE) + PRAGMA optimize.
cache.close(): void
Close the SQLite database and release resources. Clears all in-flight deduplication. Operations after close() throw.
cache.stats
Returns runtime statistics:
```ts
{
  lock: { timeout, mean, stddev } | undefined,
  dedupe: { size },
  memory: { size, maxSize, count, maxCount } | undefined,
  database: { location, size } | undefined,
}
```

Deduplication
Concurrent calls to get() for the same key share a single in-flight Promise. The valueSelector is called only once:
```ts
// valueSelector is called once, both promises resolve to the same value
const [a, b] = await Promise.all([cache.get('key').value, cache.get('key').value])
```

If a fetch fails, the deduplication entry is cleaned up and subsequent calls retry.
Calling cache.delete(key) while a fetch is in-flight invalidates the deduplication entry. The pending promise still resolves for its callers, but the result is not written to the cache.
refresh() does not deduplicate with itself — each call starts a new fetch. However, get() calls see the most recent pending promise.
Stale-While-Revalidate
When an entry's TTL has expired but is still within the stale window, get() returns the stale value synchronously (async: false) and triggers a background refresh (when the valueSelector is async). If the refresh fails, the stale value is preserved.
Once the stale window expires, the entry is purged entirely and the next get() returns async: true.
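The lifecycle amounts to a three-state classification by entry age. A self-contained sketch of the rule (not the library's internals):

```typescript
// Classify an entry by its age relative to the ttl and stale windows.
type State = 'fresh' | 'stale' | 'expired'

function classify(ageMs: number, ttl: number, stale: number): State {
  if (ageMs < ttl) return 'fresh' // sync hit
  if (ageMs < ttl + stale) return 'stale' // sync hit + background refresh
  return 'expired' // purged; next get() is an async miss
}

console.log(classify(10_000, 60_000, 30_000)) // fresh
console.log(classify(70_000, 60_000, 30_000)) // stale
console.log(classify(100_000, 60_000, 30_000)) // expired
```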
```
|----- ttl -----|----- stale -----|
     fresh            stale           expired
       ↓                ↓                ↓
    sync hit         sync hit        async miss
                   + bg refresh
```

Cross-Process Locking
When multiple processes or threads share the same SQLite database, the lock mechanism prevents redundant valueSelector calls. Process A acquires a lock, computes the value, and writes it. Process B sees the lock, waits for the estimated completion time (EMA-based), then reads the value from SQLite.
If the lock holder crashes, the lock becomes stale after 3 × lockTimeout and another process steals it.
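The takeover rule can be sketched as a simple age check, using the 3× factor described above (the function and field names here are hypothetical, for illustration only):

```typescript
// A lock row older than 3 × the current adaptive timeout is treated
// as abandoned, and a waiting process steals it.
function isLockStale(acquiredAtMs: number, nowMs: number, lockTimeoutMs: number): boolean {
  return nowMs - acquiredAtMs > 3 * lockTimeoutMs
}

console.log(isLockStale(0, 500, 200)) // false (500 <= 600)
console.log(isLockStale(0, 700, 200)) // true  (700 > 600)
```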
Off-Peak Purge
All cache instances listen on the nxt:offPeak BroadcastChannel. When a message is received, purgeStale() is called on every active instance, enabling coordinated cleanup during low-traffic periods.
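Triggering the purge is a matter of posting a message on that channel. A minimal sketch (`BroadcastChannel` is a global in Node.js 18+; the payload shown is arbitrary, since the docs suggest any message triggers the purge):

```typescript
// Any message on the nxt:offPeak channel causes every active cache
// instance in the process tree to run purgeStale().
const channel = new BroadcastChannel('nxt:offPeak')
channel.postMessage('off-peak')
channel.close()
```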
Scripts
```sh
yarn test           # run tests
yarn test:coverage  # run tests with coverage report (90%+ enforced)
yarn typecheck      # type-check without emitting
yarn build          # build for publishing
```