intervache

v0.1.1

Fragment-based interval cache for time series and range data.

Intervache

Fragment cache for time series data.

Intervache — a fragment cache for time-series data that actually works. I built this because every product with a chart, timeline, or metric view eventually needs the same thing: a cache that understands intervals. If a user pans 3 months left → 2 weeks right → zooms into a 10-day window, you end up juggling dozens of overlapping fetches. Traditional caches can’t reason about partial coverage, invalidation, TTL, or merging of time-windowed data, so teams reinvent half-broken interval trees, bespoke LRU hacks, or “just refetch it” heuristics.

Intervache makes the half-open interval the primitive. It keeps fragments non-overlapping, tells you exactly what’s missing, merges when safe, splits when needed, respects TTL, and performs LRU eviction after merges. It’s built to survive adversarial sequences (thrashing, fetch races, extreme timestamps) without collapsing into fragmentation or stale resurrection.

Notable properties:

  • Precise gap detection — fetch only what’s missing.
  • Zero-overlap invariant — all inserts/upserts keep the structure sane.
  • TTL monotonicity — expired fragments never reappear, even across serialization.
  • Post-merge LRU — eviction happens after merges, preserving semantic correctness.
  • Optional recorder for debugging fetch races, oscillation, and eviction behavior.

It’s deliberately small, predictable, and opinionated. If you’ve ever written a timeline, a chart viewer, a log explorer, or a metrics dashboard and felt the creeping horror of interval arithmetic leaking everywhere, this is the thing I wish I’d had.

Why this exists

Fetching time windows piecemeal is easy until users pan across already-fetched spans, at which point you quietly reinvent interval math. Over-fetching wastes bandwidth; under-fetching yields holes; tracking overlap by hand is error-prone. Intervache keeps fragments non-overlapping, tells you exactly what to fetch next, and enforces eviction/TTL rules so caches don’t grow feral.

Problem

You're building something to display or fetch historical time series data. Users pan and zoom across months of records. Fetching everything upfront is impractical. Fetching on-demand creates a coordination problem:

User views January    → fetch Jan 1-31
User views February   → fetch Feb 1-28
User views Jan 15 - Feb 15 → ???

Do you re-fetch? You already have it. Do you track what's cached? Now you're implementing interval arithmetic.

Intervache handles this. It tracks non-overlapping cached intervals, tells you what's missing, and manages eviction, expiry, and fragment coalescing.
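The interval bookkeeping the cache takes over can be pictured with a standalone sketch (plain TypeScript; `findGaps` and `Span` are hypothetical names, not Intervache's internals): given sorted, non-overlapping half-open fragments, compute what a requested window still needs.

```typescript
// Sketch only: gap detection against a sorted, non-overlapping fragment list.
type Span = { start: number; end: number };

function findGaps(fragments: Span[], start: number, end: number): Span[] {
  const gaps: Span[] = [];
  let cursor = start;
  for (const f of fragments) {
    if (f.end <= cursor) continue; // entirely before the uncovered cursor
    if (f.start >= end) break;     // past the requested window; done
    if (f.start > cursor) gaps.push({ start: cursor, end: f.start });
    cursor = Math.max(cursor, f.end);
  }
  if (cursor < end) gaps.push({ start: cursor, end });
  return gaps;
}

// January and February cached (days as units); ask for Jan 15 – Feb 15:
const cached = [{ start: 1, end: 32 }, { start: 32, end: 60 }];
console.log(findGaps(cached, 15, 46)); // → [] (fully covered)
console.log(findGaps(cached, 15, 70)); // → [{ start: 60, end: 70 }]
```

The real cache maintains the sorted fragment list for you and layers TTL, eviction, and data slicing on top of this gap computation.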

Install

bun add intervache

Why Intervache

  • Cache by intervals, not keys: half-open [start, end) fragments.
  • Precise gaps: get/peek return hits and misses so you fetch only what’s missing.
  • Eviction that respects reality: TTL + LRU with custom costFn and maxCost.
  • Zero-overlap invariant: inserts/merges keep fragments non-overlapping; auto-merge optional.
  • Built-in split/merge helpers: slicers and mergers for arrays and typed arrays.
  • Persistence: serialize/deserialize/toJSON with validation.
  • Hooks: observe hit/miss/evict/put/change without coupling core logic.
  • Tested hard: adversarial sequences for TTL monotonicity, eviction-after-merge, fetch races, and oscillation.

What’s novel here

  • TTL monotonicity: expired fragments never resurrect, even across deserialize.
  • Post-merge eviction correctness: cost is checked after merges, so eviction respects actual footprint.
  • LRU discipline on partial hits: access ordering accounts for partial window hits.
  • Flight recorder: optional ring buffer of recent events for debugging without noisy logging.
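The post-merge eviction idea can be sketched in isolation (assumed names and shapes, not the library's API): once merging settles, recompute total cost and drop least-recently-hit fragments until the budget holds.

```typescript
// Illustrative sketch of evict-after-merge: check cost once merges are done,
// then evict in least-recently-used order until total cost fits the budget.
type Entry = { id: string; cost: number; lastHit: number };

function evictToBudget(entries: Entry[], maxCost: number): Entry[] {
  let total = entries.reduce((sum, e) => sum + e.cost, 0);
  const byLru = [...entries].sort((a, b) => a.lastHit - b.lastHit); // oldest hit first
  const evicted = new Set<string>();
  for (const e of byLru) {
    if (total <= maxCost) break;
    evicted.add(e.id);
    total -= e.cost;
  }
  return entries.filter((e) => !evicted.has(e.id));
}

const survivors = evictToBudget(
  [
    { id: "a", cost: 4, lastHit: 1 },
    { id: "b", cost: 4, lastHit: 3 },
    { id: "c", cost: 4, lastHit: 2 },
  ],
  8,
);
console.log(survivors.map((e) => e.id)); // → ["b", "c"]
```

Checking cost after merges, not before, is what keeps eviction honest: a merge can change a fragment's footprint, so pre-merge accounting would over- or under-evict.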

Minimal example

import { Intervache } from "intervache";

const cache = new Intervache<number[]>();
const { hits, misses } = cache.get(0, 60);

for (const gap of misses) {
  const data = await fetchData(gap.start, gap.end); // your loader
  cache.put(gap.start, gap.end, data);
}

const { hits: filled } = cache.get(0, 60);
// filled now holds the assembled 0–60 range

Architectural Comparisons

Intervache is designed specifically for continuous interval data. Standard caching strategies often struggle with this because they optimize for different constraints.

vs. React Query, SWR, or RTK Query

These libraries are excellent Request Caches. They cache the result of an async function based on a serialization of its arguments (e.g., ["data", 0, 100]).

  • The limitation: They treat fetch(0, 100) and fetch(50, 150) as unrelated keys. They cannot detect that you already have 50% of the data needed for the second request.
  • The Intervache difference: Intervache is a Data Cache. It understands the geometry of the requests. It can answer the second request by returning the cached [50, 100) segment and telling you to only fetch [100, 150). You can use Intervache inside a React Query fetcher to minimize network bandwidth.

vs. Standard Key-Value Caches (Map / LRU)

Standard caches assume discrete keys (e.g., ID 123, ID 124).

  • The limitation: Discrete maps have no concept of "between." If you cache t:100 and t:101, the map does not know you possess the range [100, 102). They cannot handle partial hits, merging, or slicing.
  • The Intervache difference: Intervache treats keys as continuous ranges. It handles the complex logic of identifying gaps, merging adjacent fragments, and slicing data when invalidating sub-ranges.

vs. Interval Trees / Segment Trees

These are search algorithms, not caches.

  • The limitation: A raw interval tree can tell you what overlaps, but it doesn't manage the lifecycle of the data. It doesn't inherently handle LRU eviction, memory pressure, TTL expiration, or data coalescing.
  • The Intervache difference: Intervache uses interval logic internally but wraps it in a state machine that manages memory usage, expiration, and data mutation (splitting/merging) automatically.

vs. Ring Buffers

Ring buffers are efficient for sliding windows or append-only streams.

  • The limitation: They assume a fixed, contiguous window. They break down during "pan and zoom" interactions where users might jump backward in time, zoom out to view a year, or zoom in to view a minute.
  • The Intervache difference: Intervache supports sparse coverage. It allows you to hold "islands" of high-resolution data (e.g., last week + last year's dip) without filling the memory for the empty space in between.

Behavior intuition

  • Fragment counts stay bounded under adversarial inserts because overlaps are invalidated before insert and optional auto-merge coalesces adjacency.
  • Eviction cost stays stable because every merge is followed by a cost check, triggering LRU eviction in current hit order.
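The coalescing step behind that intuition can be sketched minimally (illustrative only; the real implementation also consults the configured merger and TTL before joining fragments):

```typescript
// Sketch: coalesce exactly-adjacent half-open fragments in a sorted list.
type Frag = { start: number; end: number; data: number[] };

function coalesce(frags: Frag[]): Frag[] {
  const out: Frag[] = [];
  for (const f of frags) {
    const last = out[out.length - 1];
    if (last && last.end === f.start) {
      // [a, b) followed by [b, c): extend in place, concatenate payloads
      last.end = f.end;
      last.data = last.data.concat(f.data);
    } else {
      out.push({ ...f });
    }
  }
  return out;
}

const merged = coalesce([
  { start: 0, end: 10, data: [1] },
  { start: 10, end: 20, data: [2] },
  { start: 30, end: 40, data: [3] }, // gap before this one: stays separate
]);
console.log(merged.length); // → 2
```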

Feature Map

| Category | APIs | Notes |
| ------------------ | ---------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
| Query | get, peek, at, has, gaps, coverage | Hits + misses, LRU only on get/at |
| Insert/Update | put, putFragment, putAll, upsert | Upsert = invalidate overlaps then put |
| Fetch | fetch, prefetch | Gaps fetched in parallel |
| Mutate | invalidate, invalidateIf, trimBefore/After, trimToWindow, trimToRecent, coalesce, clear | Splits with slicer where needed |
| Transform | mapInPlace | Recomputes cost and enforces maxCost via LRU |
| Iterate/Introspect | for...of, range, toArray, find/some/every/forEach, bounds, span, stats, clone, resetStats | clone is deep on entries |
| Hooks | on: { hit, miss, evict, put, change } | change batched when using batch |
| Persistence | serialize, deserialize, toJSON, fromJSON | Deserialize validates ordering, rejects overlap, enforces budget |
| Batching | batch(fn) | Suppresses change until fn completes |

Options Cheat Sheet

| Option | Purpose | Default |
| --------- | ----------------------------- | --------------- |
| maxCost | Cost ceiling for LRU eviction | Infinity |
| costFn | Estimate fragment cost | () => 1 |
| ttl | Default TTL (ms) | null (never) |
| slicer | Slice data on splits | null (no slice) |
| merger | Merge adjacent data | null (no merge) |
| autoMerge | Merge on insert | false |
| on | Hooks | {} |

Built-ins

| Helper | Use case |
| -------------------------- | -------------------------------------- |
| arraySlicer(interval) | Slice JS arrays by uniform interval |
| typedArraySlicer(interval) | Slice typed arrays by uniform interval |
| arrayMerger() | Concatenate JS arrays |
| float64ArrayMerger() | Concatenate Float64Array |
| binarySearchSlicer(fn) | Slice sorted but irregular series |

Guarantees

  • No overlaps: inserts/upserts/invalidate/trim keep fragments disjoint.
  • Auto-merge (when enabled) only merges adjacent fragments that can merge, keeping maximal adjacency.
  • TTL monotonic: expired fragments never resurrect, including deserialize.
  • Eviction after merge: cost checked post-merge; LRU eviction respects hit order (including partial hits).
  • Serialization idempotence: serialize(deserialize(serialize(x))) is stable.
  • Coverage monotonic: invalidation/trim cannot increase covered span.

Observability

  • Optional FlightRecorder(capacity): attach via new Intervache({ recorder }) to record recent events (put/hit/miss/evict/fetch) in a bounded ring.
  • Use recorder.dump() to inspect oldest→newest; clear() to reset.
  • Defaults to off; low allocation, meant for debugging/telemetry, not production logging.

import { Intervache, FlightRecorder } from "intervache";

const recorder = new FlightRecorder(100);
const cache = new Intervache({ recorder });

// ... later, inside an error handler ...
console.table(recorder.dump());
// prints: [{ type: 'put', ts: 123... }, { type: 'evict', reason: 'lru'... }]

Quick Start

import { Intervache } from "intervache";

const cache = new Intervache<number[]>();

// Query returns hits (cached) and misses (gaps)
const { hits, misses } = cache.get(0, 100);

// Fill the gaps
for (const { start, end } of misses) {
  const data = await fetchData(start, end);
  cache.put(start, end, data);
}

// Or use fetch() to do both:
const data = await cache.fetch(0, 100, async ({ start, end }) => {
  return fetchData(start, end);
});

API

Constructor

const cache = new Intervache<T>({
  maxCost: 1000, // LRU eviction threshold
  costFn: (f) => f.data.length, // Cost estimator
  ttl: 60_000, // Default TTL (ms)
  slicer: mySlicer, // Sub-range extractor
  merger: myMerger, // Adjacent combiner
  autoMerge: true, // Merge on insert
  on: { hit, miss, evict, put, change },
});

Query

cache.get(start, end); // → { hits, misses }
cache.peek(start, end); // Same, without updating LRU
cache.at(point); // Fragment at point
cache.has(start, end); // Fully cached?
cache.gaps(start, end); // Just the misses
cache.coverage(start, end); // { covered, missing, ratio }

Insert

cache.put(start, end, data, ttl?);    // Insert (must not overlap)
cache.putFragment(fragment, ttl?);    // From object
cache.putAll(fragments, ttl?);        // Multiple
cache.upsert(start, end, data, ttl?); // Replace overlapping

Fetch

// Query + auto-fetch misses
const frags = await cache.fetch(start, end, loader, ttl?);

// Fire-and-forget
await cache.prefetch(start, end, loader, ttl?);

Mutate

cache.invalidate(start, end); // Remove overlapping
cache.invalidateIf(predicate); // Remove by condition
cache.trimBefore(cutoff); // Drop old data
cache.trimAfter(cutoff); // Drop future data
cache.trimToWindow(start, end); // Keep only range
cache.trimToRecent(duration); // Sliding window
cache.coalesce(); // Merge adjacent
cache.clear(); // Reset

Iterate

for (const frag of cache) {}             // All fragments
for (const frag of cache.range(s, e)) {} // In range

cache.toArray(); // Snapshot
cache.forEach(fn); // Callback
cache.find(predicate); // First match
cache.some(predicate); // Any match
cache.every(predicate); // All match
cache.mapInPlace(fn); // Transform data

Inspect

cache.size; // Fragment count
cache.isEmpty; // No fragments?
cache.cost; // Total cost
cache.bounds(); // { start, end } or null
cache.span(); // end - start
cache.stats(); // { size, cost, hits, misses, evictions, hitRatio }
cache.clone(); // Independent copy
cache.resetStats(); // Clear counters

Batch

// Suppress change events until done
cache.batch(() => {
  cache.put(0, 10, a);
  cache.put(10, 20, b);
  cache.put(20, 30, c);
});

Serialize

const data = cache.serialize();
cache.deserialize(data);

const json = cache.toJSON();
const restored = Intervache.fromJSON<T>(json, opts);

Reliability

Intervache is tested with adversarial sequences that stress:

| Scenario | Focus |
| -------------------------- | -------------------------------------------------- |
| Pathological fragmentation | Many tiny adjacent fragments + overlapping inserts |
| Split/merge thrashing | Large interval + tiny inserts + trims/invalidate |
| Timestamp extremes | Near min/max safe ints with TTL monotonicity |
| Eviction races | Merges that trigger post-merge LRU eviction |
| Fetch races | Overlapping fetches resolving out-of-order |
| High-frequency oscillation | Repeated insert/merge/invalidate/trim loops |

Slicing

When fragments split (via invalidate, trim, or partial queries), the slicer extracts correct sub-ranges:

import { Intervache, arraySlicer } from "intervache";

const cache = new Intervache<number[]>({
  slicer: arraySlicer(60_000), // 1 element per minute
});

cache.put(0, 360_000, [1, 2, 3, 4, 5, 6]);
cache.invalidate(120_000, 240_000);
// Fragment 0: [0, 120_000) → [1, 2]
// Fragment 1: [240_000, 360_000) → [5, 6]

Built-in slicers for uniform intervals:

arraySlicer<T>(interval); // T[]
typedArraySlicer<A>(interval); // TypedArray
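The index arithmetic behind a uniform slicer is simple to sketch (hypothetical `sliceUniform` helper, not the shipped `arraySlicer`): map the requested sub-range onto array indices using the fragment's start and the per-element interval.

```typescript
// Sketch: uniform-interval slicing — one element per `interval` time units,
// with the fragment's data starting at time `fragStart`.
function sliceUniform<T>(
  data: T[],
  fragStart: number,
  interval: number,
  subStart: number,
  subEnd: number,
): T[] {
  const from = Math.floor((subStart - fragStart) / interval);
  const to = Math.ceil((subEnd - fragStart) / interval);
  return data.slice(Math.max(0, from), Math.min(data.length, to));
}

// Fragment [0, 360_000) with one element per minute:
const minuteData = [1, 2, 3, 4, 5, 6];
console.log(sliceUniform(minuteData, 0, 60_000, 0, 120_000));       // → [1, 2]
console.log(sliceUniform(minuteData, 0, 60_000, 240_000, 360_000)); // → [5, 6]
```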

Custom slicer for timestamped records:

import { Intervache, binarySearchSlicer } from "intervache";

// ...

// Optimized for irregular data (e.g. candles with weekends)
type Candle = { t: number }; // record with a timestamp field `t`

const cache = new Intervache<Candle[]>({
  // O(log n) slicing instead of O(n) filtering
  slicer: binarySearchSlicer((c) => c.t),
});
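The O(log n) behavior comes from a lower-bound binary search over the sorted timestamps; a minimal standalone sketch (hypothetical `lowerBound` helper, not the library's export):

```typescript
// Sketch: first index whose timestamp is >= t, over records sorted by time.
function lowerBound<T>(rows: T[], t: number, key: (r: T) => number): number {
  let lo = 0;
  let hi = rows.length;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;
    if (key(rows[mid]) < t) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

const rows = [{ t: 10 }, { t: 25 }, { t: 70 }]; // irregular spacing (gaps, weekends)
// Slicing [20, 80) means taking indices [lowerBound(20), lowerBound(80)):
console.log(rows.slice(lowerBound(rows, 20, (r) => r.t), lowerBound(rows, 80, (r) => r.t)));
// → [{ t: 25 }, { t: 70 }]
```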

Merging

Auto-merge adjacent fragments to reduce fragmentation:

import { Intervache, arrayMerger } from "intervache";

const cache = new Intervache<number[]>({
  merger: arrayMerger(),
  autoMerge: true,
});

cache.put(0, 10, [1, 2]);
cache.put(10, 20, [3, 4]);
// Single fragment: [0, 20) → [1, 2, 3, 4]

Built-in mergers:

arrayMerger<T>(); // Concatenate arrays
float64ArrayMerger(); // Concat Float64Arrays

Custom merger:

const cache = new Intervache<string>({
  merger: (left, right) => left.data + right.data,
  autoMerge: true,
});

Performance

| Operation | Complexity |
| ---------------- | ------------------------ |
| get, has | O(log n + k) |
| put | O(n) worst case (splice) |
| at | O(log n) |
| invalidate, trim | O(n) |
| coalesce | O(n) |
| eviction | O(n) |

For >10K fragments, use autoMerge to keep counts low.

License

MIT