
tinyStats

Zero-latency request counter with tiered time-series storage.

Hot path cost is a single Map.set(). Counts are flushed asynchronously on a configurable interval. Storage is pluggable via adapters.

How it works

increment(key)          → in-memory Map (nanoseconds, no I/O)
flush every 5s          → adapter writes deltas to storage
rollup every hour       → hourly rows → daily aggregates (after 3 days)
rollup every day        → daily rows  → monthly aggregates (after 30 days)
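The hot-path/flush split above can be sketched as a minimal double-buffered collector. Names here are illustrative, not the library's internals:

```typescript
// Minimal sketch of the double-buffer pattern: increments touch only an
// in-memory Map; flush swaps in a fresh Map before doing any I/O.
type FlushFn = (deltas: Map<string, number>) => Promise<void>;

class MiniCollector {
  private active = new Map<string, number>();

  // Hot path: one Map.set, no I/O, no awaits.
  increment(key: string, delta = 1): void {
    this.active.set(key, (this.active.get(key) ?? 0) + delta);
  }

  // Flush path: swap buffers first so the hot path never waits on storage.
  async flush(write: FlushFn): Promise<void> {
    const snapshot = this.active;
    this.active = new Map(); // new increments land here immediately
    if (snapshot.size > 0) await write(snapshot);
  }

  get pendingCount(): number {
    return this.active.size;
  }
}
```

The swap is why the hot path stays at a single `Map.set()` even while a flush is in flight.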

Install

npm install tinystats

Peer dependencies: pg and/or ioredis depending on which adapters you use.

Development

npm install          # Install dependencies
npm run build        # Build to dist/ (ESM + CJS + types)
npm test             # Run unit and scale tests
npm run test:watch   # Run tests in watch mode

Tests include:

  • Unit tests for collector behavior (double-buffer, error handling, flush)
  • Scale tests validating memory footprint (10K, 100K keys)
  • Performance benchmarks (increment latency, throughput, non-blocking flush)

The build outputs:

  • dist/index.js — ESM bundle
  • dist/index.cjs — CommonJS bundle
  • dist/index.d.ts — TypeScript declarations
  • Source maps for both formats

Quick start

import { StatsCollector, PostgresAdapter, RollupJob } from 'tinystats';
import { Pool } from 'pg';

const pool  = new Pool({ connectionString: process.env.DATABASE_URL });
const stats = new StatsCollector({ adapter: new PostgresAdapter(pool) });
const rollup = new RollupJob(pool);

rollup.start();

// Hot path — this is the entire overhead
stats.increment('product:abc123:views');

// Graceful shutdown
process.on('SIGTERM', async () => {
  rollup.stop();
  await stats.destroy();
});

Run the schema once before first use:

psql $DATABASE_URL -f node_modules/tinystats/schema.sql

Performance footprint

Memory (per active key in buffer):

  • ~50-100 bytes per key-value pair in the active Map
  • 10K active keys ≈ 0.5-1 MB
  • 100K active keys ≈ 5-10 MB
  • Double-buffer swap temporarily holds 2× during flush

CPU per increment:

  • Single Map.set() operation ≈ 10-50 nanoseconds
  • 1M increments/sec ≈ 5-10% of one core (amortized hash table cost)
  • Zero I/O blocking, zero syscalls

Flush operation:

  • Cost depends entirely on adapter (network + storage write)
  • PostgresAdapter: single INSERT ... ON CONFLICT per flush batch
  • RedisAdapter: single pipeline with HINCRBY commands
  • Active buffer swaps immediately — hot path never waits

At 10K active keys with 5s flush interval, steady-state memory is ~1 MB with negligible CPU overhead.
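The single-statement Postgres upsert might be assembled along these lines. This is a sketch only; the real table and column names live in schema.sql and may differ:

```typescript
// Sketch: build one INSERT ... ON CONFLICT statement for an entire flush
// batch. Table/column names (stats_hourly, key, bucket, count) are
// assumptions -- check schema.sql for the actual schema.
function buildUpsert(deltas: Map<string, number>, bucket: Date) {
  const values: string[] = [];
  const params: unknown[] = [bucket]; // $1 is the shared hour bucket
  let i = 2;
  for (const [key, count] of deltas) {
    values.push(`($${i++}, $1, $${i++})`);
    params.push(key, count);
  }
  const text =
    `INSERT INTO stats_hourly (key, bucket, count) VALUES ${values.join(', ')} ` +
    `ON CONFLICT (key, bucket) DO UPDATE SET count = stats_hourly.count + EXCLUDED.count`;
  return { text, params };
}
```

One statement per batch keeps the flush to a single network round trip regardless of how many keys were incremented.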

Adapters

| Adapter | Use case |
|---|---|
| LocalAdapter | Dev, testing, no persistence |
| PostgresAdapter | Primary storage with full three-tier rollup |
| RedisAdapter | Hot-tier cache, hourly hashes with TTL |
| CompositeAdapter | Fan-out to multiple adapters simultaneously |

Pass your existing client instances directly — adapters accept structural interfaces, not library-specific types:

// Existing clients work as-is
const redis = new Redis(existingConfig);
const pool  = new Pool(existingConfig);

new RedisAdapter(redis);
new PostgresAdapter(pool);
new RollupJob(pool);        // same pool instance, no extra connection

node-redis v4 is not compatible out of the box — it uses .multi() instead of .pipeline(). A thin wrapper implementing RedisClient is required.
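A hedged sketch of such a wrapper, assuming the adapter drives the pipeline via `hincrby`/`expire`/`exec`. Check the package's exported `RedisClient` type for the exact contract:

```typescript
// Sketch: adapt node-redis v4's .multi() to a pipeline-style interface.
// The Pipeline shape below is an assumption about what RedisAdapter
// expects -- verify against dist/index.d.ts before relying on it.
interface Pipeline {
  hincrby(key: string, field: string, by: number): Pipeline;
  expire(key: string, seconds: number): Pipeline;
  exec(): Promise<unknown>;
}

interface NodeRedisV4 {
  multi(): {
    hIncrBy(key: string, field: string, by: number): unknown;
    expire(key: string, seconds: number): unknown;
    exec(): Promise<unknown>;
  };
}

function wrapNodeRedis(client: NodeRedisV4): { pipeline(): Pipeline } {
  return {
    pipeline() {
      const multi = client.multi(); // node-redis v4 uses camelCase commands
      const p: Pipeline = {
        hincrby(key, field, by) { multi.hIncrBy(key, field, by); return p; },
        expire(key, seconds)    { multi.expire(key, seconds); return p; },
        exec: () => multi.exec(),
      };
      return p;
    },
  };
}
```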

Tiered storage

| Tier | Resolution | Retention | Table |
|---|---|---|---|
| Hot | Hourly | 3 days | stats_hourly (partitioned) |
| Warm | Daily | 30 days | stats_daily |
| Cold | Monthly | 18 months | stats_monthly |

Retention and flush interval are configurable:

new StatsCollector({ adapter, flushIntervalMs: 10_000 });

new RollupJob(pool, {
  hourlyRetentionDays: 7,
  dailyRetentionDays:  60,
  onRollup: (r) => logger.info(r),
});

Querying

import { queryStats } from 'tinystats';

// Automatically selects the right tier based on date range
const rows = await queryStats(pool, 'product:abc123:views', from, to);
// [{ bucket: Date, count: number, tier: 'hourly' | 'daily' | 'monthly' }]
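That tier choice could work along these lines, based on the default retention windows; the real cutoff logic inside `queryStats` may differ:

```typescript
// Sketch: pick the finest tier whose retention still covers the start of
// the range (3d hourly / 30d daily / monthly beyond that). This mirrors
// the default retention windows; it is an assumption about queryStats.
type Tier = 'hourly' | 'daily' | 'monthly';

function pickTier(from: Date, now: Date = new Date()): Tier {
  const ageDays = (now.getTime() - from.getTime()) / 86_400_000;
  if (ageDays <= 3) return 'hourly';  // hot tier still has these buckets
  if (ageDays <= 30) return 'daily';  // warm tier
  return 'monthly';                   // cold tier
}
```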

Multiple instances

Rollup jobs are safe to run across multiple instances. Each job acquires a pg_try_advisory_xact_lock before executing; only one instance runs per cycle, and the rest skip silently. The lock is transaction-scoped and released automatically on completion or crash.

Custom lock IDs: If multiple apps share the same database, specify unique advisory lock IDs to prevent conflicts:

new RollupJob(pool, {
  advisoryLocks: {
    hourlyToDaily:   0x12_34_56_01n,
    dailyToMonthly:  0x12_34_56_02n,
  }
});

Default locks are 0x574101n and 0x574102n.
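The per-cycle guard can be sketched like this. `pg_try_advisory_xact_lock` is a real Postgres function; the surrounding shape and interfaces are assumptions, not the library's internals:

```typescript
// Sketch: run `work` only if this instance wins the transaction-scoped
// advisory lock; losing instances skip silently. The lock releases with
// COMMIT, ROLLBACK, or a dropped connection -- no manual unlock.
interface Queryable {
  query(text: string, params?: unknown[]): Promise<{ rows: any[] }>;
  release(): void;
}
interface MiniPool {
  connect(): Promise<Queryable>;
}

async function withRollupLock(
  pool: MiniPool,
  lockId: bigint,
  work: (client: Queryable) => Promise<void>,
): Promise<boolean> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { rows } = await client.query(
      'SELECT pg_try_advisory_xact_lock($1) AS locked',
      [lockId.toString()],
    );
    if (!rows[0].locked) {
      await client.query('ROLLBACK'); // another instance won this cycle
      return false;
    }
    await work(client);
    await client.query('COMMIT');     // lock released with the transaction
    return true;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```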

Storage sizing

At the default retention settings (3d hourly / 30d daily / 18mo monthly), each tracked entity uses a maximum of 120 rows (72 hourly + 30 daily + 18 monthly) across all three tiers.

| Active entities | Approx. storage |
|---|---|
| 100K | ~1.5 GB |
| 1M | ~15 GB |
| 10M | ~150 GB |

Rows are only written for windows where activity occurred — sparse entities use proportionally less.
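As a back-of-envelope check of the table above (assuming ~125 bytes per row, which is what the figures imply):

```typescript
// Back-of-envelope storage check matching the sizing table above.
// bytesPerRow is an inferred assumption (~1.5 GB / 100K entities / 120 rows).
const hourlyRows  = 3 * 24; // 3 days of hourly buckets = 72
const dailyRows   = 30;     // 30 daily buckets
const monthlyRows = 18;     // 18 monthly buckets
const maxRowsPerEntity = hourlyRows + dailyRows + monthlyRows; // 120

const bytesPerRow = 125;
const gbFor = (entities: number) =>
  (entities * maxRowsPerEntity * bytesPerRow) / 1e9;
```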

Flush error behaviour

On adapter failure, unwritten deltas are re-merged into the active buffer and retried on the next flush cycle. A persistent adapter failure causes slight over-counting on recovery rather than data loss.
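The recovery path can be sketched as follows (illustrative names; the real collector does this inside its flush cycle):

```typescript
// Sketch: on adapter failure, fold the unwritten snapshot back into the
// active buffer. If the adapter partially applied the batch before
// failing, the retry re-adds already-written deltas -- hence the slight
// over-count on recovery rather than data loss.
function remerge(active: Map<string, number>, failed: Map<string, number>): void {
  for (const [key, delta] of failed) {
    active.set(key, (active.get(key) ?? 0) + delta);
  }
}
```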

API

class StatsCollector {
  increment(key: string, delta?: number): void
  incrementMany(entries: Iterable<[string, number]>): void
  flush(): Promise<void>
  destroy(): Promise<void>
  readonly pendingCount: number
}

class RollupJob {
  start(): void
  stop(): void
  run(job: 'hourly-to-daily' | 'daily-to-monthly'): Promise<RollupResult>
}