
@markwharton/api-core

v1.8.1

Shared utilities for API client packages.

Install

npm install @markwharton/api-core

API Reference

ApiError

Base error class for API client libraries. Provides status, rawResponse, and a static fromResponse() factory. Library-specific error classes (LPError, PayrollError, HRError) extend this.

import { ApiError } from '@markwharton/api-core';

// Use directly
throw new ApiError('Not found', 404, { rawResponse: body });

// Create from HTTP response
const error = ApiError.fromResponse(statusCode, responseText);
console.log(error.message, error.status);

// Extend for library-specific errors
class MyError extends ApiError {
  constructor(message: string, status: number) {
    super(message, status);
    this.name = 'MyError';
  }
}

ClientConfig

Configuration interface shared by all API client libraries. Each client extends this with API-specific required fields.

import type { ClientConfig, OnRequestCallback } from '@markwharton/api-core';

interface MyClientConfig extends ClientConfig {
  apiToken: string;  // API-specific required fields
}

| Field | Type | Description |
|-------|------|-------------|
| baseUrl | string? | Override the default API base URL |
| onRequest | OnRequestCallback? | Callback invoked before each request (for logging/debugging) |
| retry | RetryConfig? | Retry configuration for 429/503 responses |
| cacheInstance | Cache? | Custom cache backend (e.g., LayeredCache with persistent stores) |

OnRequestCallback signature: (info: { method: string; url: string; description?: string }) => void

RateLimiter

Sliding window rate limiter. Enforces a maximum number of requests per time window. Callers that exceed the limit are queued and released when capacity opens.

import { RateLimiter } from '@markwharton/api-core';

const limiter = new RateLimiter(5);  // 5 requests per second (default window: 1000ms)

// Acquire a token before each request — resolves immediately if under the limit
await limiter.acquire();
await fetch(url);

| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| constructor | maxRequests: number, windowMs?: number | - | Create limiter (default window: 1000ms) |
| acquire | - | Promise<void> | Acquire a token. Resolves immediately or queues until capacity opens |

How queuing works

acquire() tracks timestamps of recent requests in a sliding window. If fewer than maxRequests timestamps fall within the window, it adds a timestamp and resolves immediately. If the window is full, the returned Promise is added to an internal queue — the caller simply awaits longer. No 429 is returned, no retry happens; the delay is transparent.

How queued requests clear

When a request is queued, scheduleFlush() sets a setTimeout for when the oldest timestamp will exit the sliding window (windowMs - elapsed + 1ms). When the timer fires, expired timestamps are pruned and queued promises are resolved up to maxRequests. If requests remain in the queue, another flush is scheduled. This repeats until the queue drains.

Two-layer protection

RateLimiter is the first layer — it throttles outbound requests to prevent 429s. fetchWithRetry is the second layer — a backstop that retries with exponential backoff if a 429 comes back anyway (e.g., multiple serverless instances sharing a global API limit). In PayrollClient, the flow is: acquire() → fetch → if 429, fetchWithRetry handles backoff.

Per-instance state: RateLimiter uses an in-memory sliding window. Each serverless instance has its own independent limiter. See Known Limitations in the root README.

pickFields

Allowlist-based field selection for API responses. Only fields present in keys are included. Missing fields are omitted (not set to undefined).

import { pickFields } from '@markwharton/api-core';

const raw = { id: 1, name: 'Alice', secret: 'hidden' };
const safe = pickFields<{ id: number; name: string }>(raw, ['id', 'name']);
// { id: 1, name: 'Alice' }
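The "missing fields are omitted" behaviour can be sketched in a few lines (an illustrative sketch, not the package's implementation; `pickFieldsSketch` is a name invented here):

```typescript
// Allowlist picker sketch: copy only keys that actually exist on the input,
// so absent keys are omitted entirely rather than set to undefined.
function pickFieldsSketch<T extends object>(
  obj: Record<string, unknown>,
  keys: string[],
): Partial<T> {
  const out: Record<string, unknown> = {};
  for (const key of keys) {
    if (key in obj) out[key] = obj[key]; // missing keys are skipped
  }
  return out as Partial<T>;
}
```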

resolveRetryConfig

Resolve a partial retry config into a fully-specified config with standard defaults. Returns undefined when config is not provided (retry disabled).

import { resolveRetryConfig } from '@markwharton/api-core';

const config = resolveRetryConfig({});
// { maxRetries: 3, initialDelayMs: 1000, maxDelayMs: 10000 }

const disabled = resolveRetryConfig(undefined);
// undefined

parseJsonErrorResponse

Parse a JSON error response body into a human-readable message. Handles { message }, { error }, and plain text formats.

import { parseJsonErrorResponse } from '@markwharton/api-core';

const { message } = parseJsonErrorResponse('{"message":"Bad request"}', 400);
// 'Bad request'
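The parsing strategy for the three formats can be sketched as follows (illustrative only, not the package source; the `HTTP ${status}` fallback for an empty body is an assumption of this sketch):

```typescript
// Prefer a `message` field, then an `error` field, then fall back to the
// raw body text when the payload is not JSON.
function parseErrorBody(body: string, status: number): { message: string } {
  try {
    const parsed = JSON.parse(body);
    if (typeof parsed?.message === 'string') return { message: parsed.message };
    if (typeof parsed?.error === 'string') return { message: parsed.error };
  } catch {
    // not JSON: fall through to plain-text handling
  }
  return { message: body || `HTTP ${status}` };
}
```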

Cache System

Pluggable cache with layered store support. Two-level abstraction:

  • Cache — high-level interface clients call (read-through with factory + request coalescing)
  • CacheStore — low-level interface consumers implement for custom backends (simple async get/set/invalidate/clear)

Cache Interface

interface Cache {
  get<T>(key: string, ttlMs: number, factory: () => Promise<T>, options?: CacheGetOptions): Promise<T>;
  invalidate(prefix: string): void;
  clear(): void;
}

interface CacheGetOptions {
  persist?: boolean;                         // If false, skip persistent stores (memory only). Default: true
  shouldCache?: (data: unknown) => boolean;  // If provided, only store when predicate returns true. Default: always store
}

shouldCache predicate: When provided, the factory result is checked before storing. If the predicate returns false, the data is still returned to the caller but not stored in any cache layer. The next call triggers a fresh factory invocation. All three client packages use this internally to prevent caching failed Result<T> objects — consumers don't need to configure it.

CacheStore Interface

interface CacheStore {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, data: T, ttlMs: number): Promise<void>;
  invalidate(prefix: string): Promise<void>;
  clear(): Promise<void>;
  readonly persistent: boolean;
}

TTLCache

In-memory cache implementing Cache with request coalescing. When multiple concurrent callers request the same expired key, only one factory call is made.

import { TTLCache } from '@markwharton/api-core';

const cache = new TTLCache();
const data = await cache.get('employees', 5 * 60_000, () => fetchEmployees());
cache.invalidate('timesheet:');
cache.clear();
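Request coalescing — the "only one factory call for concurrent callers" guarantee — can be sketched with an in-flight promise map (an illustrative sketch, not TTLCache's actual source; `CoalescingCache` is invented for this example):

```typescript
// Coalescing sketch: concurrent callers for the same missing key share one
// in-flight factory promise instead of each invoking the factory.
class CoalescingCache {
  private pending = new Map<string, Promise<unknown>>();
  private store = new Map<string, { data: unknown; expires: number }>();

  get<T>(key: string, ttlMs: number, factory: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expires > Date.now()) return Promise.resolve(hit.data as T);
    const inFlight = this.pending.get(key);
    if (inFlight) return inFlight as Promise<T>; // join the existing call
    const p = factory().then((data) => {
      this.store.set(key, { data, expires: Date.now() + ttlMs });
      this.pending.delete(key);
      return data;
    });
    this.pending.set(key, p);
    return p;
  }
}
```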

MemoryCacheStore

Simple in-memory CacheStore for use as L1 in LayeredCache. Marked as non-persistent.

LayeredCache

Compositor that chains multiple CacheStore instances into a single Cache. Stores are checked in order; on a hit at level N, earlier stores are backfilled. Factory is only called on all-miss.

import { LayeredCache, MemoryCacheStore } from '@markwharton/api-core';
import type { CacheStore } from '@markwharton/api-core';

// Consumer implements CacheStore for persistent backend
class TableStoreCacheStore implements CacheStore {
  readonly persistent = true;
  async get<T>(key: string): Promise<T | undefined> { /* ... */ }
  async set<T>(key: string, data: T, ttlMs: number): Promise<void> { /* ... */ }
  async invalidate(prefix: string): Promise<void> { /* ... */ }
  async clear(): Promise<void> { /* ... */ }
}

const cache = new LayeredCache([
  new MemoryCacheStore(),           // L1: fast, non-persistent
  new TableStoreCacheStore(client), // L2: survives cold starts
]);

Data Flow

How data moves through a two-layer cache (e.g., MemoryCacheStore L1 + persistent L2):

| Scenario | L1 (Memory) | L2 (Persistent) | Factory called? |
|----------|-------------|-----------------|-----------------|
| Warm hit | Hit | Skipped | No |
| Cold start, within TTL | Miss | Hit → backfill L1 | No |
| Expired everywhere | Miss | Miss | Yes → write both |
| API error | Miss | Miss | Yes → write neither |
| Restricted (persist: false) | Hit/Miss | Skipped entirely | Only if L1 miss |

Data sensitivity: Pass { persist: false } to skip persistent stores for restricted data:

cache.get('employees:pii', 300_000, fetchPiiData, { persist: false });

Serialization caveat: Persistent CacheStore implementations must handle JSON serialization. Types containing Map (e.g., LPWorkspaceTree) are not JSON-serializable — consumers must handle conversion in their CacheStore.set/get.
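One way to handle the Map caveat inside a persistent CacheStore — an assumption of this sketch, not something the package prescribes — is to round-trip Maps through entry arrays with a JSON replacer/reviver pair:

```typescript
// Convert Map values to tagged entry arrays on write...
function serializeForStore(data: unknown): string {
  return JSON.stringify(data, (_key, value) =>
    value instanceof Map ? { __map: Array.from(value.entries()) } : value,
  );
}

// ...and rebuild Maps from the tagged arrays on read.
function deserializeFromStore<T>(text: string): T {
  return JSON.parse(text, (_key, value) =>
    value && typeof value === 'object' && '__map' in value
      ? new Map(value.__map)
      : value,
  ) as T;
}
```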

fetchWithRetry

Automatic retry on HTTP 429 and 503 with exponential backoff. Respects the Retry-After header.

import { fetchWithRetry } from '@markwharton/api-core';

const response = await fetchWithRetry(url, init, {
  retry: { maxRetries: 3, initialDelayMs: 1000, maxDelayMs: 10000 },
  onRetry: ({ attempt, delayMs, status }) => {
    console.log(`Retry ${attempt} after ${delayMs}ms (HTTP ${status})`);
  },
});
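The delay policy can be sketched as a pure function (illustrative assumptions: doubling from initialDelayMs, capped at maxDelayMs, with a numeric Retry-After header taking precedence; not the package's actual source):

```typescript
// Compute the backoff delay for a given retry attempt (1-based).
function retryDelayMs(
  attempt: number,
  retryAfterHeader: string | null,
  initialDelayMs = 1000,
  maxDelayMs = 10000,
): number {
  if (retryAfterHeader) {
    const seconds = Number(retryAfterHeader);
    if (!Number.isNaN(seconds)) return seconds * 1000; // server knows best
  }
  // Exponential backoff: 1s, 2s, 4s, ... capped at maxDelayMs.
  return Math.min(initialDelayMs * 2 ** (attempt - 1), maxDelayMs);
}
```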

batchMap

Map over items with bounded concurrency.

import { batchMap } from '@markwharton/api-core';

const results = await batchMap(items, 5, async (item) => fetchItem(item.id));
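Bounded concurrency of this kind is commonly implemented with a worker pool sharing a cursor — a sketch of the idea, not the package's source (`batchMapSketch` is invented here):

```typescript
// At most `limit` mapper calls run at once; results keep input order.
async function batchMapSketch<T, R>(
  items: T[],
  limit: number,
  mapper: (item: T) => Promise<R>,
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const index = next++; // claim the next slot (safe: JS is single-threaded)
      results[index] = await mapper(items[index]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```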

getErrorMessage

Extract a safe error message from any error type.

import { getErrorMessage } from '@markwharton/api-core';

try { ... } catch (err) {
  console.error(getErrorMessage(err));
}

Client Helpers

Shared patterns extracted from all three API clients.

| Function | Parameters | Returns | Description |
|----------|------------|---------|-------------|
| cachedResult<T> | cache, key, ttlMs, factory, options? | Promise<Result<T>> | Route through cache, skipping failed Results |
| fetchAndParseResponse<T> | doFetch, parse, parseError | Promise<Result<T>> | Fetch + response check + error parsing |
| resolveClientCache | config: { cache?, cacheInstance? } | Cache \| undefined | Create cache instance from client config |

License

MIT