# @markwharton/api-core

v1.8.1

Shared utilities for API client packages.
## Install

```sh
npm install @markwharton/api-core
```

## API Reference
### ApiError
Base error class for API client libraries. Provides status, rawResponse, and a static fromResponse() factory. Library-specific error classes (LPError, PayrollError, HRError) extend this.
```ts
import { ApiError } from '@markwharton/api-core';

// Use directly
throw new ApiError('Not found', 404, { rawResponse: body });

// Create from HTTP response
const error = ApiError.fromResponse(statusCode, responseText);
console.log(error.message, error.status);

// Extend for library-specific errors
class MyError extends ApiError {
  constructor(message: string, status: number) {
    super(message, status);
    this.name = 'MyError';
  }
}
```

### ClientConfig
Configuration interface shared by all API client libraries. Each client extends this with API-specific required fields.
```ts
import type { ClientConfig, OnRequestCallback } from '@markwharton/api-core';

interface MyClientConfig extends ClientConfig {
  apiToken: string; // API-specific required fields
}
```

| Field | Type | Description |
|-------|------|-------------|
| `baseUrl` | `string?` | Override the default API base URL |
| `onRequest` | `OnRequestCallback?` | Callback invoked before each request (for logging/debugging) |
| `retry` | `RetryConfig?` | Retry configuration for 429/503 responses |
| `cacheInstance` | `Cache?` | Custom cache backend (e.g., `LayeredCache` with persistent stores) |

`OnRequestCallback` signature: `(info: { method: string; url: string; description?: string }) => void`
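A callback matching this signature might look like the following sketch (the log format and URL are invented for illustration; in real use the type would be imported from the package):

```ts
// Type copied from the signature above, defined locally so the sketch is self-contained.
type OnRequestCallback = (info: { method: string; url: string; description?: string }) => void;

// Format a one-line log entry for a request.
function formatRequest(info: { method: string; url: string; description?: string }): string {
  return `[api] ${info.method} ${info.url}${info.description ? ` (${info.description})` : ''}`;
}

const logRequest: OnRequestCallback = (info) => console.log(formatRequest(info));

logRequest({ method: 'GET', url: 'https://api.example.com/employees', description: 'list employees' });
```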
### RateLimiter
Sliding window rate limiter. Enforces a maximum number of requests per time window. Callers that exceed the limit are queued and released when capacity opens.
```ts
import { RateLimiter } from '@markwharton/api-core';

const limiter = new RateLimiter(5); // 5 requests per second (default window: 1000ms)

// Acquire a token before each request — resolves immediately if under the limit
await limiter.acquire();
await fetch(url);
```

| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `constructor` | `maxRequests: number, windowMs?: number` | - | Create limiter (default window: 1000ms) |
| `acquire` | - | `Promise<void>` | Acquire token. Resolves immediately or queues until capacity opens |
#### How queuing works
acquire() tracks timestamps of recent requests in a sliding window. If fewer than maxRequests timestamps fall within the window, it adds a timestamp and resolves immediately. If the window is full, the returned Promise is added to an internal queue — the caller simply awaits longer. No 429 is returned, no retry happens; the delay is transparent.
#### How queued requests clear
When a request is queued, scheduleFlush() sets a setTimeout for when the oldest timestamp will exit the sliding window (windowMs - elapsed + 1ms). When the timer fires, expired timestamps are pruned and queued promises are resolved up to maxRequests. If requests remain in the queue, another flush is scheduled. This repeats until the queue drains.
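The mechanism described above can be condensed into a small sketch (a simplified illustration, not the package's actual source):

```ts
// Sliding-window limiter sketch: timestamps of recent calls are kept;
// when the window is full, callers queue and a timer releases them.
class SlidingWindowLimiter {
  private timestamps: number[] = [];
  private queue: Array<() => void> = [];
  private flushTimer: ReturnType<typeof setTimeout> | null = null;

  constructor(private maxRequests: number, private windowMs: number = 1000) {}

  acquire(): Promise<void> {
    this.prune();
    if (this.timestamps.length < this.maxRequests) {
      this.timestamps.push(Date.now());
      return Promise.resolve();
    }
    // Window is full: queue the caller and schedule a flush for when
    // the oldest timestamp exits the window.
    return new Promise((resolve) => {
      this.queue.push(resolve);
      this.scheduleFlush();
    });
  }

  // Drop timestamps that have aged out of the window.
  private prune(): void {
    const cutoff = Date.now() - this.windowMs;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
  }

  private scheduleFlush(): void {
    if (this.flushTimer) return;
    const waitMs = this.windowMs - (Date.now() - this.timestamps[0]) + 1;
    this.flushTimer = setTimeout(() => {
      this.flushTimer = null;
      this.prune();
      // Release queued callers up to capacity, then reschedule if any remain.
      while (this.queue.length > 0 && this.timestamps.length < this.maxRequests) {
        this.timestamps.push(Date.now());
        this.queue.shift()!();
      }
      if (this.queue.length > 0) this.scheduleFlush();
    }, Math.max(waitMs, 1));
  }
}
```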
#### Two-layer protection
RateLimiter is the first layer — it throttles outbound requests to prevent 429s. fetchWithRetry is the second layer — a backstop that retries with exponential backoff if a 429 comes back anyway (e.g., multiple serverless instances sharing a global API limit). In PayrollClient, the flow is: acquire() → fetch → if 429, fetchWithRetry handles backoff.
Per-instance state: RateLimiter uses an in-memory sliding window. Each serverless instance has its own independent limiter. See Known Limitations in the root README.
### pickFields
Allowlist-based field selection for API responses. Only fields present in keys are included. Missing fields are omitted (not set to undefined).
```ts
import { pickFields } from '@markwharton/api-core';

const raw = { id: 1, name: 'Alice', secret: 'hidden' };
const safe = pickFields<{ id: number; name: string }>(raw, ['id', 'name']);
// { id: 1, name: 'Alice' }
```

### resolveRetryConfig
Resolve a partial retry config into a fully-specified config with standard defaults. Returns undefined when config is not provided (retry disabled).
```ts
import { resolveRetryConfig } from '@markwharton/api-core';

const config = resolveRetryConfig({});
// { maxRetries: 3, initialDelayMs: 1000, maxDelayMs: 10000 }

const disabled = resolveRetryConfig(undefined);
// undefined
```

### parseJsonErrorResponse
Parse a JSON error response body into a human-readable message. Handles { message }, { error }, and plain text formats.
```ts
import { parseJsonErrorResponse } from '@markwharton/api-core';

const { message } = parseJsonErrorResponse('{"message":"Bad request"}', 400);
// 'Bad request'
```

### Cache System
Pluggable cache with layered store support. Two-level abstraction:

- `Cache` — high-level interface clients call (read-through with factory + request coalescing)
- `CacheStore` — low-level interface consumers implement for custom backends (simple async get/set/invalidate/clear)
#### Cache Interface
```ts
interface Cache {
  get<T>(key: string, ttlMs: number, factory: () => Promise<T>, options?: CacheGetOptions): Promise<T>;
  invalidate(prefix: string): void;
  clear(): void;
}

interface CacheGetOptions {
  persist?: boolean; // If false, skip persistent stores (memory only). Default: true
  shouldCache?: (data: unknown) => boolean; // If provided, only store when predicate returns true. Default: always store
}
```

`shouldCache` predicate: When provided, the factory result is checked before storing. If the predicate returns false, the data is still returned to the caller but not stored in any cache layer. The next call triggers a fresh factory invocation. All three client packages use this internally to prevent caching failed `Result<T>` objects — consumers don't need to configure it.
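As a hedged illustration of this behavior (a toy cache, not the package's implementation), the predicate check sits between the factory call and the store write:

```ts
// Minimal read-through cache honoring a shouldCache predicate.
type Entry = { data: unknown; expiresAt: number };

class TinyCache {
  private store = new Map<string, Entry>();

  async get<T>(
    key: string,
    ttlMs: number,
    factory: () => Promise<T>,
    options?: { shouldCache?: (data: unknown) => boolean },
  ): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.data as T;

    const data = await factory();
    // The result is always returned, but only stored when the predicate allows it.
    if (!options?.shouldCache || options.shouldCache(data)) {
      this.store.set(key, { data, expiresAt: Date.now() + ttlMs });
    }
    return data;
  }
}
```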
#### CacheStore Interface
```ts
interface CacheStore {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, data: T, ttlMs: number): Promise<void>;
  invalidate(prefix: string): Promise<void>;
  clear(): Promise<void>;
  readonly persistent: boolean;
}
```

#### TTLCache
In-memory cache implementing Cache with request coalescing. When multiple concurrent callers request the same expired key, only one factory call is made.
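The coalescing behavior can be sketched as follows (a simplified stand-in, not the real TTLCache):

```ts
// Concurrent callers for the same missing key share one in-flight
// factory promise instead of each invoking the factory.
class CoalescingCache {
  private values = new Map<string, { data: unknown; expiresAt: number }>();
  private inFlight = new Map<string, Promise<unknown>>();

  get<T>(key: string, ttlMs: number, factory: () => Promise<T>): Promise<T> {
    const hit = this.values.get(key);
    if (hit && hit.expiresAt > Date.now()) return Promise.resolve(hit.data as T);

    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>; // coalesce onto the existing call

    const p = factory()
      .then((data) => {
        this.values.set(key, { data, expiresAt: Date.now() + ttlMs });
        return data;
      })
      .finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, p);
    return p;
  }
}
```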
```ts
import { TTLCache } from '@markwharton/api-core';

const cache = new TTLCache();
const data = await cache.get('employees', 5 * 60_000, () => fetchEmployees());
cache.invalidate('timesheet:');
cache.clear();
```

#### MemoryCacheStore
Simple in-memory CacheStore for use as L1 in LayeredCache. Marked as non-persistent.
#### LayeredCache
Compositor that chains multiple CacheStore instances into a single Cache. Stores are checked in order; on a hit at level N, earlier stores are backfilled. Factory is only called on all-miss.
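The read path described above might look like this sketch (illustrative only; the store interface is trimmed and the function name is invented):

```ts
// Check stores in order, backfill earlier stores on a hit,
// call the factory only when every store misses.
interface Store {
  get<T>(key: string): Promise<T | undefined>;
  set<T>(key: string, data: T, ttlMs: number): Promise<void>;
}

async function layeredGet<T>(
  stores: Store[],
  key: string,
  ttlMs: number,
  factory: () => Promise<T>,
): Promise<T> {
  for (let i = 0; i < stores.length; i++) {
    const hit = await stores[i].get<T>(key);
    if (hit !== undefined) {
      // Backfill every earlier (faster) store so the next read hits sooner.
      for (let j = 0; j < i; j++) await stores[j].set(key, hit, ttlMs);
      return hit;
    }
  }
  // All-miss: invoke the factory and write the result to every layer.
  const data = await factory();
  for (const store of stores) await store.set(key, data, ttlMs);
  return data;
}
```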
```ts
import { LayeredCache, MemoryCacheStore } from '@markwharton/api-core';
import type { CacheStore } from '@markwharton/api-core';

// Consumer implements CacheStore for persistent backend
class TableStoreCacheStore implements CacheStore {
  readonly persistent = true;
  async get<T>(key: string): Promise<T | undefined> { /* ... */ }
  async set<T>(key: string, data: T, ttlMs: number): Promise<void> { /* ... */ }
  async invalidate(prefix: string): Promise<void> { /* ... */ }
  async clear(): Promise<void> { /* ... */ }
}

const cache = new LayeredCache([
  new MemoryCacheStore(), // L1: fast, non-persistent
  new TableStoreCacheStore(client), // L2: survives cold starts
]);
```

#### Data Flow
How data moves through a two-layer cache (e.g., MemoryCacheStore L1 + persistent L2):
| Scenario | L1 (Memory) | L2 (Persistent) | Factory called? |
|----------|------------|----------------|----------------|
| Warm hit | Hit | Skipped | No |
| Cold start, within TTL | Miss | Hit → backfill L1 | No |
| Expired everywhere | Miss | Miss | Yes → write both |
| API error | Miss | Miss | Yes → write neither |
| Restricted (persist: false) | Hit/Miss | Skipped entirely | Only if L1 miss |
Data sensitivity: Pass `{ persist: false }` to skip persistent stores for restricted data:

```ts
cache.get('employees:pii', 300_000, fetchPiiData, { persist: false });
```

Serialization caveat: Persistent `CacheStore` implementations must handle JSON serialization. Types containing `Map` (e.g., `LPWorkspaceTree`) are not JSON-serializable — consumers must handle conversion in their `CacheStore.set`/`get`.
### fetchWithRetry
Automatic retry on HTTP 429 and 503 with exponential backoff. Respects the Retry-After header.
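The backoff schedule can be sketched as a pure function (this formula is an assumption based on the config fields, not the package's verbatim logic):

```ts
// Compute the delay before a retry: Retry-After (in seconds) takes
// precedence; otherwise exponential backoff capped at maxDelayMs.
function retryDelayMs(
  attempt: number, // 0-based retry attempt
  config: { initialDelayMs: number; maxDelayMs: number },
  retryAfterHeader?: string,
): number {
  if (retryAfterHeader) {
    const seconds = Number(retryAfterHeader);
    if (!Number.isNaN(seconds)) return seconds * 1000;
  }
  return Math.min(config.initialDelayMs * 2 ** attempt, config.maxDelayMs);
}
```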
```ts
import { fetchWithRetry } from '@markwharton/api-core';

const response = await fetchWithRetry(url, init, {
  retry: { maxRetries: 3, initialDelayMs: 1000, maxDelayMs: 10000 },
  onRetry: ({ attempt, delayMs, status }) => {
    console.log(`Retry ${attempt} after ${delayMs}ms (HTTP ${status})`);
  },
});
```

### batchMap
Map over items with bounded concurrency.
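Bounded concurrency can be sketched like this (illustrative only, not the package's source):

```ts
// At most `limit` worker loops pull items off a shared index;
// result order matches input order.
async function boundedMap<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const worker = async () => {
    while (next < items.length) {
      const i = next++; // safe: no await between check and increment
      results[i] = await fn(items[i]);
    }
  };
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```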
```ts
import { batchMap } from '@markwharton/api-core';

const results = await batchMap(items, 5, async (item) => fetchItem(item.id));
```

### getErrorMessage
Extract a safe error message from any error type.
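A typical shape for such a helper (assumed; the package may handle more edge cases):

```ts
// Turn an unknown thrown value into a readable string without throwing again.
function errorMessage(err: unknown): string {
  if (err instanceof Error) return err.message;
  if (typeof err === 'string') return err;
  try {
    return JSON.stringify(err);
  } catch {
    return String(err); // e.g., circular objects
  }
}
```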
```ts
import { getErrorMessage } from '@markwharton/api-core';

try {
  // ...
} catch (err) {
  console.error(getErrorMessage(err));
}
```

### Client Helpers
Shared patterns extracted from all three API clients.
| Function | Parameters | Returns | Description |
|----------|------------|---------|-------------|
| `cachedResult<T>` | `cache, key, ttlMs, factory, options?` | `Promise<Result<T>>` | Route through cache, skipping failed Results |
| `fetchAndParseResponse<T>` | `doFetch, parse, parseError` | `Promise<Result<T>>` | Fetch + response check + error parsing |
| `resolveClientCache` | `config: { cache?, cacheInstance? }` | `Cache \| undefined` | Create cache instance from client config |
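As a hedged sketch of the cachedResult pattern (the `Result<T>` shape and the in-memory store below are assumptions for illustration, not the package's actual types):

```ts
type Result<T> = { ok: true; data: T } | { ok: false; error: string };

// Minimal in-memory store standing in for the cache parameter.
const store = new Map<string, { value: unknown; expiresAt: number }>();

async function cachedResultSketch<T>(
  key: string,
  ttlMs: number,
  factory: () => Promise<Result<T>>,
): Promise<Result<T>> {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as Result<T>;
  const result = await factory();
  // Only successful Results are stored; failures fall through so the
  // next call retries the factory.
  if (result.ok) store.set(key, { value: result, expiresAt: Date.now() + ttlMs });
  return result;
}
```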
## License
MIT
