# @waitstate/edge
Edge middleware for WaitState - gate, forward, and observe requests at the edge. Zero compute waste on denied requests.
## Install

```sh
npm install @waitstate/edge
```

## Quick Start
### Cloudflare Workers
```ts
import { EdgeGate } from '@waitstate/edge';

const gate = new EdgeGate({
  publishKey: 'ws_pub_xxx',
  secretKey: 'ws_sec_xxx',
  origin: 'https://api.myapp.com',
  classify: (req) => ({
    tag: req.headers.get('x-api-tier') || 'anonymous',
    weight: 1,
  }),
});

export default {
  fetch(request, env, ctx) {
    return gate.handle(request, { waitUntil: ctx.waitUntil.bind(ctx) });
  },
};
```

### Vercel Edge Middleware
```ts
import { EdgeGate } from '@waitstate/edge';

const gate = new EdgeGate({
  publishKey: process.env.WAITSTATE_PUBLISH_KEY!,
  secretKey: process.env.WAITSTATE_SECRET_KEY!,
  origin: 'https://api.myapp.com',
  classify: (req) => ({
    tag: new URL(req.url).pathname.startsWith('/api/v2') ? 'v2' : 'v1',
    weight: 1,
  }),
});

export default async function middleware(request: Request) {
  return gate.handle(request);
}
```

### Generic (any edge runtime)
```ts
import { EdgeGate } from '@waitstate/edge';

const gate = new EdgeGate({
  publishKey: 'ws_pub_xxx',
  secretKey: 'ws_sec_xxx',
  origin: 'https://api.myapp.com',
  classify: () => ({ tag: 'default', weight: 1 }),
});

const response = await gate.handle(request);
```

## Configuration
| Option | Type | Required | Default | Description |
|--------|------|----------|---------|-------------|
| `publishKey` | `string` | Yes | - | WaitState publish key |
| `secretKey` | `string` | Yes | - | WaitState secret key |
| `origin` | `string` | Yes | - | Origin URL to which allowed requests are forwarded |
| `classify` | `(req: Request) => ClassifyResult` | Yes | - | Extracts a tag and weight from each request |
| `siteId` | `string` | No | - | Site identifier for telemetry sharding |
| `baseUrl` | `string` | No | `https://api.waitstate.io` | WaitState API base URL |
| `policyTtl` | `number` | No | `5000` | Policy cache TTL in milliseconds |
| `flushInterval` | `number` | No | `5000` | Telemetry batch flush interval in milliseconds |
| `onError` | `(error: Error) => void` | No | - | Error callback for debugging |
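The optional settings can be combined with the required ones. A sketch of a fuller configuration; the keys, origin, `siteId` value, and the specific numbers chosen here are illustrative placeholders, not recommendations:

```ts
import { EdgeGate } from '@waitstate/edge';

// Illustrative configuration exercising the optional settings from the
// table above. All values are placeholders.
const gate = new EdgeGate({
  publishKey: 'ws_pub_xxx',
  secretKey: 'ws_sec_xxx',
  origin: 'https://api.myapp.com',
  classify: (req) => ({ tag: 'default', weight: 1 }),
  siteId: 'my-site',          // shard telemetry per site
  policyTtl: 10_000,          // re-fetch the cached policy every 10 s
  flushInterval: 2_000,       // flush telemetry batches every 2 s
  onError: (err) => console.error('waitstate:', err), // surface gate errors
});
```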
## How It Works
1. **Classify** - `classify(request)` extracts a tag and weight from the incoming request
2. **Policy** - fetches and caches the rate-limiting policy from WaitState (TTL-based)
3. **Gate** - checks the request against the cached policy (kill signal, global block, tag block, weight)
4. **Forward** - allowed requests are forwarded to the origin with path and query preserved
5. **Observe** - measures origin latency and detects 5xx errors
6. **Accumulate** - batches per-tag telemetry (latency, errors, gate counts) in memory
7. **Report** - flushes the aggregated pulse to the control plane on an interval via `waitUntil` (or fire-and-forget)
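The gate step above checks four conditions in order. A minimal sketch of that decision as a pure function; the `Policy` shape here is an assumption for illustration, not the actual WaitState wire format:

```ts
// Hypothetical policy shape, invented for this sketch.
interface Policy {
  kill: boolean;            // kill signal: deny all traffic
  globalBlock: boolean;     // global block for this site
  blockedTags: Set<string>; // tags currently blocked
  maxWeight: number;        // assumed per-request weight ceiling
}

// Apply the four gate checks described above, in order.
function allow(policy: Policy, tag: string, weight: number): boolean {
  if (policy.kill) return false;
  if (policy.globalBlock) return false;
  if (policy.blockedTags.has(tag)) return false;
  if (weight > policy.maxWeight) return false;
  return true;
}
```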
Denied requests receive an immediate 429 (Too Many Requests) response: no origin call, no compute waste.

The gate is fail-open: if the WaitState API is unreachable, all requests are allowed through.
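Since denied requests surface as 429s, a caller can treat them as a back-off signal. A hedged sketch of one way to do that; the function name, retry count, and delays are illustrative, not part of this package:

```ts
// Hypothetical caller-side helper: retry gated (429) responses with
// exponential backoff, and return any non-429 response as-is.
async function fetchWithBackoff(url: string, retries = 3): Promise<Response> {
  for (let attempt = 0; attempt < retries; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;                            // allowed (or origin error)
    await new Promise((r) => setTimeout(r, 250 * 2 ** attempt));   // denied: back off and retry
  }
  return fetch(url); // final attempt, returned whatever it is
}
```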
## Documentation

Full docs, API reference, and reflex rule guides: [waitstate.io/docs](https://waitstate.io/docs)
