@vercel/queue v0.1.3 — A Node.js library for interacting with the Vercel Queue Service API
# Vercel Queues
A TypeScript client library for interacting with the Vercel Queue Service API, designed for seamless integration with Vercel deployments.
## Features

- **Simple API**: `send` and `handleCallback` are all you need for push-based workflows
- **Automatic Triggering on Vercel**: Vercel invokes your route handlers when messages are ready
- **Works Anywhere**: `send` and `receive` work in any Node.js environment, including self-hosted and non-Vercel platforms
- **Type Safety**: Full TypeScript generics support
- **Customizable Serialization**: Built-in JSON, Buffer, and Stream transports
- **Local Dev Mode**: Messages sent locally trigger your handlers automatically
## Installation

```bash
npm install @vercel/queue
```

## Quick Start
### 1. Link your Vercel project and pull credentials

The SDK authenticates via OIDC. Link your project if you haven't already, then pull to get fresh tokens:

```bash
npm i -g vercel
vc link # if you haven't already
vc env pull
```

### 2. Send a message anywhere in your app
```ts
import { send } from "@vercel/queue";

await send("my-topic", { message: "Hello world" });
```

### 3. Handle incoming messages with a route handler
```ts
// app/api/queue/my-topic/route.ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(async (message, metadata) => {
  console.log("Processing:", message);
});
```

### 4. Configure vercel.json
```json
{
  "functions": {
    "app/api/queue/my-topic/route.ts": {
      "experimentalTriggers": [{ "type": "queue/v2beta", "topic": "my-topic" }]
    }
  }
}
```

That's it. The top-level `send` and `handleCallback` use an auto-configured default client. The region is auto-detected from `VERCEL_REGION` (set automatically on Vercel). If the region can't be detected (e.g. local dev), it falls back to `iad1`.
To target a specific region for sending with the top-level `send`, pass the `region` option:

```ts
await send("my-topic", payload, { region: "sfo1" });
```

Note: The `region` option is only available on the top-level `send()` convenience export. When using a `QueueClient` instance, the region is set once in the constructor via `new QueueClient({ region: "sfo1" })` and applies to all operations on that client.
## Local Development

Queues just work locally. When you `send()` messages in development mode, the library sends them to the real Vercel Queue Service, then invokes your registered `handleCallback` handlers directly in-process using the same code path as production. Your handlers are called with the same lifecycle (receive, visibility extension, ack) as in production. If a handler throws, the message is re-delivered after the configured retry delay (from `retryAfterSeconds` in `vercel.json` or the retry callback's `afterSeconds`), with an incrementing `deliveryCount`, matching production retry semantics.

Works with Next.js (Turbopack and webpack), Nuxt, SvelteKit, and any framework that runs server-side JavaScript. The SDK automatically discovers handlers from your `vercel.json` configuration and loads route modules on demand — no manual setup required beyond `vercel.json`.

Note: Local dev mode is enabled when `NODE_ENV=development`. Most frameworks set this automatically during `npm run dev`.
## Publishing Messages

```ts
import { send } from "@vercel/queue";

// Simple send
await send("my-topic", { message: "Hello world" });

// With options
await send(
  "my-topic",
  { message: "Hello world" },
  {
    idempotencyKey: "unique-key", // Prevent duplicate messages
    retentionSeconds: 3600, // 1 hour TTL (default: 24h)
    delaySeconds: 60, // Delay delivery by 1 minute
    region: "sfo1", // Top-level send() only — not available on QueueClient.send()
  },
);
```

Example usage in an API route:
```ts
// app/api/send-message/route.ts
import { send } from "@vercel/queue";

export async function POST(request: Request) {
  const body = await request.json();
  const { messageId } = await send("my-topic", { message: body.message });
  return Response.json({ messageId });
}
```

Note: `messageId` is `null` when the server accepts the message for deferred processing (e.g. during a server-side outage). The message will still be delivered.
## Consuming Messages

### On Vercel

On Vercel, messages are consumed using API route handlers that Vercel automatically invokes when messages are available. Use `handleCallback` or `handleNodeCallback` to create these route handlers.

### Web API — `handleCallback`

Returns `(Request) => Promise<Response>`. For frameworks that export Web API route handlers (Next.js App Router, Hono, etc.).

Next.js App Router:
```ts
// app/api/queue/my-topic/route.ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    // metadata: { messageId, deliveryCount, createdAt, expiresAt, topicName, consumerGroup, region }
    await processMessage(message);
    // Throwing an error will automatically retry the message
  },
  {
    visibilityTimeoutSeconds: 600, // Lock duration while processing (default: 300)
  },
);
```

Nuxt:
```ts
// server/api/queue.ts
import { handleCallback } from "@vercel/queue";

const handler = handleCallback(async (message, metadata) => {
  await processMessage(message);
});

export default defineEventHandler(async (event) => {
  return handler(toWebRequest(event));
});
```

SvelteKit:
```ts
// src/routes/api/queue/+server.ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(async (message, metadata) => {
  await processMessage(message);
});
```

Hono:
```ts
import { Hono } from "hono";
import { handleCallback } from "@vercel/queue";

const app = new Hono();

app.post(
  "/api/queue",
  handleCallback(async (message, metadata) => {
    await processMessage(message);
  }),
);

export default app;
```

### Connect-style — `handleNodeCallback`
Returns `(req, res) => Promise<void>`. For frameworks that export Connect-style handlers (Vercel Node.js functions, Express, Next.js Pages Router, etc.). `handleNodeCallback` is not a top-level export — it is only available via a `QueueClient` instance:
```ts
// lib/queue.ts
import { QueueClient } from "@vercel/queue";

const queue = new QueueClient();
export const { handleNodeCallback } = queue;
```

Vercel Node.js Functions (plain `api/` directory):
```ts
// api/queue.ts
import { handleNodeCallback } from "./lib/queue";

export default handleNodeCallback(async (message, metadata) => {
  await processMessage(message);
});
```

Next.js Pages Router:
```ts
// pages/api/queue/my-topic.ts
import { handleNodeCallback } from "@/lib/queue";

export default handleNodeCallback(async (message, metadata) => {
  await processMessage(message);
});
```

Express:
```ts
import express from "express";
import { handleNodeCallback } from "@/lib/queue";

const app = express();
app.use(express.json());

app.post(
  "/api/queue/my-topic",
  handleNodeCallback(async (message, metadata) => {
    await processMessage(message);
  }),
);

export default app;
```

### 2. Configure vercel.json

Tell Vercel which routes handle which topics:
```json
{
  "functions": {
    "app/api/queue/my-topic/route.ts": {
      "experimentalTriggers": [
        {
          "type": "queue/v2beta",
          "topic": "my-topic",
          "retryAfterSeconds": 60,
          "initialDelaySeconds": 0
        }
      ]
    },
    "app/api/queue/orders/fulfillment/route.ts": {
      "experimentalTriggers": [
        { "type": "queue/v2beta", "topic": "order-events" }
      ]
    },
    "app/api/queue/orders/analytics/route.ts": {
      "experimentalTriggers": [
        {
          "type": "queue/v2beta",
          "topic": "order-events",
          "retryAfterSeconds": 300
        }
      ]
    }
  }
}
```

Multiple route files for the same topic create separate consumer groups — each receives a copy of every message.
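Conceptually, each consumer group keeps its own cursor over the same topic log, which is why every group sees every message independently. The toy model below illustrates that behavior; it is a sketch for intuition only, not SDK code:

```ts
// Conceptual sketch — not part of @vercel/queue.
// Each consumer group tracks its own position in the shared topic log,
// so every group receives its own copy of every published message.
class TopicLog<T> {
  private log: T[] = [];
  private cursors = new Map<string, number>();

  publish(message: T): void {
    this.log.push(message);
  }

  // Returns the next unseen message for this group, advancing its cursor.
  next(group: string): T | undefined {
    const pos = this.cursors.get(group) ?? 0;
    if (pos >= this.log.length) return undefined;
    this.cursors.set(group, pos + 1);
    return this.log[pos];
  }
}
```

In the config above, `order-events` has two such groups (fulfillment and analytics), each advancing through the topic at its own pace.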
### 3. Retry and Backoff

When a handler throws, the message is not acknowledged and becomes available for redelivery after the `retryAfterSeconds` interval configured in `vercel.json`. Retries continue until the handler succeeds or the message expires (default: 24 hours).

For finer control over retry timing, pass a `retry` option. You can also set `visibilityTimeoutSeconds` to control how long the message is locked during processing (default: 300):
```ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    visibilityTimeoutSeconds: 600, // Lock duration while processing (default: 300)
    retry: (error, metadata) => {
      if (error instanceof RateLimitError) return { afterSeconds: 60 };
      // Return undefined to let the error propagate normally
    },
  },
);
```

When `retry` returns `{ afterSeconds: N }`, the message is rescheduled for redelivery after N seconds. Return `{ acknowledge: true }` to acknowledge the message so it is never retried. When it returns `undefined`, the error propagates normally and the message is retried at the default interval.

Exponential backoff uses `metadata.deliveryCount` (starts at 1, increments each delivery):
```ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    retry: (error, metadata) => {
      // deliveryCount starts at 1: 10s → 20s → 40s → ... capped at 5 min
      const delay = Math.min(300, 2 ** metadata.deliveryCount * 5);
      return { afterSeconds: delay };
    },
  },
);
```

Conditional retry — only retry transient errors:
```ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    retry: (error, metadata) => {
      if (error instanceof RateLimitError) return { afterSeconds: 60 };
      if (error instanceof TemporaryError) return { afterSeconds: 30 };
      // Permanent errors: return undefined → retried at the default interval
    },
  },
);
```

Acknowledging poison messages — stop retrying messages that can never succeed:
```ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    retry: (error, metadata) => {
      if (error instanceof ValidationError) return { acknowledge: true };
      if (metadata.deliveryCount > 5) return { acknowledge: true };
      return { afterSeconds: Math.min(300, 2 ** metadata.deliveryCount * 5) };
    },
  },
);
```

The `retry` option is available on `handleCallback`, `handleNodeCallback`, and `receive`.
## Custom Client Configuration

For most use cases, the top-level `send` and `handleCallback` functions are all you need. For advanced configuration (custom transports, explicit tokens, deployment pinning), create a `QueueClient` instance directly.

### QueueClient (push mode)

For push-based workflows where Vercel delivers messages to your route handlers:
```ts
import { QueueClient, BufferTransport } from "@vercel/queue";

const queue = new QueueClient({
  region: "iad1", // Optional — auto-detected from VERCEL_REGION, falls back to "iad1"
  token: "my-token", // Auth token (default: OIDC auto-detection)
  transport: new BufferTransport(), // Serialization (default: JsonTransport)
  headers: { "X-Custom": "header" }, // Custom headers on all requests
  deploymentId: null, // null = unpinned, omit = auto from env, or explicit string
});

export const { send, handleCallback, handleNodeCallback } = queue;
```

### PollingQueueClient (poll mode)

For manual polling workflows where you call `receive` to poll for messages. Works anywhere — on Vercel, self-hosted, or any Node.js environment:
```ts
import { PollingQueueClient, BufferTransport } from "@vercel/queue";

const queue = new PollingQueueClient({
  region: "iad1", // Required — messages must be received from a fixed region
  token: "my-token",
  transport: new BufferTransport(),
  headers: { "X-Custom": "header" },
  deploymentId: null,
});

export const { send, receive } = queue;
```

Both clients send requests to `https://${region}.vercel-queue.com`. When `handleCallback` receives a message, it reads the `ce-vqsregion` header and routes follow-up API calls to the correct regional endpoint.

To customize the URL scheme, provide a `resolveBaseUrl` that returns a URL:
```ts
import { QueueClient } from "@vercel/queue";

// Custom domain
const queue = new QueueClient({
  resolveBaseUrl: (region) => new URL(`https://${region}.my-proxy.example`),
});

// Custom domain with a base path (e.g. reverse proxy prefix)
const proxiedQueue = new QueueClient({
  resolveBaseUrl: (region) =>
    new URL(`https://my-proxy.example/queues/${region}`),
  // → requests go to https://my-proxy.example/queues/<region>/api/v3/…
});
```

The SDK always appends its own API path (`/api/v3/…`) to the returned URL.

## Transports

The transport controls how message payloads are serialized and deserialized.
| Use Case | Transport | Memory Usage | Notes |
| --------------- | ----------------- | ------------ | ----------------------- |
| Structured data | JsonTransport | Low | Default, JSON encoding |
| Binary data | BufferTransport | Medium | Raw bytes |
| Large payloads | StreamTransport | Very Low | No buffering, streaming |
```ts
import {
  QueueClient,
  JsonTransport,
  BufferTransport,
  StreamTransport,
} from "@vercel/queue";

// JSON with custom serialization
const queue = new QueueClient({
  transport: new JsonTransport({
    replacer: (key, value) => (key === "password" ? undefined : value),
    reviver: (key, value) => (key === "date" ? new Date(value) : value),
  }),
});

// Binary data
const binQueue = new QueueClient({
  transport: new BufferTransport(),
});
await binQueue.send("binary-topic", myBuffer);

// Streaming for large payloads
const streamQueue = new QueueClient({
  transport: new StreamTransport(),
});
await streamQueue.send("large-file", myReadableStream);
```

## Manual Receive
Use `PollingQueueClient` to poll for and process messages directly. This is an advanced alternative to `handleCallback` that works in any Node.js environment, both on and off Vercel.

### Region considerations

Messages can only be received from the region they were sent to. When using `receive`, use a fixed region (e.g. `"iad1"`) for both sending and receiving — do not use `VERCEL_REGION`, because Vercel may route requests to different regions due to failover or load balancing, distributing your messages across regions unpredictably.
A single region is still highly available — Vercel deploys across 3+ availability zones within each region. If you need multi-region availability, you are responsible for designing your own HA strategy (e.g. sending to multiple regions and receiving from each).
For most use cases on Vercel, `handleCallback` via `QueueClient` is the recommended approach — the platform handles region routing automatically and the SDK routes follow-up calls to the correct region via the `ce-vqsregion` header.
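If you do pursue the do-it-yourself multi-region strategy mentioned above, a fan-out helper might look like the sketch below. It is illustrative only: `makeSend` is a hypothetical factory standing in for one per-region client's `send` (e.g. one `PollingQueueClient` per region), not part of the SDK.

```ts
// Illustrative sketch — not part of @vercel/queue.
// Fans a payload out to several regions; makeSend is a hypothetical
// factory returning a region-bound send function.
type SendFn = (topic: string, payload: unknown) => Promise<void>;

async function sendToRegions(
  regions: string[],
  makeSend: (region: string) => SendFn,
  topic: string,
  payload: unknown,
): Promise<string[]> {
  const delivered: string[] = [];
  // Send to all regions concurrently; a failure in one region rejects the whole batch.
  await Promise.all(
    regions.map(async (region) => {
      await makeSend(region)(topic, payload);
      delivered.push(region);
    }),
  );
  return delivered;
}
```

A consumer would then run one `receive` loop per region, since messages are only visible in the region they were sent to.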
### Usage

```ts
import { PollingQueueClient } from "@vercel/queue";

const { send, receive } = new PollingQueueClient({ region: "iad1" });

// Send a message
await send("my-topic", { message: "Hello world" });

// Process next available message
const result = await receive(
  "my-topic",
  "my-group",
  async (message, metadata) => {
    console.log("Processing:", message);
  },
);
if (!result.ok) {
  console.log("Queue was empty:", result.reason);
}

// Batch processing: up to 10 messages in one request
await receive("my-topic", "my-group", handler, { limit: 10 });

// Process a specific message by ID
await receive("my-topic", "my-group", handler, { messageId: "msg-123" });
```

Note: `limit` and `messageId` are mutually exclusive options. The handler is never called when the queue is empty — check `result.ok` instead.
## Error Handling

```ts
import {
  BadRequestError,
  DuplicateMessageError,
  ForbiddenError,
  InternalServerError,
  UnauthorizedError,
  send,
} from "@vercel/queue";

try {
  await send("my-topic", payload);
} catch (error) {
  if (error instanceof UnauthorizedError) {
    console.log("Invalid token - refresh authentication");
  } else if (error instanceof ForbiddenError) {
    console.log("Environment mismatch - check configuration");
  } else if (error instanceof BadRequestError) {
    console.log("Invalid parameters:", error.message);
  } else if (error instanceof DuplicateMessageError) {
    console.log("Duplicate message:", error.idempotencyKey);
  } else if (error instanceof InternalServerError) {
    console.log("Server error - retry with backoff");
  }
}
```

All error types:
| Error | Description |
| ------------------------------------ | --------------------------------------------- |
| BadRequestError | Invalid request parameters |
| UnauthorizedError | Authentication failed (invalid/missing token) |
| ForbiddenError | Access denied (wrong environment/project) |
| DuplicateMessageError | Idempotency key already used |
| ConsumerDiscoveryError | Could not reach consumer deployment |
| ConsumerRegistryNotConfiguredError | Project not configured for queues |
| InternalServerError | Unexpected server error |
| InvalidLimitError | Batch limit outside valid range (1-10) |
| MessageNotFoundError | Message doesn't exist or expired |
| MessageNotAvailableError | Message exists but cannot be claimed |
| MessageAlreadyProcessedError | Message already successfully processed |
| MessageLockedError | Message being processed by another consumer |
| MessageCorruptedError | Message data could not be parsed |
| QueueEmptyError | No messages available in queue |
## Environment Variables
| Variable | Description | Default |
| ---------------------- | ------------------------------------ | ------- |
| VERCEL_REGION | Current region (auto-set by Vercel) | - |
| VERCEL_QUEUE_DEBUG | Enable debug logging (1 or true) | - |
| VERCEL_DEPLOYMENT_ID | Deployment ID (auto-set by Vercel) | - |
## Service Limits & Constraints

### Throughput & Storage

| Limit                       | Value                 | Notes                               |
| --------------------------- | --------------------- | ----------------------------------- |
| Message throughput          | 10,000+ msg/sec/topic | Scales horizontally                 |
| Payload size                | 100 MB                | Smaller messages have lower latency |
| Number of topics            | Unlimited             | No hard limit                       |
| Consumer groups per message | ~4,000                | Per-message limit                   |
| Messages per queue          | Unlimited             | No hard limit                       |
### Parameter Constraints

#### Publishing Messages
| Parameter | Default | Min | Max | Notes |
| ------------------ | ------------ | --- | ----------- | ----------------------------------- |
| retentionSeconds | 86,400 (24h) | 60 | 86,400 | Message TTL |
| delaySeconds | 0 | 0 | ≤ retention | Cannot exceed retention |
| idempotencyKey | — | — | — | Dedup window: min(retention, 24h) |
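These publishing constraints can be checked client-side before calling `send`; the service itself rejects invalid parameters with `BadRequestError`. The helper below is an illustrative sketch that mirrors the table above, not part of the SDK:

```ts
// Illustrative sketch — not part of @vercel/queue.
// Mirrors the publishing constraints table: retention 60–86,400s,
// delay >= 0 and no greater than retention.
interface SendConstraintOptions {
  retentionSeconds?: number; // default 86,400 (24h)
  delaySeconds?: number; // default 0
}

function validateSendOptions(opts: SendConstraintOptions = {}): string[] {
  const errors: string[] = [];
  const retention = opts.retentionSeconds ?? 86_400;
  const delay = opts.delaySeconds ?? 0;

  if (retention < 60 || retention > 86_400) {
    errors.push("retentionSeconds must be between 60 and 86400");
  }
  if (delay < 0) {
    errors.push("delaySeconds must be >= 0");
  }
  if (delay > retention) {
    errors.push("delaySeconds cannot exceed retentionSeconds");
  }
  return errors;
}
```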
#### Receiving Messages
| Parameter | Default | Min | Max | Notes |
| -------------------------- | ------- | --- | ----- | ------------------------------- |
| visibilityTimeoutSeconds | 300 | 30 | 3,600 | Lock duration during processing |
| limit | 1 | 1 | 10 | Messages per request |
### Identifier Formats
| Identifier | Pattern | Example |
| -------------- | ---------------- | ----------------------------------- |
| Topic name | [A-Za-z0-9_-]+ | my-queue, task_queue_v2 |
| Consumer group | [A-Za-z0-9_-]+ | worker-1, analytics_consumer |
| Message ID | Opaque string | 0-1, 3-7K9mNpQrS |
| Receipt handle | Opaque string | Used for acknowledge/visibility ops |
Wildcard Topics
{
"functions": {
"app/api/queue/route.ts": {
"experimentalTriggers": [{ "type": "queue/v2beta", "topic": "user-*" }]
}
}
}*may only appear once in the pattern*must be at the end of the topic name- Valid:
user-*,orders-* - Invalid:
*-events,user-*-data
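The rules above can be expressed as a small validator and matcher. This is an illustrative sketch of the documented rules (including the `[A-Za-z0-9_-]+` topic-name format), not the SDK's actual implementation:

```ts
// Illustrative sketch — not part of @vercel/queue.
// A pattern is valid when it contains at most one "*", and that "*"
// (if present) is the final character of the topic name.
function isValidTopicPattern(pattern: string): boolean {
  const stars = pattern.split("*").length - 1;
  if (stars === 0) return /^[A-Za-z0-9_-]+$/.test(pattern);
  return (
    stars === 1 &&
    pattern.endsWith("*") &&
    /^[A-Za-z0-9_-]*$/.test(pattern.slice(0, -1))
  );
}

// A topic matches a wildcard pattern when the pattern's literal prefix matches.
function topicMatches(pattern: string, topic: string): boolean {
  if (!isValidTopicPattern(pattern)) return false;
  if (!pattern.endsWith("*")) return pattern === topic;
  return topic.startsWith(pattern.slice(0, -1));
}
```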
## API Reference

### Top-level `send(topicName, payload, options?)`

The simplest way to send a message. Uses an auto-configured default client that detects the region from `VERCEL_REGION`, falling back to `"iad1"` with a console warning on first use.

```ts
import { send } from "@vercel/queue";

const { messageId } = await send("my-topic", payload, {
  idempotencyKey: "unique-key", // Dedup window: min(retention, 24h)
  retentionSeconds: 3600, // Message TTL (default: 86400)
  delaySeconds: 60, // Delay before visible (default: 0)
  headers: { "X-Custom": "val" }, // Custom headers
  region: "sfo1", // Override the auto-detected region for this send
});
```

The `region` option is exclusive to the top-level `send()`. It creates a one-off client targeting the given region. When using `QueueClient`, the region is set once in the constructor and applies to all operations.

Returns `{ messageId: string | null }`. `messageId` is `null` when the server accepted the message for deferred processing (e.g. during a server-side outage).
### Top-level `handleCallback(handler, options?)`

The simplest way to handle incoming queue messages. Uses the same auto-configured default client as `send`.

```ts
import { handleCallback } from "@vercel/queue";

export const POST = handleCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    visibilityTimeoutSeconds: 300, // Lock duration (default: 300)
    retry: (error, metadata) => {
      // Optional: return { afterSeconds: N } to reschedule, { acknowledge: true } to ack, or undefined to propagate
    },
  },
);
```

Returns `(request: Request) => Promise<Response>` — for frameworks that export Web API route handlers. Vercel only. The region for follow-up API calls is determined automatically from the `ce-vqsregion` header in the incoming event.
### `QueueClient`

Push-based client for workflows where Vercel delivers messages to your route handlers. Use this when you need custom configuration (transport, token, headers, deployment pinning). Region is auto-detected from `VERCEL_REGION` (set automatically on Vercel), falling back to `"iad1"` with a console warning.

```ts
import { QueueClient, JsonTransport } from "@vercel/queue";

const queue = new QueueClient({
  region: "iad1", // Optional — auto-detected from VERCEL_REGION, falls back to "iad1"
  resolveBaseUrl: (r) => new URL(`https://${r}.vercel-queue.com`), // Default resolver
  token: "my-token", // Auto-fetched via OIDC if omitted
  headers: { "X-Custom": "value" },
  transport: new JsonTransport(), // Default: JsonTransport
  deploymentId: undefined, // omit = auto from env (pinned), null = unpinned, or explicit string
});

// Methods (arrow functions — safe to destructure)
const { send, handleCallback, handleNodeCallback } = queue;
```

### `PollingQueueClient`
Poll-based client for manually receiving messages. Works in any Node.js environment, including self-hosted and non-Vercel platforms. Region is required to ensure `send` and `receive` target the same endpoint.

```ts
import { PollingQueueClient } from "@vercel/queue";

const queue = new PollingQueueClient({
  region: "iad1", // Required — use a fixed region for polling
  // ... same options as QueueClient (except region is required)
});

// Methods (arrow functions — safe to destructure)
const { send, receive } = queue;
```

### `receive(topicName, consumerGroup, handler, options?)`
Available on `PollingQueueClient` only.

Returns a discriminated result: `{ ok: true }` on success, or `{ ok: false, reason }` when no message was processed. The handler is never called when the queue is empty.

For receive-by-id, operational errors are returned instead of thrown:

```ts
const result = await receive("my-topic", "my-group", handler, {
  messageId: "msg-123",
});
if (!result.ok) {
  // result.reason is "not_found" | "not_available" | "already_processed"
  console.log(result.reason, result.messageId);
}

// Batch mode
const batchResult = await receive("my-topic", "my-group", handler, {
  limit: 10, // Max messages (default: 1, max: 10)
  visibilityTimeoutSeconds: 60, // Lock duration (default: 300)
});
```

### `handleNodeCallback(handler, options?)`
Available on `QueueClient` instances only (not a top-level export). Vercel only.

Returns `(req, res) => Promise<void>` — for frameworks that export Connect-style handlers.

```ts
// pages/api/queue/my-topic.ts
import { QueueClient } from "@vercel/queue";

const queue = new QueueClient();
const { handleNodeCallback } = queue;

export default handleNodeCallback(
  async (message, metadata) => {
    await processMessage(message);
  },
  {
    retry: (error, metadata) => ({ afterSeconds: 60 }),
  },
);
```

### Handler Signature
```ts
type MessageHandler<T> = (
  message: T,
  metadata: MessageMetadata,
) => Promise<void> | void;

interface MessageMetadata {
  messageId: string;
  deliveryCount: number;
  createdAt: Date;
  expiresAt: Date;
  topicName: string;
  consumerGroup: string;
  region: string;
}
```

## License

MIT
