
rrq-ts v0.11.1

RRQ TypeScript producer and runner runtime.
RRQ TypeScript

TypeScript/JavaScript client for RRQ, the distributed job queue with a Rust-powered orchestrator.

What is RRQ?

RRQ (Reliable Redis Queue) is a distributed job queue that separates the hard parts (scheduling, retries, locking, timeouts) into a Rust orchestrator while letting you write job handlers in your preferred language. It uses Redis as the source of truth with atomic operations for reliability.

Why RRQ?

  • Language flexibility - Write job handlers in TypeScript, Python, or Rust
  • Rust orchestrator - The complex distributed systems logic is handled by battle-tested Rust code
  • Production features built in - Retries, dead letter queues, timeouts, cron scheduling, distributed tracing
  • Redis simplicity - No separate databases; everything lives in Redis with predictable semantics

This Package

This package provides:

  • Producer client - Enqueue jobs from Node.js/Bun applications
  • Runner runtime - Execute job handlers written in TypeScript

Works with Node.js 20+ and Bun.

Quick Start

1. Install

npm install rrq-ts
# or
bun add rrq-ts

2. Enqueue jobs (Producer)

import { RRQClient } from "rrq-ts";

const client = new RRQClient({
  config: { redisDsn: "redis://localhost:6379/0" },
});

const jobId = await client.enqueue("send_email", {
  params: { to: "[email protected]", template: "welcome" },
  queueName: "emails",
  maxRetries: 5,
});

console.log(`Enqueued job: ${jobId}`);
await client.close();

3. Write job handlers (Runner)

import { RunnerRuntime, Registry } from "rrq-ts";

const registry = new Registry();

registry.register("send_email", async (request) => {
  const { to, template } = request.params;

  // Your email sending logic here
  await sendEmail(to, template);

  return { sent: true, to };
});

const runtime = new RunnerRuntime(registry);
// RRQ launches runners with: --tcp-socket host:port
await runtime.runFromArgs();

4. Configure (rrq.toml)

[rrq]
redis_dsn = "redis://localhost:6379/0"
default_runner_name = "node"

[rrq.runners.node]
type = "socket"
cmd = ["node", "dist/runner.js"]
tcp_port = 9000
pool_size = 4
max_in_flight = 10

5. Run

# Install the RRQ orchestrator
cargo install rrq
# or download from releases

# Start the orchestrator (spawns runners automatically)
rrq worker run --config rrq.toml

Producer API

Enqueue Options

interface EnqueueOptions {
  params?: Record<string, unknown>; // Job parameters
  queueName?: string; // Target queue (default: "default")
  jobId?: string; // Custom job ID
  maxRetries?: number; // Max retry attempts
  jobTimeoutSeconds?: number; // Execution timeout
  resultTtlSeconds?: number; // How long to keep results
  enqueueTime?: Date; // Explicit enqueue timestamp
  deferUntil?: Date; // Schedule for specific time
  deferBySeconds?: number; // Delay execution
  traceContext?: Record<string, string>; // Distributed tracing
}
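
The deferral, timeout, and retention fields compose in a single options object. The sketch below reproduces the interface so it stands alone; the job parameters, queue name, and values are hypothetical.

```typescript
// EnqueueOptions reproduced from the interface above so the example is
// self-contained; the values below are hypothetical.
interface EnqueueOptions {
  params?: Record<string, unknown>;
  queueName?: string;
  jobId?: string;
  maxRetries?: number;
  jobTimeoutSeconds?: number;
  resultTtlSeconds?: number;
  enqueueTime?: Date;
  deferUntil?: Date;
  deferBySeconds?: number;
  traceContext?: Record<string, string>;
}

// Run a report job five minutes from now, allow two minutes of execution,
// keep the result for an hour, and retry up to three times.
const reportOptions: EnqueueOptions = {
  params: { reportId: "r-42" },
  queueName: "reports",
  deferBySeconds: 300,
  jobTimeoutSeconds: 120,
  resultTtlSeconds: 3600,
  maxRetries: 3,
};
```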

Producer Config

interface ProducerConfig {
  redisDsn: string;
  queueName?: string;
  maxRetries?: number;
  jobTimeoutSeconds?: number;
  resultTtlSeconds?: number;
  idempotencyTtlSeconds?: number;
  correlationMappings?: Record<string, string>; // e.g. { tenant_id: "params.tenant.id" }
}
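
The inline comment on correlationMappings suggests dotted paths that are resolved against the job payload. As an illustrative sketch of that idea (this resolver is not rrq-ts code), such a lookup might work like this:

```typescript
// Illustrative only: a dotted-path lookup of the kind the correlationMappings
// comment hints at ("params.tenant.id"). This resolver is NOT rrq-ts code.
function resolvePath(root: Record<string, unknown>, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (current, key) =>
      current && typeof current === "object"
        ? (current as Record<string, unknown>)[key]
        : undefined,
    root,
  );
}

// A mapping like { tenant_id: "params.tenant.id" } would then extract "t-1"
// from a job payload shaped like this:
const job = { params: { tenant: { id: "t-1" } } };
const tenantId = resolvePath(job, "params.tenant.id");
```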

Unique Jobs (Idempotency)

const jobId = await client.enqueueWithUniqueKey(
  "process_order",
  "order-123", // unique key
  { params: { orderId: "123" } },
);

Rate Limiting

const jobId = await client.enqueueWithRateLimit("sync_user", {
  params: { userId: "456" },
  rateLimitKey: "user-456",
  rateLimitSeconds: 60,
});

if (jobId === null) {
  console.log("Rate limited");
}

Debouncing

await client.enqueueWithDebounce("save_document", {
  params: { docId: "789" },
  debounceKey: "doc-789",
  debounceSeconds: 5,
});

Job Status

const status = await client.getJobStatus(jobId);
console.log(status);

Runner API

Handler Signature

type Handler = (
  request: ExecutionRequest,
  signal: AbortSignal,
) => Promise<ExecutionOutcome | unknown> | ExecutionOutcome | unknown;

Cancellation Behavior

  • Handlers receive an AbortSignal.
  • Runner cancel requests and deadline timeouts abort that signal.
  • Pass the signal to downstream APIs (fetch, database clients, etc.) so in-flight work stops promptly.
  • Keep handlers idempotent and retry-safe for libraries that do not support cancellation.
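For libraries that expose no cancellation hook, one common pattern is to race the work against the AbortSignal so the handler returns promptly, accepting that the underlying work may still finish; this is exactly why the idempotency advice above matters. A minimal sketch, not part of rrq-ts:

```typescript
// Sketch (not rrq-ts API): race a promise against the handler's AbortSignal.
// The underlying work keeps running after an abort, so the handler must stay
// idempotent and retry-safe.
function withAbort<T>(work: Promise<T>, signal: AbortSignal): Promise<T> {
  if (signal.aborted) {
    return Promise.reject(new Error("aborted before start"));
  }
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error("aborted"));
    signal.addEventListener("abort", onAbort, { once: true });
    work.then(
      (value) => {
        signal.removeEventListener("abort", onAbort);
        resolve(value);
      },
      (error) => {
        signal.removeEventListener("abort", onAbort);
        reject(error);
      },
    );
  });
}
```

APIs that accept a signal directly (fetch, many database drivers) should simply receive the handler's signal instead.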

Execution Request

interface ExecutionRequest {
  protocol_version: string;
  job_id: string;
  request_id: string;
  function_name: string;
  params: Record<string, unknown>;
  context: {
    job_id: string;
    attempt: number;
    queue_name: string;
    enqueue_time: string;
    deadline?: string | null;
    trace_context?: Record<string, string> | null;
    correlation_context?: Record<string, string> | null;
    worker_id?: string | null;
  };
}
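
Since the context's deadline is an ISO timestamp, a handler can budget its remaining time before starting an expensive step. A small illustrative helper (the helper itself and the field subset are assumptions, not rrq-ts API):

```typescript
// Illustrative helper (not rrq-ts API): milliseconds remaining before the
// job's deadline. `deadline` follows the ExecutionRequest context field
// above; a missing deadline means the job has no execution deadline.
function remainingMs(
  context: { deadline?: string | null },
  now: Date = new Date(),
): number | null {
  if (!context.deadline) return null;
  return new Date(context.deadline).getTime() - now.getTime();
}
```

A handler could, for example, skip an optional enrichment step when the remaining budget is low.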

Outcome Types

// Success
const success: ExecutionOutcome = {
  job_id: jobId,
  request_id: requestId,
  status: "success",
  result: { result: "data" },
};

// Failure (may be retried)
const failure: ExecutionOutcome = {
  job_id: jobId,
  request_id: requestId,
  status: "error",
  error: { message: "Something went wrong" },
};

// Explicit retry after delay
const retry: ExecutionOutcome = {
  job_id: jobId,
  request_id: requestId,
  status: "retry",
  error: { message: "Rate limited" },
  retry_after_seconds: 60,
};
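
A handler-side helper can centralize this mapping, for example turning thrown errors into error or retry outcomes. The sketch below mirrors the three shapes above; the helper and its rate-limit detection rule are purely illustrative, not part of rrq-ts.

```typescript
// Outcome union mirroring the three shapes above; toOutcome and its
// rate-limit detection convention are illustrative, not rrq-ts code.
type Outcome =
  | { job_id: string; request_id: string; status: "success"; result: unknown }
  | { job_id: string; request_id: string; status: "error"; error: { message: string } }
  | {
      job_id: string;
      request_id: string;
      status: "retry";
      error: { message: string };
      retry_after_seconds: number;
    };

function toOutcome(jobId: string, requestId: string, run: () => unknown): Outcome {
  try {
    return { job_id: jobId, request_id: requestId, status: "success", result: run() };
  } catch (e) {
    const message = e instanceof Error ? e.message : String(e);
    if (message.includes("rate limited")) {
      // Ask for a delayed retry instead of counting this as a failure.
      return {
        job_id: jobId,
        request_id: requestId,
        status: "retry",
        error: { message },
        retry_after_seconds: 60,
      };
    }
    return { job_id: jobId, request_id: requestId, status: "error", error: { message } };
  }
}
```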

OpenTelemetry

import { RunnerRuntime, Registry, OtelTelemetry } from "rrq-ts";

const runtime = new RunnerRuntime(registry, new OtelTelemetry());
// RRQ launches runners with: --tcp-socket host:port
await runtime.runFromArgs();

Producer FFI Setup

The producer uses a Rust FFI library for consistent behavior across languages. The library is located by checking, in order:

  1. RRQ_PRODUCER_LIB_PATH environment variable
  2. rrq-ts/bin/<platform>-<arch>/ (for published packages)
  3. rrq-ts/bin/ (for development)

Build from source:

cargo build -p rrq-producer --release
# Copy to bin/darwin-arm64/ or bin/linux-x64/ as appropriate

Related Packages

| Package      | Language   | Purpose                          |
| ------------ | ---------- | -------------------------------- |
| rrq-ts       | TypeScript | Producer + runner (this package) |
| rrq          | Python     | Producer + runner                |
| rrq          | Rust       | Orchestrator binary              |
| rrq-producer | Rust       | Native producer                  |
| rrq-runner   | Rust       | Native runner                    |

Requirements

  • Node.js 20+ or Bun
  • Redis 5.0+
  • RRQ orchestrator binary

License

MIT