jobslite
v5.4.0
A pure Node.js distributed job queue — zero dependencies, no Redis required.
What Is This?
jobslite is a pure Node.js distributed job queue. No Redis, no external message brokers — just TCP sockets and in-memory state with optional PostgreSQL for persistence.
The Value Proposition:
- Minimal dependencies - No Redis or message brokers; optional PostgreSQL for durability
- Simple deployment - Just Node.js processes + optional PostgreSQL
- BullMQ-compatible API - Familiar interface for easy migration
- Pure TCP - Direct socket connections, no Redis protocol overhead
- Production-ready persistence - PostgreSQL-backed WAL (20k jobs/sec, 94% of BullMQ)
- High availability - Built-in peer discovery with automatic leader election
Perfect for applications that need background job processing without the operational complexity of Redis or other message brokers.
When to Use This
Perfect for:
- Apps already using PostgreSQL (no Redis to manage)
- High-throughput background jobs (20k+ jobs/sec)
- Docker/Kubernetes deployments (built-in HA via peer discovery)
- Teams wanting simpler infrastructure (Node + Postgres vs Node + Redis)
Consider alternatives if:
- You need Redis for other purposes anyway (use BullMQ)
- You require millions of queued jobs (use Kafka/RabbitMQ)
- You need mature Redis-specific features (rate limiting, lua scripting)
Quick Start
1. Install
npm install jobslite
# Optional: for PostgreSQL persistence
npm install pg
Requires Node.js 18 or higher.
2. Start the Broker
const { Broker } = require('jobslite');
const broker = new Broker({
port: 6354,
persistence: 'postgresql://user:pass@localhost/jobslite', // or './data/queue.wal'
});
await broker.start();
console.log('Broker ready on port 6354');
3. Add Jobs (Producer)
const { Queue } = require('jobslite');
const queue = new Queue('email-queue', {
connection: { host: '127.0.0.1', port: 6354 },
});
await queue.connect();
await queue.add('send-welcome', { to: '[email protected]' });
await queue.add('send-reset', { to: '[email protected]' }, { priority: 10 });
4. Process Jobs (Worker)
const { Worker } = require('jobslite');
const worker = new Worker('email-queue', async (job) => {
console.log(`Processing ${job.name}`, job.data);
await job.updateProgress(50);
// ... send email ...
return { sent: true };
}, {
connection: { host: '127.0.0.1', port: 6354 },
concurrency: 5,
});
await worker.connect();
See examples/README.md for more examples including TypeScript, peer discovery, and advanced features.
Performance
jobslite achieves 20,061 jobs/sec with PostgreSQL persistence (94.1% of BullMQ's Redis performance):
| Metric | jobslite 4.1.2 | BullMQ/Redis | Comparison |
|--------|----------------|--------------|------------|
| Throughput | 20,061 jobs/sec | 21,318 jobs/sec | 94.1% |
| Persistence | PostgreSQL | Redis AOF | Similar durability |
| Data loss window | ~1 second | ~1 second | Equivalent |
| Infrastructure | PostgreSQL only | Redis required | Simpler deployment |
Key features:
- Fire-and-forget job additions with client-side UUIDs (no ACK waiting)
- O(1) worker assignment lookups (per-queue availability tracking)
- PostgreSQL dual-table WAL (UNLOGGED hot + logged durable tables)
- Batched writes with 900ms checkpoint interval (matches Redis appendfsync)
Benchmark: large-influx scenario (300k jobs) with PostgreSQL persistence on Docker
Development mode: file-based persistence (no fsync) achieves ~100k jobs/sec for fast local development without a database.
Architecture
Single-Node Deployment
┌─────────────┐   TCP    ┌─────────────┐   TCP    ┌──────────────┐
│ Producer 1  │─────────▶│             │◀─────────│   Worker 1   │
└─────────────┘          │             │          │ concurrency:3│
                         │   BROKER    │          └──────────────┘
┌─────────────┐   TCP    │  (central   │   TCP    ┌──────────────┐
│ Producer 2  │─────────▶│    queue    │◀─────────│   Worker 2   │
└─────────────┘          │  manager)   │          │ concurrency:3│
                         │             │          └──────────────┘
                         │  port 6354  │   TCP    ┌──────────────┐
                         │             │◀─────────│   Worker N   │
                         └─────────────┘          └──────────────┘
Broker — single process that holds all job state, assigns work to workers, and manages retries, delays, and priorities. Optionally persists to disk via an append-only log (WAL).
Queue (producer) — connects to broker over TCP to add and manage jobs.
Worker (consumer) — connects to broker over TCP, receives jobs, runs your processor function, reports results.
High Availability Deployment (Production)
For production HA deployments with automatic failover, jobslite supports DNS-based peer discovery:
Docker/Kubernetes Service DNS: "jobslite.default.svc.cluster.local"
│
│ DNS Resolution
▼
┌───────────────────────────┐
│ Peer IPs: [10.0.1.2, │
│ 10.0.1.3, │
│ 10.0.1.4] │
└───────────────────────────┘
│
┌───────────┴───────────┐
│ Leader Election │
│ (Lowest IP wins) │
└───────────┬───────────┘
│
┌───────────┼───────────┐
│ │ │
▼ ▼ ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Replica 1 │ │ Replica 2 │ │ Replica 3 │
│ 10.0.1.2 │ │ 10.0.1.3 │ │ 10.0.1.4 │
│ │ │ │ │ │
│ 🏆 LEADER │ │ FOLLOWER │ │ FOLLOWER │
│ (Broker) │◀─────│ (Worker) │ │ (Worker) │
│ │◀─────┼──────────────┼───│ │
└──────────────┘ └──────────────┘ └──────────────┘
│
│ Shared WAL (optional)
▼
PostgreSQL or
Shared Volume
Key features:
- DNS-based leader election (lowest IP wins)
- Automatic failover when leader disconnects
- Leader runs embedded Broker, followers become Workers
- Shared persistence (PostgreSQL or shared volume) for zero data loss
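The "lowest IP wins" rule above can be sketched in a few lines. This is an illustrative simulation, not jobslite's actual election code: the helper names and the numeric IPv4 comparison are assumptions based on the description, and the real implementation lives inside the peer-discovery module.

```javascript
// Hypothetical sketch of "lowest IP wins" leader election over a resolved
// peer list (e.g. the IPs returned for the headless service DNS name).
function ipToNumber(ip) {
  // Convert a dotted-quad IPv4 address into a comparable 32-bit integer,
  // so '10.0.1.10' sorts after '10.0.1.2' (string sort would get this wrong).
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function electLeader(peerIps) {
  // The replica whose address sorts lowest numerically becomes the leader.
  return peerIps.slice().sort((a, b) => ipToNumber(a) - ipToNumber(b))[0];
}

console.log(electLeader(['10.0.1.3', '10.0.1.2', '10.0.1.4'])); // '10.0.1.2'
```

Because every replica resolves the same DNS name and applies the same deterministic rule, all peers agree on the leader without any extra coordination traffic.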
See ARCHITECTURE.md for detailed protocol specs, durability guarantees, and performance characteristics.
Features
| Feature | Status |
|---|---|
| Named queues | Supported |
| Job priority | Supported - Higher number = higher priority |
| Delayed jobs | Supported - { delay: 5000 } |
| Retries with backoff | Supported - Fixed or exponential |
| Concurrency control | Supported - Per-worker limit |
| Multi-worker distribution | Supported - Via TCP |
| Job progress tracking | Supported - job.updateProgress(n) |
| Event system | Supported - added, active, completed, failed, retrying, stalled |
| Stall recovery | Supported - Re-queues jobs when workers disconnect |
| Disk persistence | Supported - PostgreSQL or file-based WAL |
| Bulk add | Supported - queue.addBulk([...]) |
| Job removal | Supported - queue.remove(id) |
| Queue counts | Supported - queue.getJobCounts() |
| Auto-reconnect | Supported - Exponential backoff |
| Graceful shutdown | Supported - Waits for active jobs |
| Job timeout | Supported - Broker-enforced via stall checker |
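The fixed and exponential backoff rows above can be illustrated with the delay schedule they produce. The formula below follows the common BullMQ-style convention (base delay doubled per attempt); jobslite's exact formula is not documented here, so treat this as an assumption:

```javascript
// Illustrative backoff schedule, assuming the BullMQ-style convention:
// 'fixed' waits the base delay every retry; 'exponential' doubles it.
function backoffDelay(backoff, attemptsMade) {
  if (backoff.type === 'fixed') return backoff.delay;
  // exponential: base * 2^(attemptsMade - 1) → 1000, 2000, 4000, ...
  return backoff.delay * 2 ** (attemptsMade - 1);
}

const opts = { type: 'exponential', delay: 1000 };
console.log([1, 2, 3].map((n) => backoffDelay(opts, n))); // [ 1000, 2000, 4000 ]
```

With attempts: 5 and this exponential curve, a job's final retry would wait 16 seconds, which keeps transient failures cheap while spacing out persistent ones.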
Job Options
await queue.add('job-name', { payload: 'data' }, {
jobId: 'custom-id', // Custom job ID for deduplication (optional)
priority: 0, // Higher = processed first
delay: 0, // ms before job becomes available
attempts: 1, // Total attempts (1 = no retry)
backoff: {
type: 'exponential', // 'fixed' | 'exponential'
delay: 1000, // Base delay in ms
},
timeout: 30_000, // Job timeout in ms (0 = no limit)
removeOnComplete: false,
removeOnFail: false,
});
Job Deduplication
Prevent duplicate jobs by providing a custom jobId:
await queue.add('process-asset', { id: 123 }, {
jobId: 'asset-123' // Only one job with this ID will exist
});
// Adding again with same jobId replaces the previous job
await queue.add('process-asset', { id: 123, priority: 'high' }, {
jobId: 'asset-123' // Overwrites with new data
});
Common patterns: debouncing rapid updates, singleton jobs, idempotent processing. See Job Deduplication Patterns for detailed examples.
BullMQ Migration Guide
The API is intentionally similar. Here's what changes:
Before (BullMQ)
import { Queue, Worker } from 'bullmq';
const queue = new Queue('my-queue', {
connection: { host: 'redis-host', port: 6379 },
});
const worker = new Worker('my-queue', async (job) => {
// process
}, {
connection: { host: 'redis-host', port: 6379 },
concurrency: 5,
});
After (jobslite)
const { Broker, Queue, Worker } = require('jobslite');
// NEW: you need to run a broker somewhere
const broker = new Broker({ port: 6354, persistence: './data/queue.wal' });
await broker.start();
// Same API, different connection target
const queue = new Queue('my-queue', {
connection: { host: 'broker-host', port: 6354 },
});
await queue.connect(); // NEW: explicit connect
const worker = new Worker('my-queue', async (job) => {
// same processor function — job.data, job.name, job.id all work
await job.updateProgress(50); // same API
return result;
}, {
connection: { host: 'broker-host', port: 6354 },
concurrency: 5,
});
await worker.connect(); // NEW: explicit connect
Key Differences
| | BullMQ | jobslite |
|---|---|---|
| Requires | Redis server | Broker process (pure Node.js) |
| Connection | Implicit via Redis | Explicit await .connect() |
| Multi-machine | Via Redis | Via TCP to broker |
| Persistence | Redis (always) | Optional WAL file |
| Dashboard | Bull Board | Events API (build your own) |
What to search/replace
- Change import { Queue, Worker } from 'bullmq' to const { Queue, Worker } = require('jobslite')
- Add await queue.connect() / await worker.connect() after construction
- Change connection config from Redis { host, port } to broker { host, port }
- Start a Broker process (or embed it in your main process)
Unchanged
- queue.add(name, data, opts) — same
- queue.addBulk([...]) — same
- Worker processor signature async (job) => {} — same
- job.data, job.name, job.id — same
- job.updateProgress(n) — same
- worker.on('completed', ...) / worker.on('failed', ...) — same
- worker.close() — same
- Job options: priority, delay, attempts, backoff — same
Advanced Patterns
jobslite supports common job queue patterns used in production applications:
Job Deduplication with Custom IDs
- Job deduplication with custom jobId for batching and coalescing
- Delayed jobs for batching windows
- Concurrency control (serial queues with concurrency: 1, parallel with higher values)
- Job lifecycle options (removeOnComplete, removeOnFail, attempts)
- Priority for critical jobs
Notification Batching Pattern
A common pattern is batching notifications using delayed jobs with custom jobId. This prevents notification spam by coalescing multiple events:
async function scheduleNotification(entityId: string, recipientId: string) {
const jobId = `notification-${entityId}-${recipientId}`;
await notificationQueue.add('send-notification', {
entityId,
recipientId,
// ... other data
}, {
jobId, // Custom ID for deduplication
delay: 300000, // 5 minutes (300s)
removeOnComplete: true,
});
}
How it works: Adding a job with an existing jobId replaces the old job and resets the delay timer. Multiple rapid events → one notification after 5 minutes from the last event.
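The replace-and-reset semantics can be simulated with nothing but a Map and timers. This is a stdlib sketch of what the broker does internally when the same jobId is added twice, not jobslite code; makeDebouncedQueue is a hypothetical helper name:

```javascript
// Simulates jobId-based debouncing: re-adding an id cancels the pending
// timer and restarts the delay, so only the latest payload ever runs.
function makeDebouncedQueue() {
  const pending = new Map(); // jobId -> { data, timer }
  const done = [];           // payloads that became available
  return {
    add(jobId, data, delayMs) {
      const existing = pending.get(jobId);
      if (existing) clearTimeout(existing.timer); // replace: reset the delay
      const timer = setTimeout(() => {
        pending.delete(jobId);
        done.push(data); // job fires with the most recent data only
      }, delayMs);
      pending.set(jobId, { data, timer });
    },
    done,
  };
}

const q = makeDebouncedQueue();
q.add('asset-123', { rev: 1 }, 50);
q.add('asset-123', { rev: 2 }, 50); // replaces rev 1, timer restarts
setTimeout(() => console.log(q.done), 120); // [ { rev: 2 } ]
```

Ten rapid edits to the same entity therefore collapse into one job carrying the last payload, which is exactly the notification-batching behavior described above.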
Migration from BullMQ
Install jobslite:
npm install jobslite
Setup PostgreSQL persistence (optional):
import { setupPostgresWAL } from 'jobslite';
await setupPostgresWAL(process.env.DATABASE_URL);
Replace BullMQ imports:
// Before
import { Queue, Worker } from 'bullmq';
// After
import { Queue, Worker, Broker } from 'jobslite';
Start broker process (or embed in main app):
import { Broker, PostgresPersistence } from 'jobslite';
const broker = new Broker({
  port: 5555,
  persistence: new PostgresPersistence(process.env.DATABASE_URL),
});
await broker.start();
Update connection config:
// Before (BullMQ with Redis)
const queue = new Queue('notifications', { connection: { host: 'redis', port: 6379 } });
// After (jobslite with Broker)
const queue = new Queue('notifications', { host: 'broker', port: 5555 });
Remove Redis dependency from docker-compose.yml and package.json.
That's it! All existing job logic, notification batching, and queue patterns work without code changes.
Deployment Patterns
Single machine, multiple worker processes
# Terminal 1: Broker
node broker.js
# Terminal 2-N: Workers (as many replicas as you want)
node worker.js
node worker.js
node worker.js
# Any process: Producer
node producer.js
Embedded broker (broker + worker in same process)
const { Broker, Queue, Worker } = require('jobslite');
const broker = new Broker({ port: 6354 });
await broker.start();
const queue = new Queue('jobs', { connection: { port: 6354 } });
await queue.connect();
const worker = new Worker('jobs', processor, {
connection: { port: 6354 },
concurrency: 4,
});
await worker.connect();
Docker / Kubernetes
Run the broker as a Service, workers as a Deployment with N replicas:
# broker
apiVersion: v1
kind: Service
metadata:
name: queue-broker
spec:
ports:
- port: 6354
# workers
apiVersion: apps/v1
kind: Deployment
spec:
replicas: 5
template:
spec:
containers:
- env:
- name: BROKER_HOST
value: queue-broker
- name: BROKER_PORT
value: "6354"
Persistence
jobslite supports two persistence strategies:
PostgreSQL (Recommended for Production):
- Dual-table WAL: UNLOGGED hot table + logged durable table
- Checkpoint every 900ms (matches Redis appendfsync everysec)
- 20k+ jobs/sec with proper crash recovery
- Works reliably across Docker, bind mounts, and networked filesystems
File-based (Development/Benchmarking):
- Simple append-only log for fast local development
- ~100k jobs/sec without fsync overhead
- Not recommended for production (no durability guarantees)
On restart, the broker replays the WAL to reconstruct state. Active jobs are restored to waiting state.
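The replay step above can be sketched as a fold over an event log. The record shapes (op, id, job) are illustrative assumptions, not jobslite's actual WAL format; the point is the two rules the text states: replay in order, and restore interrupted active jobs to waiting:

```javascript
// Sketch of WAL replay on broker restart: rebuild in-memory job state from
// an ordered event log, then demote any still-active jobs back to 'waiting'.
function replayWal(records) {
  const jobs = new Map();
  for (const rec of records) {
    if (rec.op === 'add') jobs.set(rec.id, { ...rec.job, state: 'waiting' });
    else if (rec.op === 'start') jobs.get(rec.id).state = 'active';
    else if (rec.op === 'complete') jobs.delete(rec.id);
  }
  // Jobs left 'active' never logged a completion: the worker was mid-flight
  // at crash time, so they are re-queued as waiting.
  for (const job of jobs.values()) {
    if (job.state === 'active') job.state = 'waiting';
  }
  return jobs;
}

const state = replayWal([
  { op: 'add', id: 'a', job: { name: 'send-welcome' } },
  { op: 'start', id: 'a' },
  { op: 'add', id: 'b', job: { name: 'send-reset' } },
]);
console.log(state.get('a').state); // 'waiting' (was active at crash time)
```

This is why processors should be idempotent: a job interrupted mid-run is replayed as waiting and may execute a second time.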
See ARCHITECTURE.md for detailed durability trade-offs and data loss scenarios.
Deployment
Production deployment patterns:
- Docker Compose - Multi-replica setup with peer discovery
- Kubernetes - Deployment with headless service for DNS discovery
- Standalone - Single broker with systemd or Docker
See DEPLOYMENT.md for complete deployment guides including Docker, Kubernetes, peer discovery configuration, and high availability setups.
Documentation
- API Reference - Complete API documentation for all classes and methods
- Architecture Deep Dive - How the system works internally, protocol specs, reliability guarantees
- Deployment Guide - Docker Compose, Kubernetes, scaling strategies, production deployment
- Development Guide - Development setup, testing, and contribution workflow
- Contributing - Code review process, roadmap, and guidelines
Contributing
We welcome contributions!
- Issues: Report bugs and request features on GitHub Issues
- Pull Requests: See CONTRIBUTING.md for development setup and guidelines
- Questions: Open a GitHub Discussion
Limitations
- Single broker — the broker is a single point of coordination (same as Redis in BullMQ). For HA, use peer discovery with multiple replicas in Docker/Kubernetes (see DEPLOYMENT.md).
- In-memory state — all jobs live in broker memory. For millions of jobs, ensure adequate RAM.
- No cron expressions — repeatable jobs use { every: ms }. Cron support could be added with a small parser.
- No rate limiting — can be added as middleware in your processor function.
License
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0-only) - see the LICENSE file for details.
