NestJS Leader Election
Distributed leader election for NestJS applications using TypeORM and PostgreSQL.
Problem Statement
In distributed systems and clustered environments, the following challenges often arise:
Resource Conflicts
Multiple application instances may simultaneously attempt to:
- Execute periodic tasks (cron jobs)
- Modify shared data
- Send duplicate notifications
Execution Reliability
- No guarantee tasks will complete if any node fails
- Risk of data corruption with concurrent access
Resource Efficiency
- Redundant resource consumption from duplicate operations
- Inability to balance stateful operations
Implementation Complexity
- Requires low-level work with locks and transactions
- No standardized way to manage leader lifecycle
How This Library Helps:
- ✅ Ensures single executor for critical operations
- ✅ Provides automatic leadership failover during failures
- ✅ Prevents concurrent access to shared resources
- ✅ Offers ready-to-use abstractions for NestJS applications
- ✅ Solves split-brain via database atomic operations
Typical Use Cases:
- Executing periodic tasks (DB migrations, email campaigns)
- Coordinating distributed transactions
- Managing access to exclusive resources
- Orchestrating background processes in Kubernetes clusters
Features
- 🚀 NestJS DI Integration
- 🛡 Automatic Lease Renewal
- 🔄 Cluster and Horizontal Scaling Support
- ⚡️ Split-Brain Protection via Advisory Locks
- 🧩 Ready-to-Use Controller Decorators
- 📦 Standalone Mode for Non-NestJS Usage
Installation
npm install nest-leader-election @nestjs/core @nestjs/typeorm typeorm pg reflect-metadata
Quick Start
- Import Module
// app.module.ts
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { LeaderElectorModule } from 'nest-leader-election';
@Module({
imports: [
TypeOrmModule.forRoot({
type: 'postgres',
host: 'localhost',
port: 5432,
username: 'postgres',
password: 'postgres',
database: 'test',
autoLoadEntities: true,
}),
LeaderElectorModule.forRoot({
schema: 'schema_name',
}),
],
})
export class AppModule {}
- Use in Services
// tasks.service.ts
import { Injectable } from '@nestjs/common';
import { LeaderElectorService } from 'nest-leader-election';
@Injectable()
export class TasksService {
constructor(private readonly leaderElector: LeaderElectorService) {}
async performCriticalTask() {
if (this.leaderElector.amILeader()) {
// Logic executed only by the leader
console.log('Performing leader-only task');
}
}
}
Configuration
- Module Settings
LeaderElectorModule.forRoot({
  leaseDuration: 15000,      // Lease duration in ms (default: 10000)
  renewalInterval: 5000,     // Renewal interval in ms (default: 3000)
  jitterRange: 2000,         // Request timing variance in ms (default: 2000)
  lockId: 12345,             // Lock identifier (default: 1)
  createTableOnInit: false,  // Set to false if the table is created via migrations
})
- Migration (use this if migrations and the application run under different TypeORM database users)
import { LeaderElectionMigrationBase } from "nest-leader-election";

class LeaderElectionMigration extends LeaderElectionMigrationBase {
  schema = "leader_schema"; // default: 'public'
  name = "leader_election_migration" + Date.now(); // your timestamp here
}
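With that class in place, it can presumably be registered like any other TypeORM migration on the DataSource used by your migration user. The import path below is illustrative and not part of the library:

import { DataSource } from 'typeorm';
import { LeaderElectionMigration } from './migrations/leader-election.migration'; // the class defined above; path is illustrative

// Sketch: run the leader election migration under the database user that owns
// schema changes by registering it alongside your other migrations.
export const migrationDataSource = new DataSource({
  type: 'postgres',
  // ... connection options for your migration user
  migrations: [LeaderElectionMigration],
});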
Standalone Usage
import { DataSource } from 'typeorm';
import { LeaderElectorCore, LeaderLease } from 'nest-leader-election';
async function bootstrap() {
const dataSource = new DataSource({
type: 'postgres',
// ... configuration
entities: [LeaderLease],
});
await dataSource.initialize();
const elector = new LeaderElectorCore(
dataSource.getRepository(LeaderLease),
{
leaseDuration: 15000,
instanceId: 'my-app-01'
}
);
setInterval(() => {
if (elector.amILeader()) {
console.log('Performing leader task');
}
}, 1000);
}
bootstrap();
API
LeaderElectorService
- amILeader(): boolean - Check leadership status
- release(): Promise<void> - Release leadership
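For example, a service might release leadership explicitly on shutdown so another node can take over without waiting for the lease to expire. The ShutdownService below is illustrative, not part of the library; it only uses the two methods documented above plus the standard NestJS shutdown hook:

import { Injectable, OnApplicationShutdown } from '@nestjs/common';
import { LeaderElectorService } from 'nest-leader-election';

@Injectable()
export class ShutdownService implements OnApplicationShutdown {
  constructor(private readonly leaderElector: LeaderElectorService) {}

  async onApplicationShutdown(signal?: string) {
    // Hand leadership back explicitly so followers do not have to wait
    // out the remaining leaseDuration before one of them takes over.
    if (this.leaderElector.amILeader()) {
      await this.leaderElector.release();
    }
  }
}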
Best Practices
- Always configure leaseDuration to be 2-3x longer than renewalInterval
- Use a unique lockId for each service
- Monitor the leader_lease table
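For instance, a configuration that follows these guidelines might look like this; the numbers and the lockId value are illustrative, but the option names are the ones documented above:

import { LeaderElectorModule } from 'nest-leader-election';

LeaderElectorModule.forRoot({
  leaseDuration: 15000,   // roughly 3x the renewal interval
  renewalInterval: 5000,
  jitterRange: 2000,
  lockId: 42,             // unique per service sharing this database
});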
Operation logic
- Initialization
Table creation: on startup, the existence of the leader_lease table is checked. If it does not exist, it is created with the following fields:
- id (lock identifier)
- leader_id (unique node identifier)
- expires_at (lease expiration time)
- created_at (record creation time)
Indexes and constraints: an index on expires_at is created for fast expiry lookups, along with a CHECK constraint that guarantees the timestamps are consistent.
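As a rough illustration only (not the library's actual migration), the table described above could be created with DDL along these lines; column types, the index name, and the exact CHECK expression are assumptions:

import { DataSource } from 'typeorm';

// Sketch of the leader_lease schema described above.
export async function createLeaderLeaseTable(dataSource: DataSource): Promise<void> {
  await dataSource.query(`
    CREATE TABLE IF NOT EXISTS leader_lease (
      id         integer PRIMARY KEY,                 -- lock identifier
      leader_id  varchar(255) NOT NULL,               -- unique node identifier
      expires_at timestamptz NOT NULL,                -- lease expiration time
      created_at timestamptz NOT NULL DEFAULT NOW(),  -- record creation time
      CHECK (expires_at > created_at)                 -- timestamps must be consistent
    )
  `);
  await dataSource.query(
    `CREATE INDEX IF NOT EXISTS leader_lease_expires_at_idx ON leader_lease (expires_at)`
  );
}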
- Lease mechanism
Leadership capture:
- The node tries to insert a new record with expires_at = NOW() + leaseDuration.
- If the record already exists, the node checks whether the current lease has expired (expires_at < NOW()).
- If the lease has expired, the record is updated atomically, setting the node's own leader_id and a new expires_at.
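Conceptually, this capture step can be expressed as a single conditional upsert. The sketch below is only an illustration of that logic using plain PostgreSQL; it is not the library's actual statement (the library additionally relies on advisory locks, as noted in Features):

import { DataSource } from 'typeorm';

// Sketch of the atomic capture: the WHERE clause on the ON CONFLICT branch
// ensures a live lease is never overwritten, so only one node can win even
// when several attempt the capture concurrently.
export async function tryAcquireLease(
  dataSource: DataSource,
  lockId: number,
  instanceId: string,
  leaseDurationMs: number,
): Promise<boolean> {
  const rows = await dataSource.query(
    `INSERT INTO leader_lease (id, leader_id, expires_at, created_at)
     VALUES ($1, $2, NOW() + ($3::int * interval '1 millisecond'), NOW())
     ON CONFLICT (id) DO UPDATE
       SET leader_id = EXCLUDED.leader_id,
           expires_at = EXCLUDED.expires_at
       WHERE leader_lease.expires_at < NOW()   -- only take over an expired lease
     RETURNING leader_id`,
    [lockId, instanceId, leaseDurationMs],
  );
  // A returned row means the insert or conditional update succeeded, i.e. we lead.
  return rows.length > 0;
}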
- Renewing Leadership
Periodic update: the current leader updates expires_at every renewalInterval (± jitter) to renew the lease.
Jitter mechanism: a random delay (±2 seconds by default) is applied between renewal attempts to spread out requests from different nodes.
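A renewal loop in the spirit of this description might look like the sketch below; the function names, timer handling, and SQL are illustrative assumptions, not the library's internals:

import { DataSource } from 'typeorm';

// Uniform random offset in [-jitterRangeMs, +jitterRangeMs] keeps nodes
// from hitting the database in lockstep.
function withJitter(intervalMs: number, jitterRangeMs: number): number {
  return Math.max(0, intervalMs + (Math.random() * 2 - 1) * jitterRangeMs);
}

// Sketch of the renewal step: the leader pushes expires_at forward on every
// tick so the lease never lapses while the node is healthy. Error handling
// is omitted for brevity.
export function startRenewalLoop(
  dataSource: DataSource,
  lockId: number,
  instanceId: string,
  leaseDurationMs: number,
  renewalIntervalMs: number,
  jitterRangeMs: number,
): void {
  const tick = async () => {
    // Only the row owned by this instance is refreshed; if leadership was
    // lost in the meantime, the UPDATE simply matches no rows.
    await dataSource.query(
      `UPDATE leader_lease
          SET expires_at = NOW() + ($3::int * interval '1 millisecond')
        WHERE id = $1 AND leader_id = $2`,
      [lockId, instanceId, leaseDurationMs],
    );
    setTimeout(tick, withJitter(renewalIntervalMs, jitterRangeMs));
  };
  setTimeout(tick, withJitter(renewalIntervalMs, jitterRangeMs));
}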
- Releasing Leadership
Explicit release: calling release() deletes the record with the current leader_id.
Automatic release: if the leader fails to renew the lease, other nodes automatically take over leadership once leaseDuration elapses.
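At the SQL level, the explicit release amounts to deleting the node's own row; the sketch below only illustrates that idea (in the library itself this is what release() exposes):

import { DataSource } from 'typeorm';

// Sketch: deleting only the row that still carries this instance's leader_id
// means a release issued after leadership has already been lost is a no-op.
export async function releaseLease(
  dataSource: DataSource,
  lockId: number,
  instanceId: string,
): Promise<void> {
  await dataSource.query(
    `DELETE FROM leader_lease WHERE id = $1 AND leader_id = $2`,
    [lockId, instanceId],
  );
}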
- Cleanup of stale records
Background task: every 6 × leaseDuration (± jitter), records where expires_at < NOW() - 5 sec are deleted.
Goal: prevent the accumulation of "dead" records.
This algorithm is well suited for:
- Clusters of 3+ nodes
- Systems where Kubernetes Leader Election cannot be used
- Scenarios with strict atomicity requirements
