Johnny Locke
A robust, strongly-consistent distributed locking library for Node.js that provides atomic operations across multiple processes. Implemented on top of your choice of either Redis or NATS JetStream, it fits into your existing production cluster without additional dependencies.
Designed to function like a distributed address space, the library provides wait/notify-like syntax for process synchronization on top of the locked object data. Data safety is guaranteed using fencing tokens to ensure that only the process which holds the lock can ever write or update data.
Features
- Mutual Exclusion: Only one process can hold a lock at any time
- Atomic State Updates: Guarantee data consistency across processes, using fencing tokens to prevent race conditions
- Automatic Timeout: Locks automatically expire if the holding process crashes or otherwise overshoots the lock timeout
- Flexible Object Storage: Object stores are configured separately from lock mechanisms and can be expired or persisted indefinitely, allowing them to function as either short-term cache or long-term storage
- Event-Driven: Robust wait/notify-like synchronization built on pub/sub lock release events (see the sketch after this list)
- Multiple Backends: Support for both Redis and NATS JetStream under the hood
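For example, one process can update shared state under the lock while another blocks until the update lands. A minimal sketch using the withLock and wait methods documented below, where lock is an instance constructed as shown under Implementation Details and the key name and timeouts are illustrative:
// Process A: atomically increment a shared counter under the lock
await lock.withLock<number>('jobs-done', 1000, async (state) => (state ?? 0) + 1);
// Process B: block (up to 5s) until the lock is released, then read the result
const state = await lock.wait<number>('jobs-done', 5000);
console.log('Jobs done:', state.value);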
API Reference
Lock Configuration
interface LockConfiguration {
namespace: string; // Prefix for all keys in the backend
lockTimeoutMs: number; // How long locks are held before auto-expiry
objectExpiryMs?: number; // Optional: How long objects persist after last access
}
withLock<T>(key: string, timeoutMs: number, callback: (state: T | null) => Promise<T>): Promise<Readable<T>>
Acquires a lock and executes the callback against the current object state. Automatically releases the lock regardless of success, error or timeout.
The callback is passed the current value of the locked object (or null, if it's a new lock). The value returned by the callback is written back as an atomic update to the locked object.
const result = await lock.withLock<number>('my-key', 1000, async (state) => {
// input is the existing value
const existingValue = state ?? 0
// output written back to the lock store
return existingValue + 1;
});
acquireLock<T>(key: string, timeoutMs: number): Promise<Writable<T>>
Manually acquires a lock and returns a writable state handle. Useful for long-running operations.
The timeout provided tells the library how long to wait to acquire the lock, not how long to hold it once acquired (which is configured globally via LockConfiguration.lockTimeoutMs).
releaseLock<T>(key: string, state: Writable<T>): Promise<boolean>
Releases the lock on the object, writing the given state back to the lock store. Notifies any waiting processes that the lock is available.
Returns true if the lock was released and the value written, false otherwise (e.g. if the lock had already expired).
const lockObj = await lock.acquireLock<string>('my-key', 1000);
try {
// Do work while holding the lock
const updated = lockObj.update('updatedState')
// Write and release
await lock.releaseLock('my-key', updated);
} catch (error) {
// release lock without updating data
await lock.releaseLock('my-key', lockObj);
// ...
}
tryAcquireLock<T>(key: string): Promise<{acquired: boolean, value: Writable<T> | undefined}>
Attempts to acquire the lock and immediately returns success or failure.
const result = await lock.tryAcquireLock<string>('my-key');
if (!result.acquired) {
// lock not acquired, result.value is undefined
} else {
// lock acquired, result.value is a writable state handle
// be sure to release this lock as shown above
}
wait<T>(key: string, timeoutMs: number): Promise<Readable<T>>
Waits up to timeoutMs for a lock to become available (or returns immediately if the object is not currently locked), then returns its current state as a read-only state handle without acquiring the lock.
const state = await lock.wait('my-key', 1000);
console.log('Current state:', state.value);
Implementation Details
NATS JetStream K/V
import { JetstreamDistributedLock } from 'johnny-locke';
import { connect } from 'nats';
const nats = await connect({servers: ['nats://localhost:4222']})
const lock = await JetstreamDistributedLock.create(nats, {
namespace: 'my-app',
lockTimeoutMs: 5000, // Lock expires after 5 seconds
objectExpiryMs: 300000 // Objects expire after 5 minutes (optional)
})
// do your stuff
lock.close()
await nats.close() // nats client is not managed by the instance
The NATS implementation uses:
- JetStream K/V messages for atomic operations
- Key-value store for lock metadata, with revision/sequence-ID validation serving as fencing tokens (see the sketch below)
- K/V stream consumer for lock release notifications
- Stream message TTL for object expiry, manual lock timeout enforcement
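To illustrate how revision-based fencing works at the NATS level, here is a minimal sketch using the nats.js K/V client directly; the bucket and key names are illustrative, and this is not part of the library's API:
import { connect, StringCodec } from 'nats';
const nc = await connect({ servers: ['nats://localhost:4222'] });
const kv = await nc.jetstream().views.kv('my-app'); // illustrative bucket name
const sc = StringCodec();
const entry = await kv.get('my-key');
if (entry) {
  try {
    // The K/V revision acts as a fencing token: the update succeeds only
    // if no other writer has touched the key since we read it
    await kv.update('my-key', sc.encode('new-state'), entry.revision);
  } catch (err) {
    // Revision mismatch: another process wrote first, so this write is fenced off
  }
}
await nc.close();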
JetStream uses Raft consensus under the hood for stream state consistency, which provides strong CP consistency under network partitions (and fault tolerance for high availability when running 3 or 5 replicas).
Redis
import { RedisDistributedLock } from 'johnny-locke';
import Redis from 'ioredis';
const redis = new Redis('redis://localhost:6379')
const lock = await RedisDistributedLock.create(redis, {
namespace: 'my-app',
lockTimeoutMs: 5000, // Lock expires after 5 seconds
objectExpiryMs: 300000 // Objects expire after 5 minutes (optional)
});
// do your stuff
lock.close()
await redis.quit() // redis client is not managed by the instance
The Redis implementation uses:
- Lua scripting for atomic operations
  - Order of operations within scripts is important to maintain strong consistency and timeout support even if the server crashes in the middle of a script execution
- Hash structures for lock metadata + fencing tokens
- Separate key for object storage to support separate expiry/persistence
- Redis Pub/Sub for lock release notifications
- Key expiration for both object and lock timeouts
As opposed to JetStream, Redis replication does not inherently support strong CP consistency, because replication is asynchronous by default. Redis 3.0 introduced the WAIT command to enforce synchronous replication to a given number of replicas before acknowledging a write, but even this doesn't wholly solve the underlying problem, since WAIT is not a rigorous CP operation (in the way that Raft consensus is). The primary server and some replicas may diverge from other replicas if the primary crashes in the middle of a WAIT command that has not yet replicated data to all replicas. The resulting state of the lock and the object itself is then indeterminate: it depends on which replica is promoted to primary, and whether or not that replica received the replicated data.
Fortunately, the use of fencing tokens preserves consistency despite the above race condition. The fencing tokens are atomic with the lock data itself, so while a process may lose a lock it believes it still holds, that process will not be able to write any data with the lost lock. Mutual exclusion and atomicity are maintained in this case, which may not be fair but remains strongly consistent and deterministic.
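As a rough illustration of how such a fenced write can be made atomic with Lua, consider the following sketch; the key names and hash schema are hypothetical, not the library's actual layout:
import Redis from 'ioredis';
const redis = new Redis('redis://localhost:6379');
// Hypothetical fenced write: update the object only if the caller's fencing
// token still matches the token stored in the lock hash, in one atomic script
const fencedWrite = `
  if redis.call('HGET', KEYS[1], 'token') == ARGV[1] then
    redis.call('SET', KEYS[2], ARGV[2])
    return 1
  end
  return 0
`;
const ok = await redis.eval(fencedWrite, 2,
  'my-app:lock:my-key', // KEYS[1]: lock metadata hash (hypothetical)
  'my-app:obj:my-key',  // KEYS[2]: object storage key (hypothetical)
  'token-123',          // ARGV[1]: fencing token from lock acquisition
  'new-state');         // ARGV[2]: new object value
// ok === 0 means the token was stale: the lock was lost and nothing was written
await redis.quit();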
Best Practices
- Keep Lock Times Short: Minimize the duration locks are held to reduce contention
- Use Appropriate Timeouts: Set timeouts based on the expected duration of state updates
- Handle Errors: Always implement proper error handling when manually locking/unlocking (or use withLock for convenience) to avoid relying on timeouts
- Be Diligent With Resources: Generally you should only need one DistributedLock instance per server (per namespace). Always clean up by calling close() when shutting down your application (see the sketch after this list)
- Use Namespaces: Isolate application locks to avoid collisions
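For instance, a graceful-shutdown hook might look like this, assuming the NATS-backed lock and nats client from Implementation Details above:
process.on('SIGTERM', async () => {
  lock.close();       // stop the lock instance first
  await nats.close(); // then close the underlying client, which the library does not manage
  process.exit(0);
});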
Running Tests
Spin up a test environment with redis and nats servers using docker compose:
docker compose -f tests/docker-compose.yml up -d
and then run the tests:
npm run test
License
MIT License - see LICENSE file for details
