# @glitch_protocol/auth-adapter-redis

v0.2.0
Redis implementation of CacheAdapter for distributed caching, refresh token lookup, and distributed cleanup locks.
## Install

```bash
npm install @glitch_protocol/auth-adapter-redis ioredis
```

## Usage
### 1. Create a Redis client

```ts
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);
```

### 2. Create the adapter
```ts
import { RedisCacheAdapter } from "@glitch_protocol/auth-adapter-redis";

const cacheAdapter = new RedisCacheAdapter(redis);
```

### 3. Use with the engine
```ts
import { glitch_protocolEngine } from "@glitch_protocol/auth-server";
import { DrizzleAdapter } from "@glitch_protocol/auth-adapter-drizzle";

const engine = new glitch_protocolEngine(
  new DrizzleAdapter(db, { users, sessions, refreshTokens }),
  cacheAdapter,
  socketBroadcaster,
  { jwtSecret: process.env.JWT_SECRET! },
);
```

## API
### Constructor

```ts
new RedisCacheAdapter(redis: Redis)
```

Example with connection options:
```ts
const redis = new Redis({
  host: "localhost",
  port: 6379,
  password: process.env.REDIS_PASSWORD,
  db: 0,
  retryStrategy: (times) => Math.min(times * 50, 2000),
  enableReadyCheck: false,
  enableOfflineQueue: false,
});

const cacheAdapter = new RedisCacheAdapter(redis);
```

### Methods
All methods implement `CacheAdapter` from `@glitch_protocol/auth-core`.
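The contract these methods satisfy can be sketched as a TypeScript interface. The shape below is assembled from the signatures documented in this README; the authoritative declaration lives in `@glitch_protocol/auth-core`. The `Map`-backed implementation is a hypothetical toy, useful for tests that should not depend on a live Redis:

```ts
// Assumed shape of the CacheAdapter contract (see @glitch_protocol/auth-core).
interface CacheAdapter {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
  delete(key: string): Promise<void>;
  deleteMany(keys: string[]): Promise<void>;
  acquireLock(key: string, ttlSeconds: number): Promise<boolean>;
  releaseLock(key: string): Promise<void>;
  isAvailable(): Promise<boolean>;
}

// Toy in-memory implementation for unit tests and local development.
class MapCacheAdapter implements CacheAdapter {
  private store = new Map<string, { value: string; expiresAt: number }>();

  async get(key: string): Promise<string | null> {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key); // expire lazily on read, like Redis
      return null;
    }
    return entry.value;
  }

  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }

  async deleteMany(keys: string[]): Promise<void> {
    for (const key of keys) this.store.delete(key);
  }

  async acquireLock(key: string, ttlSeconds: number): Promise<boolean> {
    if ((await this.get(key)) !== null) return false; // already held
    await this.set(key, "locked", ttlSeconds);
    return true;
  }

  async releaseLock(key: string): Promise<void> {
    this.store.delete(key);
  }

  async isAvailable(): Promise<boolean> {
    return true; // an in-memory store is always reachable
  }
}
```

Swapping `MapCacheAdapter` for `RedisCacheAdapter` in tests exercises the same contract without a Redis dependency.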
#### get

```ts
async get(key: string): Promise<string | null>
```

Retrieves a value from the cache. Returns `null` if the key is not found or has expired.
#### set

```ts
async set(key: string, value: string, ttlSeconds: number): Promise<void>
```

Sets a value with a TTL (time-to-live in seconds). Uses Redis `SETEX`.
#### delete

```ts
async delete(key: string): Promise<void>
```

Deletes a key from the cache.
#### deleteMany

```ts
async deleteMany(keys: string[]): Promise<void>
```

Deletes multiple keys in a single Redis call.
#### acquireLock

```ts
async acquireLock(key: string, ttlSeconds: number): Promise<boolean>
```

Acquires a distributed lock. Returns `true` if acquired, `false` if the lock is held by another instance.

Implementation: Redis `SET` with `NX` and `EX` (atomic). Used for the cleanup job, so only one instance can hold the lock at a time.
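With ioredis, this corresponds to a single `SET key value EX ttl NX` call, which resolves to `"OK"` on success and `null` when the key already exists. A sketch of that mapping (the adapter's actual internals may differ), written against a minimal client interface so it can be exercised without a live Redis:

```ts
// Minimal slice of the ioredis client surface used by the sketch.
interface LockClient {
  set(
    key: string,
    value: string,
    mode: "EX",
    ttlSeconds: number,
    flag: "NX",
  ): Promise<"OK" | null>;
}

// SET ... EX ttl NX is atomic: the existence check and the write happen
// in one Redis command, so two instances can never both acquire the lock.
async function acquireLock(
  client: LockClient,
  key: string,
  ttlSeconds: number,
): Promise<boolean> {
  const result = await client.set(key, "1", "EX", ttlSeconds, "NX");
  return result === "OK";
}
```

The TTL doubles as a safety valve: if the holder crashes before releasing, the lock expires on its own.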
Example:

```ts
const lockAcquired = await cacheAdapter.acquireLock(
  "cleanup:sessions:lock",
  840, // 14 minutes
);

if (lockAcquired) {
  try {
    await cleanupExpiredSessions();
  } finally {
    await cacheAdapter.releaseLock("cleanup:sessions:lock");
  }
}
```

#### releaseLock
```ts
async releaseLock(key: string): Promise<void>
```

Releases a lock. Safe to call even if the lock is not held.
#### isAvailable

```ts
async isAvailable(): Promise<boolean>
```

Checks whether Redis is connected and responsive. Used by `glitch_protocolEngine` to decide if the cache is usable.
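One plausible way such a check works (an assumption for illustration, not the adapter's actual code) is a `PING` raced against a short timeout, so a hung connection reads as unavailable rather than blocking the caller:

```ts
// Anything with a ping() will do; ioredis clients resolve ping() to "PONG".
interface Pingable {
  ping(): Promise<string>;
}

async function checkAvailable(client: Pingable, timeoutMs = 1000): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("ping timed out")), timeoutMs);
  });
  try {
    // Whichever settles first wins: a PONG, a connection error, or the timeout.
    return (await Promise.race([client.ping(), timeout])) === "PONG";
  } catch {
    return false; // timeout and connection errors both mean "cache not usable"
  } finally {
    clearTimeout(timer);
  }
}
```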
## Graceful Degradation
All cache operations swallow errors. If Redis is unavailable, the system continues in DB-only mode:
- Cache hits become cache misses (slower queries but no crashes)
- Cleanup jobs run on every instance (not distributed-locked, but still works)
- Rate limiting becomes per-instance instead of global
- Sessions don't have the fast-path check, falling back to DB queries
You don't need to handle errors. If Redis fails mid-operation, the adapter silently swallows it, and the next operation tries again.
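The swallow-and-degrade behavior amounts to wrapping each Redis call so an infrastructure failure looks like a cache miss. A sketch of the pattern (a hypothetical helper, not the adapter's actual code):

```ts
// Wrap any cache read so infrastructure errors degrade to a miss.
async function safeGet(
  get: (key: string) => Promise<string | null>,
  key: string,
): Promise<string | null> {
  try {
    return await get(key);
  } catch {
    // Redis is down or the call failed: report "not found" and let
    // the caller fall back to the database.
    return null;
  }
}
```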
Example: Redis goes down.

```ts
// Redis is unavailable
const isAvailable = await cacheAdapter.isAvailable(); // Returns false
const value = await cacheAdapter.get("some:key"); // Returns null (not found)

// The engine detects Redis is down and falls back to DB-only operations.
// Everything still works, just slower.
```

## Redis Key Patterns
The engine uses these Redis keys (defined in its `REDIS_KEYS` constants):
| Key | TTL | Purpose |
|-----|-----|---------|
| `user:{userId}` | 3600s | Session list cache for a user. |
| `session:{sessionId}:active` | 300s | Fast-path check if a session is active. |
| `sessions:active:{userId}` | 300s | Cached list of active sessions for a user. |
| `rt:hash:{hashedToken}` | 2592000s | Cached refresh token record (30 days). |
| `heartbeat:{sessionId}` | 25s | Debounce flag for activity updates. |
| `cleanup:sessions:lock` | 840s | Distributed lock for the cleanup job (14 min). |
You don't need to interact with these directly. The engine manages them.
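For debugging in development, though, the patterns above are easy to reproduce. The helpers below are hypothetical mirrors (the real constants live in the engine's `REDIS_KEYS`), shown only to make the key composition explicit:

```ts
// Hypothetical mirrors of the engine's key patterns, for debugging only.
const redisKey = {
  user: (userId: string) => `user:${userId}`,
  sessionActive: (sessionId: string) => `session:${sessionId}:active`,
  activeSessions: (userId: string) => `sessions:active:${userId}`,
  refreshToken: (hashedToken: string) => `rt:hash:${hashedToken}`,
  heartbeat: (sessionId: string) => `heartbeat:${sessionId}`,
  cleanupLock: () => "cleanup:sessions:lock",
};
```

For example, `redis-cli GET` on the result of `redisKey.sessionActive(id)` shows whether the fast-path flag is currently set for a session.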
## Multi-Instance Scaling
With Redis and the cleanup lock, you can run N instances of your Express app:
```bash
# Instance 1
PORT=4001 node server.js

# Instance 2
PORT=4002 node server.js

# Instance 3
PORT=4003 node server.js
```

- Each instance has its own DB connection pool (e.g., 5 connections)
- Only one instance at a time runs the cleanup job (distributed lock)
- Session cache is shared across instances via Redis
- Refresh token cache is shared across instances via Redis
## Configuration Tips

### Connection Pooling

For 3-10 instances, a single Redis instance is fine. For 10+ instances, consider Redis Cluster or Sentinel:
```ts
import { Cluster } from "ioredis";

const redis = new Cluster([
  { host: "redis-1", port: 6379 },
  { host: "redis-2", port: 6379 },
  { host: "redis-3", port: 6379 },
]);

const cacheAdapter = new RedisCacheAdapter(redis);
```

### Memory Management
Redis stores:
- Session lists (1KB per user with 10 devices)
- Refresh tokens (256 bytes hashed + metadata)
- Heartbeat flags (1 byte per active session)
For 10,000 active users with 5 devices each, that works out to roughly 50MB, which fits comfortably in even a small Redis instance. Note that Redis imposes no memory limit by default, so set `maxmemory` explicitly in production.
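The ~50MB figure is a back-of-envelope estimate: about 1KB per cached session entry (an assumed average covering the value, the key string, and Redis's per-key overhead), times users × devices:

```ts
// Rough sizing model; bytesPerEntry is an assumed average that bundles
// the value, the key string, and Redis's per-key bookkeeping overhead.
function estimateCacheBytes(
  users: number,
  devicesPerUser: number,
  bytesPerEntry = 1024,
): number {
  return users * devicesPerUser * bytesPerEntry;
}

const bytes = estimateCacheBytes(10_000, 5); // 50,000 entries * 1KB
console.log(`${(bytes / (1024 * 1024)).toFixed(1)} MB`); // prints "48.8 MB"
```

Plugging in your own entry size and user counts gives a first-order capacity check before provisioning.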
Monitor with:

```ts
redis.info("memory").then(console.log);
```

### Persistence
For development, in-memory only is fine (fast; data loss on restart is acceptable):

```ts
const redis = new Redis({
  host: "localhost",
  port: 6379,
  db: 1, // Use a separate DB for dev
});
```

For production, enable persistence:
```conf
# redis.conf
save 900 1      # Save if at least 1 key changed in 900s
save 300 10     # Save if at least 10 keys changed in 300s
appendonly yes  # AOF (append-only file) for durability
```

## Peer Dependencies

- `ioredis` `^5.0.0`: the Redis client for Node.js
## Node Version

ESM-only. Requires Node.js >= 18.
