Database Selector for Cloudflare Workers
A production-ready database selector and failover library for Cloudflare Workers with Hyperdrive support. Automatically selects between primary and secondary database connections based on health checks, with intelligent caching and failover capabilities.
Features
- ✅ Automatic Database Selection - Health check-based database selection
- 🚀 Intelligent Caching - 60-second cache using Cloudflare Cache API
- 🔄 Automatic Failover - Seamless failover to secondary database on failure
- 🛡️ High Availability - Ensures continuous operation during database issues
- 📊 Health Check Integration - Works with external health monitoring endpoints
- 🔍 Logging Support - Optional structured logging integration
- 🎯 Simple API - Three main functions for all use cases
- ⚡ Zero Dependencies - Pure Cloudflare Workers implementation
- 🏗️ Hyperdrive Native - Built specifically for Cloudflare Hyperdrive
Installation
npm install @walsys/cloudflare_worker-db_selector

Or with Yarn:

yarn add @walsys/cloudflare_worker-db_selector

Quick Start
Basic Usage
import { dbSelect, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
import postgres from 'postgres';
import { drizzle } from 'drizzle-orm/postgres-js';
export default {
  async fetch(request, env, ctx) {
    // Configure the health check endpoint (do this once at startup)
    configureHealthEndpoint(env.HEALTH_CHECK_URL);

    // Select the appropriate database
    const selectedHyperdrive = await dbSelect(
      env.HYPERDRIVE_PRIMARY,   // Primary database
      env.HYPERDRIVE_SECONDARY  // Secondary database
    );

    // Use the selected database
    const client = postgres(selectedHyperdrive.connectionString);
    const db = drizzle(client);

    // Your database operations here (`users` is a Drizzle schema table imported from your own schema)
    const result = await db.select().from(users);
    return Response.json(result);
  }
};

With Failover
import { dbSelect, failover, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
import postgres from 'postgres';
export default {
  async fetch(request, env, ctx) {
    // Configure health endpoint
    configureHealthEndpoint(env.HEALTH_CHECK_URL);

    let currentSelection = 'PRIMARY';
    let selectedHyperdrive;

    try {
      // Select database
      selectedHyperdrive = await dbSelect(env.HYPERDRIVE_PRIMARY, env.HYPERDRIVE_SECONDARY);

      // Determine which was selected
      currentSelection = selectedHyperdrive.connectionString === env.HYPERDRIVE_PRIMARY.connectionString
        ? 'PRIMARY'
        : 'SECONDARY';

      const client = postgres(selectedHyperdrive.connectionString);

      // Perform database operation
      const result = await client`SELECT * FROM users`;
      return Response.json(result);
    } catch (error) {
      // Attempt failover if using primary database
      if (currentSelection === 'PRIMARY') {
        const failoverDb = await failover(
          env.HYPERDRIVE_PRIMARY,
          env.HYPERDRIVE_SECONDARY,
          currentSelection,
          error
        );
        const client = postgres(failoverDb.connectionString);
        const result = await client`SELECT * FROM users`;
        return Response.json(result);
      }
      throw error; // Both databases failed
    }
  }
};

With Logging
import { dbSelect, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
import { GELFLogger } from '@walsys/cloudflare_worker-gelf_logger';
export default {
  async fetch(request, env, ctx) {
    configureHealthEndpoint(env.HEALTH_CHECK_URL);

    const logger = new GELFLogger({ env, request });
    const log = logger; // or GELFLogger.current if using logger.run()

    const selectedHyperdrive = await dbSelect(
      env.HYPERDRIVE_PRIMARY,
      env.HYPERDRIVE_SECONDARY,
      log // Pass logger for structured logging
    );

    // Logs will include:
    // - Cache hits/misses
    // - Database selection decisions
    // - Failover events
    // - Error details

    ctx.waitUntil(logger.flush());
    return new Response('OK');
  }
};

API Reference
configureHealthEndpoint(url)
Configures the health check endpoint URL. Must be called before using dbSelect().
Parameters:
- url (string): The health check endpoint URL (e.g., https://health.example.com/db_select)
Throws: Error if url is not a valid string
Example:
import { configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
configureHealthEndpoint('https://health.example.com/db_select');

dbSelect(primaryDb, secondaryDb, log?)
Selects the appropriate Hyperdrive database connection based on health check.
Parameters:
- primaryDb (HyperdriveBinding): Primary Hyperdrive binding
- secondaryDb (HyperdriveBinding): Secondary Hyperdrive binding
- log (Object, optional): Logger instance with debug, info, and error methods
Returns: Promise<HyperdriveBinding> - The selected Hyperdrive connection
Throws: Error if health check returns FAULT or invalid response, or if health endpoint not configured
Behavior:
- Validates health endpoint is configured
- Checks Cloudflare Cache API for cached health status (60s TTL)
- If cache miss, fetches from configured health endpoint
- Validates the response (must be PRIMARY, SECONDARY, or FAULT)
- Returns the appropriate database connection
- Throws an error if both databases are unavailable (FAULT)
failover(primaryDb, secondaryDb, currentSelection, error, log?)
Handles database failover when primary DB fails.
Parameters:
- primaryDb (HyperdriveBinding): Primary Hyperdrive binding
- secondaryDb (HyperdriveBinding): Secondary Hyperdrive binding
- currentSelection (DbSelection): The current DB selection that failed ('PRIMARY' or 'SECONDARY')
- error (Error): The error that triggered the failover
- log (Object, optional): Logger instance
Returns: Promise<HyperdriveBinding> - The secondary Hyperdrive connection
Throws: Error if already using secondary database (no further failover possible)
Behavior:
- Invalidates the health check cache
- Logs failover event with error details
- If current selection is PRIMARY, returns SECONDARY database
- If current selection is SECONDARY, throws error (cannot failover further)
invalidateCache(log?)
Manually invalidates the database selection cache.
Parameters:
- log (Object, optional): Logger instance
Returns: Promise<boolean> - True if cache was deleted, false otherwise
Use Cases:
- After manually fixing database issues
- During maintenance operations
- When you need to force a fresh health check
- For testing and debugging
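For example, an operations route might clear the cache after a manual fix so the next dbSelect() performs a fresh health check (a minimal sketch; the route and response text are illustrative):

import { invalidateCache, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';

export default {
  async fetch(request, env, ctx) {
    configureHealthEndpoint(env.HEALTH_CHECK_URL);

    // Force the next dbSelect() call to perform a fresh health check
    const deleted = await invalidateCache();

    return new Response(deleted ? 'Cache invalidated' : 'No cached selection to invalidate');
  }
};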
getCachedSelection(log?)
Gets the current selection from cache without making a health check request.
Parameters:
- log (Object, optional): Logger instance
Returns: Promise<DbSelection|null> - The cached selection or null if not cached
Use Cases:
- Monitoring and observability
- Debugging cache behavior
- Health check dashboards
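For example, a status route can report the cached selection for a dashboard without triggering a health check (a minimal sketch; the response shape is illustrative):

import { getCachedSelection } from '@walsys/cloudflare_worker-db_selector';

export default {
  async fetch(request, env, ctx) {
    // Returns 'PRIMARY', 'SECONDARY', or null if nothing is cached yet
    const cached = await getCachedSelection();

    return Response.json({
      cachedSelection: cached,
      cacheStatus: cached === null ? 'MISS' : 'HIT'
    });
  }
};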
Health Check Endpoint
The library requires you to configure a health check endpoint URL using configureHealthEndpoint(). This endpoint must return one of three text values:
- PRIMARY - Use the primary database
- SECONDARY - Use the secondary database
- FAULT - Both databases are unavailable
Example Health Check Implementation
import postgres from 'postgres';

export default {
  async fetch(request, env, ctx) {
    try {
      // Check primary database
      const primaryHealthy = await checkDatabase(env.HYPERDRIVE_PRIMARY);
      if (primaryHealthy) {
        return new Response('PRIMARY', {
          headers: { 'Content-Type': 'text/plain' }
        });
      }

      // Check secondary database
      const secondaryHealthy = await checkDatabase(env.HYPERDRIVE_SECONDARY);
      if (secondaryHealthy) {
        return new Response('SECONDARY', {
          headers: { 'Content-Type': 'text/plain' }
        });
      }

      // Both unhealthy
      return new Response('FAULT', {
        headers: { 'Content-Type': 'text/plain' },
        status: 503
      });
    } catch (error) {
      return new Response('FAULT', {
        headers: { 'Content-Type': 'text/plain' },
        status: 500
      });
    }
  }
};

async function checkDatabase(hyperdrive) {
  try {
    const client = postgres(hyperdrive.connectionString, {
      connect_timeout: 5,
      max: 1
    });
    await client`SELECT 1`;
    await client.end();
    return true;
  } catch {
    return false;
  }
}

Environment Variables
Required Environment Variables
Set these in your wrangler.toml or wrangler.jsonc:
[vars]
HEALTH_CHECK_URL = "https://health.example.com/db_select"

Required Hyperdrive Bindings
In your wrangler.toml or wrangler.jsonc:
[[hyperdrive]]
binding = "HYPERDRIVE_PRIMARY"
id = "your-primary-hyperdrive-id"
[[hyperdrive]]
binding = "HYPERDRIVE_SECONDARY"
id = "your-secondary-hyperdrive-id"

Advanced Usage
Queue Handler with Failover
import { dbSelect, failover, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
import { GELFLogger } from '@walsys/cloudflare_worker-gelf_logger';
import postgres from 'postgres';
export default {
  async queue(batch, env, ctx) {
    configureHealthEndpoint(env.HEALTH_CHECK_URL);
    const logger = new GELFLogger({ env });

    return logger.run(async () => {
      const log = GELFLogger.current;
      let currentSelection = 'PRIMARY';
      let selectedHyperdrive;

      try {
        // Select database
        selectedHyperdrive = await dbSelect(
          env.HYPERDRIVE_PRIMARY,
          env.HYPERDRIVE_SECONDARY,
          log
        );
        currentSelection = selectedHyperdrive.connectionString === env.HYPERDRIVE_PRIMARY.connectionString
          ? 'PRIMARY'
          : 'SECONDARY';
      } catch (error) {
        log.error('Database selection failed, retrying batch', { error: error.message });
        for (const message of batch.messages) {
          message.retry();
        }
        return;
      }

      const client = postgres(selectedHyperdrive.connectionString);

      try {
        // Process batch (processMessage is your own message handler)
        for (const message of batch.messages) {
          await processMessage(client, message);
          message.ack();
        }
      } catch (error) {
        // Attempt failover
        if (currentSelection === 'PRIMARY') {
          try {
            const failoverDb = await failover(
              env.HYPERDRIVE_PRIMARY,
              env.HYPERDRIVE_SECONDARY,
              currentSelection,
              error,
              log
            );
            const failoverClient = postgres(failoverDb.connectionString);

            // Retry batch with failover database
            for (const message of batch.messages) {
              await processMessage(failoverClient, message);
              message.ack();
            }
            await failoverClient.end();
          } catch (failoverError) {
            log.error('Failover failed, retrying batch', { error: failoverError.message });
            for (const message of batch.messages) {
              message.retry();
            }
          }
        } else {
          // Already using secondary, retry messages
          for (const message of batch.messages) {
            message.retry();
          }
        }
      } finally {
        await client.end();
        ctx.waitUntil(log.flush());
      }
    });
  }
};

RPC Worker with State Tracking
import { WorkerEntrypoint } from 'cloudflare:workers';
import { dbSelect, configureHealthEndpoint } from '@walsys/cloudflare_worker-db_selector';
import { drizzle } from 'drizzle-orm/postgres-js';
import { eq } from 'drizzle-orm';
import postgres from 'postgres';
// `telemetry` is a Drizzle schema table defined in your own schema module

export default class extends WorkerEntrypoint {
  async getSeedData(vin) {
    configureHealthEndpoint(this.env.HEALTH_CHECK_URL);

    const selectedHyperdrive = await dbSelect(
      this.env.HYPERDRIVE_PRIMARY,
      this.env.HYPERDRIVE_SECONDARY
    );

    const client = postgres(selectedHyperdrive.connectionString);
    const db = drizzle(client);

    const data = await db.select()
      .from(telemetry)
      .where(eq(telemetry.vin, vin));

    await client.end();
    return data;
  }
}

Cache Behavior
The library implements a 60-second cache TTL for health check results:
- First Request: Fetches from health endpoint, stores in Cloudflare Cache API
- Subsequent Requests (within 60s): Returns cached result instantly
- After 60s: Cache expires, next request fetches fresh health status
- On Failover: Cache is immediately invalidated to force fresh check
This design provides:
- Low Latency: Most requests use cached result (microsecond response)
- Fresh Data: Regular health check updates every 60 seconds
- Failure Recovery: Immediate cache invalidation on errors
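The caching is handled inside the library; purely for illustration, the pattern looks roughly like the sketch below. The synthetic cache key and helper name are hypothetical, not the library's actual internals.

// Illustration of the caching pattern, not the library's actual code
const CACHE_KEY = 'https://db-selector.internal/health'; // hypothetical synthetic cache key

async function getHealthStatus(healthEndpointUrl) {
  const cache = caches.default;

  // 1. Try the Cache API first (fast path)
  const cached = await cache.match(CACHE_KEY);
  if (cached) {
    return cached.text(); // 'PRIMARY' | 'SECONDARY' | 'FAULT'
  }

  // 2. Cache miss: ask the health endpoint
  const response = await fetch(healthEndpointUrl);
  const status = (await response.text()).trim();

  // 3. Store for 60 seconds; max-age controls the Cache API TTL
  await cache.put(
    CACHE_KEY,
    new Response(status, { headers: { 'Cache-Control': 'max-age=60' } })
  );

  return status;
}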
Error Handling
The library throws descriptive errors for different scenarios:
try {
  const db = await dbSelect(env.HYPERDRIVE_PRIMARY, env.HYPERDRIVE_SECONDARY);
} catch (error) {
  if (error.message.includes('FAULT')) {
    // Both databases are unavailable
    return new Response('Service Unavailable', { status: 503 });
  } else if (error.message.includes('Invalid health check response')) {
    // Health endpoint returned unexpected value
    return new Response('Configuration Error', { status: 500 });
  } else if (error.message.includes('Health endpoint returned')) {
    // Health endpoint is down or unreachable
    return new Response('Health Check Failed', { status: 500 });
  }
  throw error;
}

Best Practices
- Always Use with try/catch: Health checks can fail; handle errors gracefully (see the combined sketch after this list)
- Pass Logger Instance: Enables debugging and monitoring
- Track Current Selection: Store which DB is in use for failover logic
- Clean Up Connections: Always close database connections in finally blocks
- Monitor Cache Hit Rate: Use logging to track cache effectiveness
- Test Failover Scenarios: Regularly test failover logic in staging
- Set Connection Timeouts: Use short timeouts for health checks (3-5s)
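A sketch combining several of these practices inside a fetch handler (assumes env, ctx, and a log instance are in scope; the query, pool size, and timeout values are illustrative):

configureHealthEndpoint(env.HEALTH_CHECK_URL);

let client;
try {
  const selectedHyperdrive = await dbSelect(env.HYPERDRIVE_PRIMARY, env.HYPERDRIVE_SECONDARY, log);

  client = postgres(selectedHyperdrive.connectionString, {
    connect_timeout: 5, // short timeout so failures surface quickly
    max: 5
  });

  return Response.json(await client`SELECT * FROM users`);
} catch (error) {
  log.error('Database request failed', { error: error.message });
  return new Response('Service Unavailable', { status: 503 });
} finally {
  if (client) {
    // Always release the connection, even on error paths
    ctx.waitUntil(client.end());
  }
}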
TypeScript Support
The library includes JSDoc type definitions for excellent IDE support:
import type { HyperdriveBinding, DbSelection } from '@walsys/cloudflare_worker-db_selector';
import { configureHealthEndpoint, dbSelect } from '@walsys/cloudflare_worker-db_selector';

configureHealthEndpoint('https://health.example.com/db_select');

// Types are inferred automatically
const selectedDb = await dbSelect(env.HYPERDRIVE_PRIMARY, env.HYPERDRIVE_SECONDARY);
// selectedDb is typed as HyperdriveBinding

Performance
- Cache Hit: ~0.1ms (in-memory lookup)
- Cache Miss: ~50-100ms (health endpoint fetch + cache write)
- Failover: ~100-150ms (cache clear + secondary connection)
Compatibility
- Cloudflare Workers: All runtime versions
- Cloudflare Pages Functions: Full support
- Node.js: Not supported (Cloudflare Workers only)
- Hyperdrive: Required for database connections
Troubleshooting
Cache Not Working
Symptoms: Every request fetches from health endpoint
Solutions:
- Check Cache API is enabled in your Cloudflare account
- Verify health endpoint returns proper Cache-Control headers
- Ensure you're using caches.default (not a custom cache)
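If you control the health endpoint, returning an explicit Cache-Control header makes the intended 60-second freshness window unambiguous (an illustrative snippet for the health endpoint example above; the exact header value is an assumption, not a library requirement):

return new Response('PRIMARY', {
  headers: {
    'Content-Type': 'text/plain',
    'Cache-Control': 'public, max-age=60' // advertise the same 60s freshness window the selector uses
  }
});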
Failover Not Triggering
Symptoms: Errors not switching to secondary database
Solutions:
- Verify currentSelection tracking is correct
- Check the error is being caught at the right level
- Ensure secondary database binding is configured
Health Endpoint Unreachable
Symptoms: All requests fail with health check errors
Solutions:
- Verify health endpoint URL is correct and accessible
- Check network connectivity from Workers
- Implement fallback logic for health check failures
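One possible fallback, sketched below and not built into the library, is to default to the primary binding when the health endpoint itself is unreachable (matching on the error message shown in the Error Handling section; assumes env and a log instance are in scope inside a handler):

let selectedHyperdrive;
try {
  selectedHyperdrive = await dbSelect(env.HYPERDRIVE_PRIMARY, env.HYPERDRIVE_SECONDARY, log);
} catch (error) {
  if (error.message.includes('Health endpoint returned')) {
    // Health endpoint is unreachable: assume the primary is still usable
    log.error('Health check unreachable, defaulting to primary', { error: error.message });
    selectedHyperdrive = env.HYPERDRIVE_PRIMARY;
  } else {
    throw error; // genuine FAULT or configuration error
  }
}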
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - see LICENSE file for details
Support
- Issues: GitHub Issues
- Documentation: GitHub README
Related Packages
- @walsys/cloudflare_worker-gelf_logger - GELF logging for Cloudflare Workers
- drizzle-orm - TypeScript ORM for SQL databases
- postgres - PostgreSQL client for Node.js and Workers
