@gl-life/gl-life-database
v1.2.0
Enhanced gl-life-data wrapper with CloudForge features - Type-safe database abstraction layer with multi-tenancy, caching, migrations, and Cloudflare Workers support.
About
@gl-life/gl-life-database is a comprehensive database abstraction layer that extends gl-life-data with enterprise features for multi-tenant SaaS applications. Built with TypeScript and designed for type safety, it supports both cloud-native deployments (Cloudflare Workers) and enterprise bring-your-own-database (BYOD) scenarios with SQL Server, MySQL, PostgreSQL, Oracle, and more. Perfect for pro users who want to leverage their existing infrastructure while gaining powerful features like Flex tables, multi-tenancy, and intelligent caching.
Key Features
- 🗄️ Type-Safe Query Builder - Fluent API with full TypeScript support extending gl-life-data
- 🔀 Flex Table System - Dynamic schema without ALTER TABLE, metadata-driven field mapping
- 🏢 Enterprise Database Support - Full support for SQL Server, MySQL, PostgreSQL, Oracle, and more
- 👥 Multi-Tenancy - Built-in tenant isolation with Row-Level Security (RLS)
- ⚡ Flexible Caching - Memory cache, Redis, or Cloudflare KV backends
- 🔄 Migrations - Version-controlled database migrations with rollback support
- 📡 Data Streaming - Memory-efficient streaming for large datasets with backpressure control
- ☁️ Cloud + On-Premise - Deploy on Cloudflare Workers OR bring your own infrastructure
- 🔒 SQL Injection Protection - Parameterized queries with automatic sanitization
- 🧪 Well-Tested - >95% test coverage with comprehensive security tests
- 📊 Performance Optimized - Query building overhead <1ms (P95)
Part of GoodLife Sargam
This package is part of the GoodLife Sargam ecosystem - a comprehensive suite of 80 packages for building modern cloud applications. Visit gl.life for more information.
Installation
npm install @gl-life/gl-life-database
Dependencies
The package has minimal dependencies:
- @gl-life/gl-life-core ^1.0.1 - Foundation utilities (Result, Option, Logger)
- gl-life-data latest - Base database package
- zod ^4.2.1 - Runtime type validation
Quick Start
Basic Usage
import {
ConnectionManager,
TypeSafeQueryBuilder,
Logger
} from '@gl-life/gl-life-database';
// Create a connection manager
const logger = new Logger({ level: 'info' });
const connectionManager = new ConnectionManager(logger);
// Connect to database
const connection = await connectionManager.connect({
type: 'sqlite',
filename: './myapp.db'
});
// Build and execute a query
const builder = new TypeSafeQueryBuilder('users', null, logger);
const sql = builder
.select(['id', 'name', 'email'])
.where('status', '=', 'active')
.orderBy('created_at', 'DESC')
.limit(10)
.toSQL();
const result = await connection.execute(sql);
Multi-Tenant Application
import {
TenantContext,
QueryWrapper,
Logger
} from '@gl-life/gl-life-database';
// Set up tenant context
const logger = new Logger({ level: 'info' });
const tenantContext = new TenantContext(logger);
// Set current tenant
tenantContext.setTenant({
tenantId: 'tenant-123',
isAdmin: false
});
// Queries automatically include tenant_id filtering
const wrapper = new QueryWrapper(tenantContext, logger);
const users = await wrapper.query('users', {
where: { status: 'active' }
});
// Automatically adds: WHERE tenant_id = 'tenant-123' AND status = 'active'
Caching
import {
MemoryCache,
CacheConfig,
Logger
} from '@gl-life/gl-life-database';
const config: CacheConfig = {
enabled: true,
backend: 'MEMORY',
maxItems: 1000,
defaultTTL: 300,
evictionPolicy: 'LRU',
invalidationStrategy: 'TIME_BASED'
};
const logger = new Logger({ level: 'info' });
const cache = new MemoryCache(config, logger);
// Cache query results
await cache.set('user:123', { id: 123, name: 'Alice' }, 60);
const user = await cache.get('user:123');
// Pattern-based invalidation
await cache.invalidatePattern('user:*');
Database Migrations
import {
MigrationPlanner,
MigrationConfig,
Logger
} from '@gl-life/gl-life-database';
const config: MigrationConfig = {
enabled: true,
migrationTable: 'migrations',
autoRun: false
};
const logger = new Logger({ level: 'info' });
const planner = new MigrationPlanner(config, logger);
// Load migrations from directory
const migrations = [
{
version: '001',
name: 'create_users_table',
up: 'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)',
down: 'DROP TABLE users'
}
];
await planner.loadMigrations(migrations);
// Plan migration execution
const plan = planner.planMigration('001');
// Execute: await executor.executeMigration(plan);
Cloudflare D1 Integration
import { D1Adapter, Logger } from '@gl-life/gl-life-database';
// In Cloudflare Worker
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const logger = new Logger({ level: 'info' });
const adapter = new D1Adapter(env.DB, logger);
const result = await adapter.execute(
'SELECT * FROM users WHERE id = ?',
[123]
);
return new Response(JSON.stringify(result));
}
};
Enterprise Database Support (BYOD - Bring Your Own Database)
Pro users can connect to their existing enterprise databases instead of using Cloudflare Workers. Full support for:
SQL Server (Microsoft)
import { ConnectionManager, Logger } from '@gl-life/gl-life-database';
const logger = new Logger({ level: 'info' });
const manager = new ConnectionManager(logger);
const connection = await manager.connect({
type: 'sqlserver',
host: 'your-sql-server.database.windows.net',
port: 1433,
database: 'production_db',
username: 'admin',
password: process.env.DB_PASSWORD,
options: {
encrypt: true, // Azure SQL requires encryption
trustServerCertificate: false,
connectionTimeout: 30000
}
});
// All features work identically: Flex tables, multi-tenancy, caching, migrations
const builder = new TypeSafeQueryBuilder('users', metaDataService, logger);
// ... same as any other database
PostgreSQL
const connection = await manager.connect({
type: 'postgres',
host: 'your-postgres-instance.com',
port: 5432,
database: 'production_db',
username: 'dbuser',
password: process.env.DB_PASSWORD,
options: {
ssl: { rejectUnauthorized: false },
poolSize: 20,
idleTimeoutMillis: 30000
}
});
MySQL / MariaDB
const connection = await manager.connect({
type: 'mysql',
host: 'mysql-cluster.internal.com',
port: 3306,
database: 'production_db',
username: 'dbuser',
password: process.env.DB_PASSWORD,
options: {
connectionLimit: 10,
charset: 'utf8mb4',
timezone: 'Z'
}
});
Oracle Database
const connection = await manager.connect({
type: 'oracle',
host: 'oracle-enterprise.company.com',
port: 1521,
database: 'ORCL', // Service name
username: 'system',
password: process.env.DB_PASSWORD,
options: {
poolMax: 10,
poolMin: 2,
poolIncrement: 1
}
});
SQLite (Development/Testing)
const connection = await manager.connect({
type: 'sqlite',
filename: './dev.db' // Or ':memory:' for in-memory
});
Key Benefits for Enterprise Users:
- ✅ Use Existing Infrastructure - No migration required, connect to your current databases
- ✅ All Features Work - Flex tables, multi-tenancy, caching, migrations work on any database
- ✅ Security Compliant - Keep data on-premise, meet regulatory requirements
- ✅ Cost Control - Use your existing database licenses and hardware
- ✅ Hybrid Deployments - Mix cloud (D1) and on-premise databases in same application
- ✅ Connection Pooling - Built-in connection management for all enterprise databases
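The hybrid-deployment idea above can be sketched as a config-driven factory that routes each request to a cloud or on-premise backend. The interfaces and names below are purely illustrative assumptions, not the package's actual API:

```typescript
// Illustrative sketch: route to cloud (D1) or on-premise databases from one
// codebase, based on a connection config. Hypothetical types, not the real API.
interface DbConfig {
  type: 'd1' | 'postgres' | 'sqlserver' | 'sqlite';
  host?: string;
}

interface Executor {
  describe(): string;
}

function createExecutor(config: DbConfig): Executor {
  // Each branch would wrap the real adapter or driver for that backend.
  switch (config.type) {
    case 'd1':
      return { describe: () => 'cloud: Cloudflare D1' };
    case 'postgres':
      return { describe: () => `on-premise: PostgreSQL at ${config.host}` };
    case 'sqlserver':
      return { describe: () => `on-premise: SQL Server at ${config.host}` };
    case 'sqlite':
      return { describe: () => 'local: SQLite file' };
    default:
      throw new Error('Unsupported database type');
  }
}

const cloud = createExecutor({ type: 'd1' });
const onPrem = createExecutor({ type: 'postgres', host: 'db.internal' });
```

Because both executors satisfy the same interface, application code stays identical whether a tenant's data lives in D1 or behind the corporate firewall.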
Flex Table System
The Flex Table System is a powerful feature from gl-life-data that provides dynamic schema capabilities without ALTER TABLE operations. Perfect for multi-tenant SaaS applications where each tenant needs custom fields.
import { TypeSafeQueryBuilder, Logger } from '@gl-life/gl-life-database';
import { MetaDataService } from 'gl-life-data';
// Initialize metadata service
const metaDataService = new MetaDataService('./database.db');
// Create metadata mapping: logical field → physical column
await metaDataService.createMapping({
tableName: 'products',
logicalFieldName: 'productName',
dataType: 'string', // Maps to string_field_1
isRequired: true,
description: 'Product name'
});
await metaDataService.createMapping({
tableName: 'products',
logicalFieldName: 'price',
dataType: 'decimal', // Maps to decimal_field_1
isRequired: true
});
// Query using logical field names
const builder = new TypeSafeQueryBuilder('products', metaDataService, logger);
const sql = builder
.select(['productName', 'price']) // Uses logical names
.where('price', '>', 100)
.toSQL();
// Generates: SELECT string_field_1, decimal_field_1 FROM flex_table WHERE decimal_field_1 > ?
Key Benefits:
- Universal Schema: All tables use same physical structure (string_field_1, int_field_1, etc.)
- No ALTER TABLE: Add fields by creating metadata entries
- Type Safety: 10 fields per data type (string, int, decimal, date, boolean, text, datetime, float, bigint, uuid)
- Multi-Tenant Ready: Different tenants can have different custom fields
- Query Transparency: Use logical field names, metadata handles physical mapping
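The logical-to-physical mapping described above can be sketched in a few lines; this is a conceptual illustration of the technique, not the MetaDataService implementation:

```typescript
// Illustrative sketch of metadata-driven field mapping: logical field names
// resolve to generic physical columns on the shared flex_table.
type FlexMapping = { logical: string; physical: string };

const productMappings: FlexMapping[] = [
  { logical: 'productName', physical: 'string_field_1' },
  { logical: 'price', physical: 'decimal_field_1' },
];

function toPhysical(logical: string, mappings: FlexMapping[]): string {
  const hit = mappings.find((m) => m.logical === logical);
  if (!hit) throw new Error(`No mapping for field: ${logical}`);
  return hit.physical;
}

// A SELECT over logical names compiles to physical columns.
function buildSelect(fields: string[], mappings: FlexMapping[]): string {
  const cols = fields.map((f) => toPhysical(f, mappings)).join(', ');
  return `SELECT ${cols} FROM flex_table`;
}

const sql = buildSelect(['productName', 'price'], productMappings);
// 'SELECT string_field_1, decimal_field_1 FROM flex_table'
```

Adding a tenant-specific custom field is then just inserting one more mapping row; no ALTER TABLE is ever issued.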
Flex Table Schema:
CREATE TABLE flex_table (
id TEXT PRIMARY KEY,
tenant_id TEXT,
-- 10 string fields
string_field_1 TEXT,
string_field_2 TEXT,
...
string_field_10 TEXT,
-- 10 int fields
int_field_1 INTEGER,
...
int_field_10 INTEGER,
-- Plus: decimal, date, boolean, text, datetime, float, bigint, uuid fields
json_data TEXT, -- Overflow for extra fields
created_at TEXT,
updated_at TEXT
);
See examples/06-flex-tables.ts and examples/07-flex-with-multi-tenancy.ts for complete workflows.
Data Streaming
Process large datasets efficiently without loading everything into memory. Perfect for exporting data, batch processing, or handling millions of rows.
import { DatabaseConnectionManager, Logger } from '@gl-life/gl-life-database';
const logger = new Logger({ level: 'info' });
const manager = new DatabaseConnectionManager({ name: 'app_db' }, logger);
await manager.initialize();
const connection = await manager.acquire();
// Stream row-by-row (minimal memory usage)
for await (const user of connection.unwrap().stream(
'SELECT * FROM users WHERE active = ?',
[true],
{ batchSize: 1000 }
)) {
await processUser(user);
}
// Stream in batches for bulk operations
for await (const batch of connection.unwrap().streamBatch(
'SELECT * FROM users',
[],
{ batchSize: 5000, maxMemoryMB: 100 }
)) {
await bulkExport(batch); // Process 5000 rows at a time
}
// Stream with transformation
for await (const email of connection.unwrap().streamWithTransform(
'SELECT * FROM users WHERE age > ?',
[18],
(user) => user.email.toLowerCase(),
{ batchSize: 500 }
)) {
await sendEmail(email);
}
// Advanced control with StreamReader
const reader = connection.unwrap().createStreamReader({
batchSize: 10000,
maxMemoryMB: 200,
backpressure: true
});
await reader.open('SELECT * FROM orders ORDER BY created_at', []);
while (reader.hasMore()) {
const batch = await reader.fetchBatch();
if (batch.isOk()) {
console.log(`Memory: ${reader.getMemoryUsage()} MB`);
console.log(`Throughput: ${reader.stats.rowsPerSecond} rows/sec`);
await processBatch(batch.unwrap());
}
}
await reader.close();
Key Benefits:
- ✅ Memory Efficient - Only current batch in memory, not entire dataset
- ✅ Backpressure Control - Automatic pause/resume prevents memory overflow
- ✅ Real-time Stats - Monitor throughput, memory usage, progress
- ✅ Composable Pipelines - Chain filter, map, aggregate processors
- ✅ Error Handling - Automatic cleanup on errors or interruption
See examples/08-streaming.ts for 8 comprehensive examples including CSV export, data pipelines, and performance comparisons.
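The batching technique underlying these APIs can be sketched with a plain async generator; this shows the general pattern (one batch in memory, consumer-driven backpressure), not the package's internal implementation:

```typescript
// Illustrative sketch: batched streaming via an async generator. Only one
// batch is held in memory, and the consumer's `await` naturally throttles
// the producer (backpressure).
async function* streamBatches<T>(
  fetchBatch: (offset: number, limit: number) => Promise<T[]>,
  batchSize: number
): AsyncGenerator<T[]> {
  let offset = 0;
  while (true) {
    const batch = await fetchBatch(offset, batchSize);
    if (batch.length === 0) return; // no more rows
    yield batch;
    offset += batch.length;
  }
}

// Simulated table of 12 rows, fetched 5 at a time.
const rows = Array.from({ length: 12 }, (_, i) => i + 1);
const fetchPage = async (offset: number, limit: number) =>
  rows.slice(offset, offset + limit);

async function run(): Promise<number[]> {
  const sizes: number[] = [];
  for await (const batch of streamBatches(fetchPage, 5)) {
    sizes.push(batch.length); // process one batch at a time
  }
  return sizes; // batches of 5, 5, then the 2 remaining rows
}
```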
Usage
Query Builder
The type-safe query builder provides a fluent API for constructing SQL queries:
import { TypeSafeQueryBuilder, Logger } from '@gl-life/gl-life-database';
const logger = new Logger({ level: 'info' });
const builder = new TypeSafeQueryBuilder('orders', null, logger);
// Complex query with joins
const sql = builder
.select(['orders.id', 'orders.total', 'customers.name'])
.join('customers', 'customers.id', '=', 'orders.customer_id')
.where('orders.status', '=', 'completed')
.where('orders.total', '>', 100)
.orderBy('orders.created_at', 'DESC')
.limit(20)
.offset(0)
.toSQL();
// Outputs: SELECT orders.id, orders.total, customers.name
// FROM orders
// JOIN customers ON customers.id = orders.customer_id
// WHERE orders.status = ? AND orders.total > ?
// ORDER BY orders.created_at DESC
// LIMIT 20 OFFSET 0
Transactions
Execute multiple operations atomically:
import {
TransactionManager,
TransactionConfig,
Logger
} from '@gl-life/gl-life-database';
const config: TransactionConfig = {
isolationLevel: 'READ_COMMITTED',
timeout: 30000,
retryAttempts: 3
};
const logger = new Logger({ level: 'info' });
const txManager = new TransactionManager(connection, config, logger);
await txManager.execute(async (tx) => {
await tx.execute('INSERT INTO accounts (balance) VALUES (?)', [1000]);
await tx.execute('UPDATE accounts SET balance = balance - ? WHERE id = ?', [100, 1]);
await tx.execute('UPDATE accounts SET balance = balance + ? WHERE id = ?', [100, 2]);
// Automatically commits on success, rolls back on error
});
Tenant Isolation
Ensure data isolation in multi-tenant applications:
import {
TenantContext,
QueryWrapper,
Logger
} from '@gl-life/gl-life-database';
const logger = new Logger({ level: 'info' });
const tenantContext = new TenantContext(logger);
// Middleware to set tenant from request
function setTenantFromRequest(request: Request) {
const tenantId = request.headers.get('X-Tenant-ID');
if (!tenantId) {
throw new Error('Missing tenant ID');
}
tenantContext.setTenant({
tenantId,
isAdmin: false
});
}
// All queries automatically filtered by tenant
const wrapper = new QueryWrapper(tenantContext, logger);
const data = await wrapper.query('products', {});
// Automatically adds: WHERE tenant_id = ?
// Admin queries can bypass tenant filter
tenantContext.setTenant({
tenantId: 'admin',
isAdmin: true
});
const allData = await wrapper.query('products', {}, { bypassTenantFilter: true });
Cache Invalidation Strategies
import {
MemoryCache,
CacheConfig,
InvalidationStrategy,
Logger
} from '@gl-life/gl-life-database';
const logger = new Logger({ level: 'info' });
// Time-based invalidation
const timeCache = new MemoryCache({
enabled: true,
backend: 'MEMORY',
maxItems: 1000,
defaultTTL: 300,
evictionPolicy: 'LRU',
invalidationStrategy: 'TIME_BASED'
}, logger);
// Event-based invalidation
const eventCache = new MemoryCache({
enabled: true,
backend: 'MEMORY',
maxItems: 1000,
defaultTTL: 0, // No TTL, manual invalidation
evictionPolicy: 'LRU',
invalidationStrategy: 'EVENT_BASED'
}, logger);
// Invalidate on data changes
await eventCache.set('user:123', userData);
// ... later after update
await eventCache.delete('user:123');
// or invalidate all users
await eventCache.invalidatePattern('user:*');
API Overview
Core Classes
- ConnectionManager - Manages database connections and connection pooling
- TypeSafeQueryBuilder - Fluent API for building type-safe SQL queries
- TransactionManager - Handles database transactions with isolation levels
- TenantContext - Manages multi-tenant context and isolation
- QueryWrapper - Wraps queries with automatic tenant filtering
- MemoryCache - In-memory LRU cache for query results
- KVCache - Cloudflare Workers KV-backed cache
- MigrationPlanner - Plans and validates database migrations
- D1Adapter - Cloudflare D1 database adapter
- DurableObjectStorage - Durable Objects storage adapter
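Conceptually, the automatic tenant filtering that QueryWrapper performs amounts to prepending a tenant_id predicate to every non-admin query. The helper below is a hypothetical sketch of that behavior, not the class's actual code:

```typescript
// Illustrative sketch of automatic tenant filtering: non-admin queries get a
// tenant_id predicate prepended; admins may bypass it explicitly.
interface Tenant {
  tenantId: string;
  isAdmin: boolean;
}

function withTenantFilter(
  baseWhere: string,
  tenant: Tenant,
  bypass = false
): { where: string; params: string[] } {
  if (tenant.isAdmin && bypass) {
    // Admin explicitly opted out of tenant isolation.
    return { where: baseWhere, params: [] };
  }
  const where = baseWhere
    ? `tenant_id = ? AND ${baseWhere}`
    : 'tenant_id = ?';
  return { where, params: [tenant.tenantId] };
}

const q = withTenantFilter("status = 'active'", {
  tenantId: 'tenant-123',
  isAdmin: false,
});
// q.where: "tenant_id = ? AND status = 'active'", q.params: ['tenant-123']
```

Keeping the tenant id in the parameter list (rather than interpolating it) preserves the same injection safety as every other query.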
Type System
All operations return Result<T, E> types from @gl-life/gl-life-core for functional error handling:
import { Result, Option } from '@gl-life/gl-life-database';
const result: Result<User[], DatabaseError> = await queryUsers();
if (result.isOk()) {
const users = result.unwrap();
console.log('Found users:', users);
} else {
const error = result.unwrapErr();
console.error('Query failed:', error.message);
}
Configuration
All components accept configuration objects with sensible defaults:
interface CacheConfig {
enabled: boolean;
backend: 'MEMORY' | 'KV' | 'HYBRID';
maxItems?: number;
defaultTTL?: number;
evictionPolicy?: 'LRU' | 'LFU' | 'FIFO';
invalidationStrategy?: 'TIME_BASED' | 'EVENT_BASED' | 'HYBRID';
}
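The LRU option in evictionPolicy refers to the standard least-recently-used strategy. A minimal sketch of the idea, relying on Map's insertion-order iteration (illustrative only, not the MemoryCache implementation):

```typescript
// Minimal LRU cache sketch: Map preserves insertion order, so the first key
// is always the least recently used.
class LruCache<V> {
  private store = new Map<string, V>();
  constructor(private maxItems: number) {}

  get(key: string): V | undefined {
    const value = this.store.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark this entry as most recently used.
    this.store.delete(key);
    this.store.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    if (this.store.has(key)) {
      this.store.delete(key);
    } else if (this.store.size >= this.maxItems) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.store.keys().next().value as string;
      this.store.delete(oldest);
    }
    this.store.set(key, value);
  }

  has(key: string): boolean {
    return this.store.has(key);
  }
}

const cache = new LruCache<number>(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // 'a' becomes most recently used
cache.set('c', 3); // evicts 'b', the least recently used
```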
interface TransactionConfig {
isolationLevel: 'READ_UNCOMMITTED' | 'READ_COMMITTED' | 'REPEATABLE_READ' | 'SERIALIZABLE';
timeout: number;
retryAttempts: number;
}
interface MigrationConfig {
enabled: boolean;
migrationTable: string;
autoRun: boolean;
}
Security
This library implements comprehensive security measures:
- ✅ SQL Injection Protection - All queries use parameterized statements
- ✅ Tenant Isolation - Automatic tenant_id filtering prevents data leaks
- ✅ Type Safety - TypeScript compile-time checks + Zod runtime validation
- ✅ Audit Logging - All operations are logged for security auditing
- ✅ Zero Vulnerabilities - Regular security audits with npm audit
See SECURITY.md for detailed security audit results and best practices.
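The reason parameterized statements block injection is that user input never becomes SQL text; it travels separately as a bound parameter. A hedged sketch of the principle (hypothetical helper, not the library's builder):

```typescript
// Illustrative sketch: user input is kept out of the SQL string entirely and
// passed as a bound parameter instead.
function buildWhere(
  column: string,
  value: string
): { sql: string; params: string[] } {
  // Identifiers cannot be bound as parameters, so whitelist them instead.
  if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(column)) {
    throw new Error(`Invalid column name: ${column}`);
  }
  return { sql: `SELECT * FROM users WHERE ${column} = ?`, params: [value] };
}

// Malicious input stays inert: it is data in params, never part of the SQL.
const q = buildWhere('name', "'; DROP TABLE users; --");
// q.sql: 'SELECT * FROM users WHERE name = ?'
```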
Performance
Query building performance benchmarks (P95):
- Simple SELECT: <1ms
- Complex JOIN: <1ms
- GROUP BY with HAVING: <1ms
- Multiple WHERE conditions: <1ms
Cache performance benchmarks (P95):
- Cache GET (hit): <1ms
- Cache SET: <1ms
- Pattern invalidation (100 keys): <10ms
See PERFORMANCE.md for detailed benchmark results.
Testing
Run the test suite:
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Generate coverage report
npm run test:coverage
# Run performance benchmarks
npm run bench
# Run specific benchmark
npm run bench:query
Current test coverage: 95.9%
- Lines: 95.9%
- Functions: 96.2%
- Branches: 95.1%
- Statements: 95.9%
Development
# Install dependencies
npm install
# Build the package
npm run build
# Run tests
npm test
# Run benchmarks
npm run bench
Contributing
This package is part of the GoodLife Sargam ecosystem. For contribution guidelines, please visit gl.life.
Support
For questions, issues, or feature requests:
- Email: [email protected]
- Website: gl.life
Documentation
- API Documentation - Detailed API reference for AI agents
- Security Audit - Security audit report and best practices
- Performance Benchmarks - Performance baselines and optimization guide
License
Apache-2.0 © GoodLife
Part of the GoodLife Sargam ecosystem
For more information about the GoodLife Sargam ecosystem and our suite of 80 packages for building modern cloud applications, visit gl.life.
