pgserve
v1.1.5
Embedded PostgreSQL server with true concurrent connections - zero config, auto-provision databases
Quick Start
npx pgserve
Connect from any PostgreSQL client — databases auto-create on first connection:
psql postgresql://localhost:8432/myapp
Features
- Zero config: databases are created automatically on first connection
- Real PostgreSQL binaries with true concurrent connections
- In-memory by default, with optional persistent storage (--data) and RAM mode (--ram, Linux only)
- Cluster mode across CPU cores, auto-enabled on multi-core machines
- Built-in pgvector for vector search (--pgvector)
- Async replication to a real PostgreSQL instance (--sync-to)
Installation
# Zero install (recommended)
npx pgserve
# Global install
npm install -g pgserve
# Project dependency
npm install pgserve
PostgreSQL binaries are automatically downloaded on first run (~100MB).
Windows
Download pgserve-windows-x64.exe from GitHub Releases.
Double-click to run, or use CLI:
pgserve-windows-x64.exe --port 5432
pgserve-windows-x64.exe --data C:\pgserve-data
CLI Reference
pgserve [options]
Options:
  --port <number>        PostgreSQL port (default: 8432)
  --data <path>          Data directory for persistence (default: in-memory)
  --ram                  Use RAM storage via /dev/shm (Linux only, fastest)
  --host <host>          Host to bind to (default: 127.0.0.1)
  --log <level>          Log level: error, warn, info, debug (default: info)
  --cluster              Force cluster mode (auto-enabled on multi-core)
  --no-cluster           Force single-process mode
  --workers <n>          Number of worker processes (default: CPU cores)
  --no-provision         Disable auto-provisioning of databases
  --sync-to <url>        Sync to real PostgreSQL (async replication)
  --sync-databases <p>   Database patterns to sync (comma-separated)
  --pgvector             Auto-enable pgvector extension on new databases
  --max-connections <n>  Max concurrent connections (default: 1000)
  --help                 Show help message

# Development (memory mode, auto-clusters on multi-core)
pgserve
# RAM mode (Linux only, 2x faster)
pgserve --ram
# Persistent storage
pgserve --data /var/lib/pgserve
# Custom port
pgserve --port 5433
# Enable pgvector for AI/RAG applications
pgserve --pgvector
# RAM mode + pgvector (fastest for AI workloads)
pgserve --ram --pgvector
# Sync to production PostgreSQL
pgserve --sync-to "postgresql://user:pass@host:5432/prod"
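Flags combine freely. For example, a persistent instance with a fixed worker count, a higher connection limit, and quieter logging (the values below are illustrative):
# Persistent data, 4 workers, 2000 connections, warnings only
pgserve --data ./pgdata --workers 4 --max-connections 2000 --log warn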
API
import { startMultiTenantServer } from 'pgserve';
const server = await startMultiTenantServer({
  port: 8432,
  host: '127.0.0.1',
  baseDir: null,         // null = memory mode
  logLevel: 'info',
  autoProvision: true,
  enablePgvector: true,  // Auto-enable pgvector on new databases
  syncTo: null,          // Optional: PostgreSQL URL for replication
  syncDatabases: null    // Optional: patterns like "myapp,tenant_*"
});
// Get stats
console.log(server.getStats());
// Graceful shutdown
await server.stop();
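A common use of the programmatic API is spinning up a throwaway database for integration tests. A minimal sketch; the port, database name, and file name are illustrative, not part of pgserve:
// test-setup.mjs (hypothetical): boot an in-memory pgserve for a test run
import { startMultiTenantServer } from 'pgserve';
import pg from 'pg';

const server = await startMultiTenantServer({ port: 8433, baseDir: null, logLevel: 'error' });

// The database is auto-provisioned the first time a client connects to it
const client = new pg.Client({ connectionString: 'postgresql://localhost:8433/it_tests' });
await client.connect();
await client.query('SELECT 1');

await client.end();
await server.stop();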
Framework Integration
import pg from 'pg';
const client = new pg.Client({
  connectionString: 'postgresql://localhost:8432/myapp'
});
await client.connect();
await client.query('CREATE TABLE users (id SERIAL, name TEXT)');
await client.query("INSERT INTO users (name) VALUES ('Alice')");
const result = await client.query('SELECT * FROM users');
console.log(result.rows);
await client.end();
// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
# .env
DATABASE_URL="postgresql://localhost:8432/myapp"
# Run migrations
npx prisma migrate dev
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';
import { pgTable, serial, text } from 'drizzle-orm/pg-core';

const pool = new Pool({
  connectionString: 'postgresql://localhost:8432/myapp'
});
const db = drizzle(pool);

// Minimal schema definition so the query below is runnable
const usersTable = pgTable('users', { id: serial('id'), name: text('name') });
const users = await db.select().from(usersTable);
Async Replication
Sync ephemeral pgserve data to a real PostgreSQL database. Uses native logical replication for zero performance impact on the hot path.
# Sync all databases
pgserve --sync-to "postgresql://user:pass@host:5432/mydb"
# Sync specific databases (supports wildcards)
pgserve --sync-to "postgresql://..." --sync-databases "myapp,tenant_*"
Replication is handled by PostgreSQL's own replication (WAL sender) processes, completely off the runtime event loop. Sync failures don't affect main server operation.
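The same replication can be configured programmatically through the syncTo and syncDatabases options shown in the API section; a sketch with a placeholder connection URL:
import { startMultiTenantServer } from 'pgserve';

// Mirror myapp and every tenant_* database to a real PostgreSQL instance
const server = await startMultiTenantServer({
  port: 8432,
  syncTo: 'postgresql://user:pass@host:5432/mydb',  // placeholder URL
  syncDatabases: 'myapp,tenant_*'
});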
pgvector (Vector Search)
pgvector is built-in — no separate installation required. Just enable it:
# Auto-enable pgvector on all new databases
pgserve --pgvector
# Combined with RAM mode for fastest vector operations
pgserve --ram --pgvector
When --pgvector is enabled, every new database automatically has the vector extension installed. No SQL setup required.
-- Create table with vector column (1536 = OpenAI embedding size)
CREATE TABLE documents (id SERIAL, content TEXT, embedding vector(1536));
-- Insert with embedding
INSERT INTO documents (content, embedding) VALUES ('Hello', '[0.1, 0.2, ...]');
-- k-NN similarity search (L2 distance)
SELECT content FROM documents ORDER BY embedding <-> $1 LIMIT 10;
See pgvector documentation for full API reference.
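From Node.js, embeddings can be passed as query parameters in pgvector's text format ('[0.1,0.2,...]'). A sketch using the pg client from the Framework Integration section; the table name, dimensions, and values are illustrative:
import pg from 'pg';

const client = new pg.Client({ connectionString: 'postgresql://localhost:8432/rag_demo' });
await client.connect();

// Tiny 3-dimensional example; real embeddings would use e.g. vector(1536)
await client.query('CREATE TABLE IF NOT EXISTS docs (id SERIAL, content TEXT, embedding vector(3))');
await client.query(
  'INSERT INTO docs (content, embedding) VALUES ($1, $2)',
  ['Hello', '[0.1,0.2,0.3]']  // vector literal in pgvector text format
);

// k-NN search by L2 distance against a query embedding
const { rows } = await client.query(
  'SELECT content FROM docs ORDER BY embedding <-> $1 LIMIT 5',
  ['[0.1,0.2,0.25]']
);
console.log(rows);
await client.end();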
If you don't use --pgvector, you can still enable pgvector manually per database:
CREATE EXTENSION IF NOT EXISTS vector;
pgvector 0.8.1 is bundled with the PostgreSQL binaries. Supports L2 distance (<->), inner product (<#>), and cosine distance (<=>).
Performance
CRUD Benchmarks
Vector Benchmarks (pgvector)
Why pgserve wins on writes: RAM mode uses /dev/shm (tmpfs), eliminating fsync latency. Vector search is CPU-bound, so RAM mode shows minimal benefit there.
Final Score
Methodology: Recall@k measured against brute-force ground truth (industry standard). PostgreSQL baseline is Docker pgvector/pgvector:pg17. RAM mode available on Linux and WSL2.
Run benchmarks yourself: bun tests/benchmarks/runner.js --include-vector
Use Cases
- Local development with zero setup (in-memory by default)
- Disposable, auto-provisioned databases for tests and CI
- AI/RAG applications using the built-in pgvector extension
- Multi-tenant setups with auto-provisioned databases, optionally replicated to a real PostgreSQL instance
Requirements
- Runtime: Node.js >= 18 (npm/npx)
- Platform: Linux x64, macOS ARM64/x64, Windows x64
Development
Contributors: This project uses Bun internally for development:
# Install dependencies
bun install
# Run tests
bun test
# Run benchmarks
bun tests/benchmarks/runner.js
# Lint
bun run lint
Contributing
Contributions welcome! Fork the repo, create a feature branch, add tests, and submit a PR.
