
pgserve

v2.6.10


Embedded PostgreSQL server with true concurrent connections - zero config, auto-provision databases


Quick Start

npx pgserve

Connect from any PostgreSQL client — databases auto-create on first connection:

psql postgresql://localhost:8432/myapp

Note: v2 default is the Unix socket — see Daemon mode. The TCP form above is the v1 compat path.

Naming. The npm package stays pgserve. The CLI now also ships as autopg — both bins route to the same dispatcher. Use autopg for the new console (autopg ui) and configuration surface (autopg config, autopg restart); pgserve <subcommand> keeps working as a forever alias. Settings live at ~/.autopg/settings.json and are migrated from ~/.pgserve/ automatically on first run. See Console and Configuration.

Features

Installation

# Canonical install — signed binary from GitHub Releases
curl -fsSL https://raw.githubusercontent.com/namastexlabs/pgserve/main/install.sh | bash

# Pinned version
PGSERVE_VERSION=v2.6.0 curl -fsSL .../install.sh | bash

install.sh fetches the signed tarball from GitHub Releases and verifies it via gh attestation verify (Sigstore Rekor public-good). Requires the gh CLI. pgserve no longer depends on npm — the install + upgrade path is binary tarballs all the way down.

Windows

Download pgserve-windows-x64.exe from GitHub Releases.

Double-click to run, or use CLI:

pgserve-windows-x64.exe --port 5432
pgserve-windows-x64.exe --data C:\pgserve-data

CLI Reference

autopg and pgserve are interchangeable — every subcommand routes through the same dispatcher. Use whichever you prefer; new examples in this README and in console/ use autopg.

autopg [options]                       # foreground server (alias: pgserve)
autopg daemon                          # long-lived background daemon
autopg install [--port N] [--data P]   # register pgserve under pm2
autopg uninstall                       # remove from pm2 (data dir kept)
autopg status                          # pm2 + on-disk config snapshot
autopg url | autopg port               # canonical connection string / port
autopg config <list|get|set|edit|path|init>   # manage ~/.autopg/settings.json
autopg restart                         # pm2-aware: pm2 restart pgserve, else SIGTERM+respawn
autopg ui [--port N] [--no-open]       # local web console on 127.0.0.1

v2.6 cohort verbs

The 2.6 release adds six operator-facing verbs for health probing, trust-store management, orphan-database GC, fingerprint provisioning, per-consumer app registration, and binary signature verification:

pgserve doctor [--json]                # read-only health probe
pgserve trust <list|add|remove> [...]  # manage ~/.pgserve/trust/identities.json
pgserve gc [--dry-run|--apply]         # sweep orphan databases (audit log)
pgserve provision <fingerprint>        # idempotent DB + role provisioning
pgserve create-app <slug>              # per-consumer manifest LOCK 1
pgserve verify [--slug <slug>] <bin>   # cosign verify against trust list or locked roots

Full reference: docs/migrations/v2.6-from-v2.5.md · docs/pgserve-meta.md · docs/trust-store.md.

Foreground options accepted by autopg / pgserve (no subcommand):

Options:
  --port <number>       PostgreSQL port (default: 8432)
  --data <path>         Data directory for persistence (default: in-memory)
  --ram                 Use RAM storage via /dev/shm (Linux only, fastest)
  --host <host>         Host to bind to (default: 127.0.0.1)
  --log <level>         Log level: error, warn, info, debug (default: info)
  --cluster             Force cluster mode (auto-enabled on multi-core)
  --no-cluster          Force single-process mode
  --workers <n>         Number of worker processes (default: CPU cores)
  --no-provision        Disable auto-provisioning of databases
  --sync-to <url>       Sync to real PostgreSQL (async replication)
  --sync-databases <p>  Database patterns to sync (comma-separated)
  --pgvector            Auto-enable pgvector extension on new databases
  --max-connections <n> Max concurrent connections (default: 1000)
  --help                Show help message

# Development (memory mode, auto-clusters on multi-core)
pgserve

# RAM mode (Linux only, 2x faster)
pgserve --ram

# Persistent storage
pgserve --data /var/lib/pgserve

# Custom port
pgserve --port 5433

# Enable pgvector for AI/RAG applications
pgserve --pgvector

# RAM mode + pgvector (fastest for AI workloads)
pgserve --ram --pgvector

# Sync to production PostgreSQL
pgserve --sync-to "postgresql://user:[email protected]:5432/prod"

Daemon mode

pgserve@2 ships a singleton daemon that binds a Unix control socket inside $XDG_RUNTIME_DIR/pgserve (fallback /tmp/pgserve). One daemon per host serves every consumer on the box — no port conflicts, no credentials, kernel-rooted identity. Run it under PM2 or systemd so it restarts automatically.

# Foreground (for debugging)
pgserve daemon

# Stop a running daemon
pgserve daemon stop

A second pgserve daemon invocation while the first is running exits with already running, pid N. A daemon killed with kill -9 leaves an orphan PID file + socket; the next pgserve daemon boot detects the dead pid and cleans both up automatically.

Connect from any libpq client (no host/port/user/password required — the daemon authenticates via SO_PEERCRED on accept):

psql -h "${XDG_RUNTIME_DIR:-/tmp}/pgserve" -d myapp
# or via connection URI
psql "postgresql:///myapp?host=${XDG_RUNTIME_DIR:-/tmp}/pgserve"

Supervised by PM2 — pgserve install (recommended)

pgserve install registers pgserve as a hardened pm2 process in one command. Idempotent: re-running it is a no-op when already installed.

pgserve install                    # one-shot register + start under pm2
pgserve install --port 8442        # custom port
pgserve install --data /data/pg    # custom data dir

pgserve url                        # postgres://localhost:8432/postgres
pgserve port                       # 8432
pgserve status                     # pm2 + on-disk config snapshot
pgserve uninstall                  # remove from pm2; keep data dir

Hardened defaults (tuned for production-grade Postgres workloads, not toy-machine values):

| Flag | Default | Why |
|------|---------|-----|
| --max-memory-restart | 4G | Postgres realistic working set: shared_buffers + autovacuum + connection backends. 1G OOM-kills under modest load. Override with PGSERVE_MAX_MEMORY=8G pgserve install. |
| --max-restarts | 50 | Tolerates extended outages (NATS reconnect storms, host pressure). Combined with --min-uptime, only RAPID failures count. |
| --min-uptime | 10000 ms | Restart counts against the cap only when the process crashed within 10s of starting. Healthy long-uptime crashes don't burn the budget. |
| --restart-delay | 4000 ms | Initial gap between restarts. |
| --exp-backoff-restart-delay | 100 → ~60000 ms | Exponential spread on repeated failures so we don't hammer pm2 + the host on persistent issues. |
| --kill-timeout | 60000 ms | Postgres needs time to flush WAL on graceful shutdown; 60s headroom. |
| --log-date-format | YYYY-MM-DD HH:mm:ss.SSS | Operator-friendly timestamps in pm2 logs. |
| --output / --error | ~/.pgserve/logs/pgserve-{out,error}.log | Rotates via pm2-logrotate (install separately). |

Config: ~/.pgserve/config.json (override the directory with PGSERVE_CONFIG_DIR). Memory ceiling: env-tunable via PGSERVE_MAX_MEMORY at install time.

Downstream services that need a Postgres connection can shell out to pgserve install (no-op if already running) and read the canonical URL from pgserve url instead of spinning up their own embedded pgserve.

Manual ecosystem.config.cjs (legacy)

module.exports = {
  apps: [{
    name: 'pgserve',
    script: 'pgserve',
    args: 'daemon',
    autorestart: true,
    max_memory_restart: '1G',
    env: { XDG_RUNTIME_DIR: '/run/user/1000' },
  }],
};
pm2 start ecosystem.config.cjs && pm2 save

Supervised by systemd

/etc/systemd/user/pgserve.service:

[Unit]
Description=pgserve daemon
After=default.target

[Service]
Type=simple
ExecStart=/usr/bin/env npx pgserve daemon
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target

Enable for the current user:

systemctl --user enable --now pgserve
journalctl --user -u pgserve -f

The systemd user unit inherits XDG_RUNTIME_DIR automatically; the daemon binds ${XDG_RUNTIME_DIR}/pgserve/control.sock (mode 0600, dir mode 0700) plus a .s.PGSQL.5432 symlink so off-the-shelf PostgreSQL clients connect without further configuration.

Fingerprint isolation

Each consumer is identified by a kernel-rooted fingerprint derived from the peer's SO_PEERCRED plus the resolved package.json name, collapsed to 12 hex chars. The daemon auto-creates one database per fingerprint — app_<sanitized-name>_<12hex> — and refuses to route a peer into any other database with SQLSTATE 28P01 invalid_authorization — database fingerprint mismatch.

# What `psql -l` shows on a host with three consumers:
$ psql -h "${XDG_RUNTIME_DIR:-/tmp}/pgserve" -l
          Name          |  Owner   | ...
------------------------+----------+----
 app_genie_a1b2c3d4e5f6 | postgres | ...
 app_brain_4f3e2d1c0b9a | postgres | ...
 app_omni_9876543210ab  | postgres | ...

Monorepo rule: the root package.json name wins. Every workspace under it shares one fingerprint and one database — sub-packages do not get their own. If you need separate isolation, run them from separate checkouts.

Sanitization: non-[a-z0-9] runs collapse to _, lowercased, truncated to 30 chars so the final DB name stays within PostgreSQL's 63-char limit. A name like @scope/foo bar becomes _scope_foo_bar.

Emergency kill switch: PGSERVE_DISABLE_FINGERPRINT_ENFORCEMENT=1 disables enforcement for the daemon process. Use it as a debugging tool only — every bypassed connection emits an enforcement_kill_switch_used audit event and the daemon logs a deprecation warning at boot.

Long-running apps: pgserve.persist

Default lifecycle is ephemeral: a database whose liveness_pid is dead AND whose last_connection_at is older than 24h is dropped on the next GC sweep (boot, hourly, sampled on-connect). Reaped DBs emit db_reaped_ttl or db_reaped_liveness audit events.

If your app holds state worth keeping past 24h of idle — genie's wish/agent store, internal dashboards, anything you'd be unhappy to lose — declare persistence in package.json:

{
  "name": "my-long-lived-app",
  "pgserve": { "persist": true }
}

Persisted databases are never reaped, regardless of liveness or TTL. Dev workloads with long debug cycles do not normally need this — any new connection slides the TTL window forward. Reach for pgserve.persist when the app is genuinely long-lived (production daemon, dashboard, durable agent state), not just for convenience.
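The full reap decision reduces to a small predicate. This is a sketch of the documented rules; the field names are illustrative, not pgserve's actual metadata schema:

```javascript
// Sketch of the documented reap rule: a database is dropped only when it
// is not persisted, its liveness pid is dead, AND it has been idle > 24h.
const TTL_MS = 24 * 60 * 60 * 1000;  // 24h idle window

function shouldReap({ persist, pidAlive, lastConnectionAt }, now = Date.now()) {
  if (persist) return false;                     // pgserve.persist: never reaped
  const idle = now - lastConnectionAt > TTL_MS;  // any new connection resets this
  return !pidAlive && idle;                      // BOTH conditions must hold
}
```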

Console (autopg ui)

A local web console for inspecting and editing the running cluster. Runs in-process via node:http, binds 127.0.0.1 only, single-user dev tool — no auth, no TLS, never expose it.

autopg ui                  # walk 8433–8533 picking the first free port
autopg ui --port 8500      # bind exactly 8500
autopg ui --no-open        # skip browser launch (CI / headless)

The first stateful screen — Settings — is functional today: it renders the 6-section schema (server / runtime / sync / supervision / postgres / ui), validates inline, and round-trips through ~/.autopg/settings.json with optimistic concurrency (sha256 etag + If-Match). The other 10 screens (Databases, Tables, SQL, Optimizer, Security, Ingress, Health, Sync, RLM-trace, RLM-sim) are scaffolded as [ coming soon ] placeholders — Health ships next.

The UI shells out to the CLI for every mutation (autopg config set under PUT, autopg restart under POST). The daemon stays untouched — no HTTP API, no signal-based reload — so the console works even when no daemon is running.

See console/README.md for the local dev loop and design-system source.

Configuration

The CLI is the source of truth. Settings live at ~/.autopg/settings.json (override the directory with AUTOPG_CONFIG_DIR; the legacy PGSERVE_CONFIG_DIR is still honored and falls back to ~/.pgserve/). Every write is atomic, chmod 0600, and tagged with a sha256 etag for optimistic concurrency on the UI helper's PUT path.

Schema sections (one per ~/.autopg/settings.json top-level key):

| Section | Purpose |
|---------|---------|
| server | Router port/host, backend socket, superuser credentials |
| runtime | Log level, auto-provision, pgvector, data dir |
| sync | WAL-based logical replication toggle |
| supervision | pm2 hardening defaults (memory, restart, kill timeout) |
| postgres | 15 curated GUCs (shared_buffers, wal_level, …) + _extra raw passthrough |
| ui | Console theme / phosphor / density / CRT toggle |

autopg config init                              # write defaults
autopg config list                              # KEY VALUE SOURCE table
autopg config get postgres.shared_buffers       # machine-friendly value
autopg config set postgres.shared_buffers 256MB # validates + atomic write
autopg config edit                              # opens $EDITOR on settings.json
autopg config path                              # absolute path (honors AUTOPG_CONFIG_DIR)

Precedence: default < file < env. AUTOPG_* env vars beat PGSERVE_* (the legacy form is still honored with a one-time deprecation log per process, so existing operators keep working). The console shows a yellow OVERRIDDEN BY ENV chip on rows whose env var is currently set.
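As a sketch, resolution for a single key looks like this. The dotted-key-to-env-var mapping shown is an assumption, not the documented scheme:

```javascript
// Sketch of the documented precedence: default < file < env,
// with AUTOPG_* beating legacy PGSERVE_*.
function resolve(key, defaults, file, env) {
  const suffix = key.toUpperCase().replace(/\./g, '_');  // assumed mapping
  if (env[`AUTOPG_${suffix}`] !== undefined) return env[`AUTOPG_${suffix}`];
  if (env[`PGSERVE_${suffix}`] !== undefined) return env[`PGSERVE_${suffix}`];  // legacy, deprecated
  if (file[key] !== undefined) return file[key];
  return defaults[key];
}
```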

GUC passthrough: postgres._extra is a free-form { gucName: scalar } map for any PostgreSQL setting outside the curated 15. Names must match ^[a-z][a-z0-9_]*$; values must be string / number / boolean (no newlines, no leading -). Both layers are revalidated at boot, so a typo logs a logger.warn and is dropped — postgres still starts.
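The documented constraints on _extra entries can be sketched as a validator:

```javascript
// Sketch of the documented _extra validation: names must match
// ^[a-z][a-z0-9_]*$; values must be string/number/boolean scalars
// with no newlines and no leading '-'.
function validGuc(name, value) {
  if (!/^[a-z][a-z0-9_]*$/.test(name)) return false;
  if (typeof value === 'number' || typeof value === 'boolean') return true;
  if (typeof value !== 'string') return false;
  return !value.includes('\n') && !value.startsWith('-');
}
```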

One-shot migration: on first run, if ~/.pgserve/ exists and ~/.autopg/ does not, the contents are copied (preserving mtimes) and a MIGRATED-FROM-PGSERVE.md marker is dropped in the old dir. Idempotent — second run is a no-op.

Full schema reference: docs/settings-schema.md.

Compat TCP via --listen

TCP is off by default in v2. Bring it back only when you need it (Kubernetes pods, remote sync, legacy clients that cannot speak Unix sockets) by opting in:

pgserve daemon --listen :5432
# Repeatable for multiple binds:
pgserve daemon --listen :5432 --listen 0.0.0.0:5433

TCP peers cannot use SO_PEERCRED, so they must authenticate at connect time. Issue a bearer token bound to a known fingerprint:

# Prints the token ONCE; the daemon stores only its hash.
pgserve daemon issue-token --fingerprint a1b2c3d4e5f6

# TCP client passes it via libpq application_name:
#   ?fingerprint=a1b2c3d4e5f6&token=<bearer>

# Revoke when done:
pgserve daemon revoke-token <token-id>

Audit events: tcp_token_issued, tcp_token_used, tcp_token_denied. Tokens are verified with constant-time compare. Without a valid token a TCP connection is refused — there is no anonymous TCP path.

Verify no port is bound when --listen is not set:

ss -tlnp | grep pgserve   # no rows expected

API

Daemon-first apps can let the first caller install/start the singleton and then connect through the Unix socket. The daemon derives the app identity from kernel peer credentials and routes it to that app's signed fingerprint database.

import { daemonClientOptions, ensureDaemon } from 'pgserve';
import postgres from 'postgres';

await ensureDaemon({
  dataDir: `${process.env.HOME}/.pgserve/data`,
  logLevel: 'warn',
});

const sql = postgres(daemonClientOptions());
await sql`SELECT current_database()`;

The classic TCP router API remains available for explicit v1-compatible embedded servers:

import { startMultiTenantServer } from 'pgserve';

const server = await startMultiTenantServer({
  port: 8432,
  host: '127.0.0.1',
  baseDir: null,        // null = memory mode
  logLevel: 'info',
  autoProvision: true,
  enablePgvector: true, // Auto-enable pgvector on new databases
  syncTo: null,         // Optional: PostgreSQL URL for replication
  syncDatabases: null   // Optional: patterns like "myapp,tenant_*"
});

// Get stats
console.log(server.getStats());

// Graceful shutdown
await server.stop();

Framework Integration

import pg from 'pg';

const client = new pg.Client({
  connectionString: 'postgresql://localhost:8432/myapp'
});

await client.connect();
await client.query('CREATE TABLE users (id SERIAL, name TEXT)');
await client.query("INSERT INTO users (name) VALUES ('Alice')");
const result = await client.query('SELECT * FROM users');
console.log(result.rows);
await client.end();

// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

# .env
DATABASE_URL="postgresql://localhost:8432/myapp"

# Run migrations
npx prisma migrate dev

import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: 'postgresql://localhost:8432/myapp'
});

const db = drizzle(pool);
const users = await db.select().from(usersTable); // usersTable: a table defined in your drizzle schema

Async Replication

Sync ephemeral pgserve data to a real PostgreSQL database. Uses native logical replication for zero performance impact on the hot path.

# Sync all databases
pgserve --sync-to "postgresql://user:[email protected]:5432/mydb"

# Sync specific databases (supports wildcards)
pgserve --sync-to "postgresql://..." --sync-databases "myapp,tenant_*"

Replication is handled by PostgreSQL's WAL writer process, completely off the runtime event loop. Sync failures don't affect main server operation.

pgvector (Vector Search)

pgvector is built-in — no separate installation required. Just enable it:

# Auto-enable pgvector on all new databases
pgserve --pgvector

# Combined with RAM mode for fastest vector operations
pgserve --ram --pgvector

When --pgvector is enabled, every new database automatically has the vector extension installed. No SQL setup required.

-- Create table with vector column (1536 = OpenAI embedding size)
CREATE TABLE documents (id SERIAL, content TEXT, embedding vector(1536));

-- Insert with embedding
INSERT INTO documents (content, embedding) VALUES ('Hello', '[0.1, 0.2, ...]');

-- k-NN similarity search (L2 distance)
SELECT content FROM documents ORDER BY embedding <-> $1 LIMIT 10;

See pgvector documentation for full API reference.

If you don't use --pgvector, you can still enable pgvector manually per database:

CREATE EXTENSION IF NOT EXISTS vector;

pgvector 0.8.1 is bundled with the PostgreSQL binaries. Supports L2 distance (<->), inner product (<#>), and cosine distance (<=>).

Performance

CRUD Benchmarks

Vector Benchmarks (pgvector)

Why pgserve wins on writes: RAM mode uses /dev/shm (tmpfs), eliminating fsync latency. Vector search is CPU-bound, so RAM mode shows minimal benefit there.

Final Score

Methodology: Recall@k measured against brute-force ground truth (industry standard). PostgreSQL baseline is Docker pgvector/pgvector:pg18. RAM mode available on Linux and WSL2.
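Recall@k itself is simple to compute. This is a sketch of the metric, not the benchmark runner's code:

```javascript
// Sketch of Recall@k: the fraction of the brute-force top-k results
// that the approximate index also returned in its top-k.
function recallAtK(approxIds, truthIds, k) {
  const truth = new Set(truthIds.slice(0, k));
  const hits = approxIds.slice(0, k).filter((id) => truth.has(id)).length;
  return hits / truth.size;
}
```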

Run benchmarks yourself: bun tests/benchmarks/runner.js --include-vector

Use Cases

Requirements

  • Runtime: Node.js >= 18 (npm/npx)
  • Platform: Linux x64, macOS ARM64/x64, Windows x64

Development

Contributors: This project uses Bun internally for development:

# Install dependencies
bun install

# Run tests
bun test

# Run benchmarks
bun tests/benchmarks/runner.js

# Lint
bun run lint

Contributing

Contributions welcome! Fork the repo, create a feature branch, add tests, and submit a PR.