
@yawlabs/postgres-mcp

v0.4.0


PostgreSQL MCP server — query, schema introspection, explain, and health checks for AI assistants


@yawlabs/postgres-mcp


Query a PostgreSQL database from Claude Code, Cursor, and any MCP client. Read-only by default — writes opt in via a single env var — so an agent can't silently drop your tables.

Built and maintained by Yaw Labs.

Backstory

Anthropic's reference Postgres MCP server, @modelcontextprotocol/server-postgres, was archived in May 2025 and marked deprecated on npm in July 2025. Anthropic has not shipped a replacement. Despite the deprecation, the last published version (v0.6.2) is still pulled ~20,000 times per week — a lot of agents are pointed at an unmaintained package.

That unmaintained package also has a known, publicly documented stacked-query SQL injection (Datadog Security Labs) that bypasses its BEGIN READ ONLY wrapper with input like COMMIT; DROP SCHEMA public CASCADE;. It has never been patched at npm.

A handful of community forks have appeared, but each fills a narrow slice:

  • @zeddotdev/postgres-context-server — Zed's fork, primarily a security patch on the original shape.
  • Postgres MCP Pro (Crystal DBA) — focused on index tuning and hypothetical-index / buffer-cache diagnostics.
  • AWS Labs Postgres MCP — tied to Aurora / RDS Data API + Secrets Manager.

None of them position themselves as a general-purpose daily driver you'd hand to Claude Code or Cursor against an arbitrary Postgres: modern introspection, perf helpers, role/privilege awareness, and a write-safety posture out of the box. That's the gap @yawlabs/postgres-mcp fills.

Why this one?

  • Read-only by default — user SQL runs in a BEGIN READ ONLY transaction, so postgres itself (not string parsing) blocks writes. Opt in with ALLOW_WRITES=1.
  • Extended query protocol for all user SQL — pg_query sends user input with queryMode: 'extended', which restricts each request to a single statement. This closes the stacked-query injection class (COMMIT; DROP SCHEMA x CASCADE;) that defeated the reference server's BEGIN READ ONLY wrapper. An integration test asserts the rejection.
  • Parameterized queries — pg_query takes a params array for $1, $2, etc. No string-interpolated SQL in our code path.
  • Written from scratch, actively maintained — not a fork of the deprecated code. Unit + integration tests (npm test, npm run test:integration) run against a real Postgres; releases cut via release.sh.
  • Schema introspection built in — pg_list_schemas, pg_list_tables, pg_describe_table return columns, primary keys, foreign keys, and indexes without the agent having to remember pg_catalog joins.
  • EXPLAIN as a first-class tool — text or JSON format, with optional ANALYZE. ANALYZE for non-SELECT statements requires ALLOW_WRITES=1 and always rolls back, so the plan is real but the write doesn't persist.
  • Perf diagnostics the deprecated server never had — pg_top_queries (from pg_stat_statements), pg_seq_scan_tables, pg_unused_indexes, pg_table_bloat, pg_inspect_locks, pg_replication_status. Answer "why is this slow?" in one tool call.
  • Health snapshot — pg_health returns version, db size, connection counts, and the 10 longest-running active queries in one call.
  • Role and privilege awareness — pg_list_roles and pg_table_privileges for the common "who can touch what?" questions.
  • Instant startup — ships as a single bundled file with zero runtime dependencies. No multi-minute node_modules install on every npx cold start.
  • Result truncation — large result sets are capped at POSTGRES_MAX_ROWS (default 1000) with a truncated: true flag, so a stray SELECT * FROM events doesn't blow out the model context.
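As a concrete sketch of the parameterized path, a pg_query call can pass values through params instead of splicing them into the SQL string. The JSON envelope below is illustrative of an MCP tool call, not an exact wire format; the sql and params argument names match the tool reference further down:

```json
{
  "tool": "pg_query",
  "arguments": {
    "sql": "SELECT id, email FROM users WHERE id = $1",
    "params": [42]
  }
}
```

Because the value travels as a bind parameter, a hostile string like COMMIT; DROP SCHEMA public CASCADE; arrives as data rather than as a second statement.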

Quick start

1. Create .mcp.json in your project root

macOS / Linux / WSL:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@yawlabs/postgres-mcp"],
      "env": {
        "DATABASE_URL": "postgres://user:password@host:5432/dbname"
      }
    }
  }
}

Windows:

{
  "mcpServers": {
    "postgres": {
      "command": "cmd",
      "args": ["/c", "npx", "-y", "@yawlabs/postgres-mcp"],
      "env": {
        "DATABASE_URL": "postgres://user:password@host:5432/dbname"
      }
    }
  }
}

Why the extra step on Windows? Since Node 20, child_process.spawn cannot directly execute .cmd files (that's what npx is on Windows). Wrapping with cmd /c is the standard workaround.

2. Restart and approve

Restart Claude Code (or your MCP client) and approve the postgres MCP server when prompted.

3. (Optional) Enable writes

Read-only is the default. If you want the agent to be able to INSERT, UPDATE, DELETE, or run DDL, add ALLOW_WRITES=1 to the env block:

"env": {
  "DATABASE_URL": "postgres://...",
  "ALLOW_WRITES": "1"
}

Prefer scoping this to dev/test databases — for production, leave writes off and use migration tools out-of-band.

What can an agent do with this?

Once connected, the agent picks tools automatically based on what you ask. A few single-tool examples:

  • "Describe the users table" -> pg_describe_table -> returns kind, columns, PK, FKs, indexes.
  • "Which tables have a user_id column?" -> pg_search_columns with pattern user_id -> one call instead of iterating every table.
  • "This query is slow, why?" -> pg_explain with analyze: true -> returns the plan with actual row counts and timing.
  • "What's the slowest query we run?" -> pg_top_queries -> returns the top N from pg_stat_statements with mean/total/min/max times.
  • "Do we have any unused indexes?" -> pg_unused_indexes -> returns non-unique, non-primary indexes with zero or low scan counts + their size.
  • "Is pgvector installed?" -> pg_list_extensions -> yes/no with version.
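For instance, the slow-query question above might resolve to a pg_explain call shaped roughly like this. The envelope is illustrative; analyze matches the examples above, while the format argument name is an assumption based on the tool's text/JSON output options:

```json
{
  "tool": "pg_explain",
  "arguments": {
    "sql": "SELECT * FROM orders WHERE user_id = 42",
    "analyze": true,
    "format": "json"
  }
}
```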

The bigger leverage is multi-tool reasoning. A few real workflows:

  • Unstick a hung app. pg_inspect_locks returns blocked PID + blocking PID + the offending query, then pg_kill (ALLOW_WRITES=1 required) cancels the blocker. The agent can run both in one turn — it's the fastest path from "the app is frozen" to "back up."
  • Chase a slow page. pg_top_queries ranks the worst queries, pg_explain with analyze: true shows the plan for the top hit, pg_seq_scan_tables and pg_unused_indexes say whether the answer is "add an index here" or "drop a dead one there."
  • Oncall triage. pg_health checks connectivity + active-query count + database size; pg_inspect_locks and pg_replication_status confirm whether contention or replication lag is in play before paging the on-call DBA.
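For context, the unblocking step in the first workflow can also be done by hand with Postgres's built-in admin functions — a sketch, where 12345 is a placeholder for the blocker PID that pg_inspect_locks reports:

```sql
-- Cancel only the in-flight query on the blocking backend:
SELECT pg_cancel_backend(12345);

-- Or drop the whole connection if cancelling isn't enough:
SELECT pg_terminate_backend(12345);
```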

Tools

| Tool | Description |
|------|-------------|
| pg_query | Run a SQL query. Read-only by default; writes require ALLOW_WRITES=1. Supports parameterized queries via params. Result fields include dataTypeName (e.g. int4, jsonb) alongside dataTypeID. |
| pg_list_schemas | List non-system schemas. |
| pg_list_tables | List tables (and optionally views) in a schema with estimated row counts. Paginated via limit/offset. |
| pg_describe_table | Kind, columns, PK, outgoing FKs, incoming FKs (referenced_by), CHECK / UNIQUE / EXCLUDE constraints, indexes, and partition parent/children for a relation. |
| pg_list_views | List views and materialized views in a schema, including their SQL definitions. |
| pg_list_functions | List functions, procedures, and aggregates in a schema with signatures and return types. |
| pg_list_extensions | List installed extensions (pgvector, postgis, pg_stat_statements, etc.) with versions. |
| pg_search_columns | Find columns by name pattern across all user schemas. Case-insensitive, supports SQL LIKE wildcards. |
| pg_explain | EXPLAIN or EXPLAIN ANALYZE for a SQL statement. Text or JSON output. Optional hypothetical_indexes (requires the HypoPG extension) lets you ask "what would the plan be with these indexes?" without creating them on disk. |
| pg_health | Server version, database size, connection count, active queries, table count. |
| pg_top_queries | Top N queries by total/mean execution time. Requires the pg_stat_statements extension. |
| pg_seq_scan_tables | Tables with heavy sequential scans — missing-index candidates. |
| pg_unused_indexes | Non-unique, non-primary indexes with low scan counts — drop candidates. |
| pg_inspect_locks | Who is blocking whom right now (blocked PID, blocker PID, lock type, queries). |
| pg_list_roles | Database roles with login/superuser/createdb flags and group memberships. |
| pg_table_privileges | Who has SELECT/INSERT/UPDATE/DELETE/etc. on a table or whole schema. |
| pg_table_bloat | Tables with high dead-tuple ratios — VACUUM candidates. |
| pg_replication_status | Replication slots, connected replicas, and current WAL position. |
| pg_advisor | Rolled-up DBA lints in one call: sequence-exhaustion candidates, tables without a primary key, and (configurable) public tables with RLS disabled. The "what should I be looking at?" starting point. |
| pg_kill | Cancel a running query or terminate a backend connection. Requires ALLOW_WRITES=1. |

Configuration

All env vars are read from the MCP server's environment:

| Variable | Default | Purpose |
|----------|---------|---------|
| DATABASE_URL | (required) | PostgreSQL connection string. |
| ALLOW_WRITES | unset | Set to 1 or true to allow DML/DDL via pg_query and pg_explain ANALYZE of writes. |
| POSTGRES_STATEMENT_TIMEOUT_MS | 30000 | Per-statement timeout. |
| POSTGRES_CONNECTION_TIMEOUT_MS | 10000 | TCP connect timeout. Without this, a dead host hangs until the OS gives up (~2 minutes). |
| POSTGRES_MAX_ROWS | 1000 | Cap on rows returned by pg_query. |
| POSTGRES_POOL_MAX | 5 | Max pool connections. Set to 1 for single-threaded backends (pglite-socket, PgBouncer transaction mode). |
| POSTGRES_SSL_REJECT_UNAUTHORIZED | unset | Set to false to skip TLS cert verification (for managed DBs using private-CA certs). Connection is still encrypted. |

Supported Postgres versions

Tested on PostgreSQL 17 and 18 in CI. Should work on PG13+ -- a few tools (pg_replication_status reading wal_status, pg_top_queries reading *_exec_time) rely on columns that landed in PG13. PG12 and below are out of upstream support and not exercised here.

Connecting to managed Postgres (Supabase, Neon, RDS, etc.)

Most managed databases require TLS but serve certs signed by a private CA that Node's default trust store doesn't recognize. The symptom is one of:

  • self signed certificate in certificate chain
  • unable to get local issuer certificate
  • unable to verify the first certificate

To allow the connection while keeping traffic encrypted, add POSTGRES_SSL_REJECT_UNAUTHORIZED=false to the env block:

"env": {
  "DATABASE_URL": "postgres://user:pass@host:5432/db?sslmode=require",
  "POSTGRES_SSL_REJECT_UNAUTHORIZED": "false"
}

This disables certificate chain verification only -- the TCP connection is still TLS-encrypted end-to-end. For production setups where you can install the CA, prefer putting the cert in the Node trust store (NODE_EXTRA_CA_CERTS) over disabling verification globally.
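A sketch of that preferred setup, assuming your provider's CA bundle has been saved at a hypothetical /path/to/provider-ca.pem:

```json
"env": {
  "DATABASE_URL": "postgres://user:pass@host:5432/db?sslmode=require",
  "NODE_EXTRA_CA_CERTS": "/path/to/provider-ca.pem"
}
```

With the CA trusted, POSTGRES_SSL_REJECT_UNAUTHORIZED can stay unset and full chain verification remains on.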

Troubleshooting

DATABASE_URL is not set — Your MCP client is launching the server without the env var. On Windows especially, env vars set in bash / PowerShell profiles are not inherited by MCP servers launched via cmd. Put DATABASE_URL directly in the env block of .mcp.json.

password authentication failed — Check the username, password, and that the user has CONNECT privilege on the database. URL-encode special characters in the password (@ -> %40, # -> %23, / -> %2F).
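A quick way to get the encoded form is Node's encodeURIComponent, which applies exactly these escapes (the password p@ss#word/1 is a made-up example):

```shell
# Percent-encode a password for use inside a connection string.
node -e 'console.log(encodeURIComponent("p@ss#word/1"))'
# -> p%40ss%23word%2F1
```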

SASL: SCRAM-SERVER-FIRST-MESSAGE: client password must be a string — The password in your connection string is empty or became null after URL decoding. Re-check your connection string.

canceling statement due to statement timeout — A single query exceeded POSTGRES_STATEMENT_TIMEOUT_MS (default 30s). Increase it, narrow the query with WHERE, or add an index. This is working as designed -- the timeout exists so a runaway query cannot hang the agent.

Write blocked: this server is in read-only mode — You asked the agent to write but ALLOW_WRITES is not set. Add ALLOW_WRITES=1 to the env block of .mcp.json and restart your MCP client. Only do this for dev/test DBs.

Connection pool exhaustion with PgBouncer transaction mode or pglite-socket — These backends don't support concurrent queries on a single connection. Set POSTGRES_POOL_MAX=1 in the env block.

First query is slow, subsequent queries are fast — Expected. The pg driver lazily establishes the first connection; subsequent queries reuse the pool.

Development

Run the full suite (unit + integration) against a real Postgres:

DATABASE_URL='postgres://user:pass@host:5432/db' POSTGRES_MCP_INTEGRATION=1 npm run test:integration

The integration suite assumes a disposable database -- it creates and drops a test_fixture schema. Don't point it at anything you care about.

Windows: integration tests via WSL2

Native Postgres on Windows ARM64 is fragile (UCRT runtime gaps, missing ARM64 builds). The reliable path is a disposable Ubuntu under WSL2 with the integration suite running inside WSL (WSL2's NAT blocks the Windows host from reaching :5432, so don't try to run the tests from PowerShell):

wsl --install -d Ubuntu --no-launch
# reboot, then:
wsl -d Ubuntu -u root bash -c "apt-get update && apt-get install -y nodejs npm rsync"
wsl -d Ubuntu -u root bash /mnt/c/path/to/postgres-mcp/scripts/wsl-pg-setup.sh
wsl -d Ubuntu -u root bash /mnt/c/path/to/postgres-mcp/scripts/wsl-test-matrix.sh

wsl-pg-setup.sh installs PG17 and PG18 from the PGDG apt repo on ports 5432 and 5433, sets the postgres password to postgres, and creates postgres_mcp_test in each. wsl-test-matrix.sh rsyncs the working tree into /root/postgres-mcp, runs npm ci once, and runs the integration suite against every cluster found via pg_lsclusters.

Tear down when finished: wsl --unregister Ubuntu.

License

MIT © 2026 YawLabs