
pg1

PostgreSQL management CLI (@exiahuang/pg1, v0.1.3).

Author: exiahuang

Requirements

  • psql / pg_dump (same version as the server)
  • python3 + pandas + openpyxl (for Excel output)
  • .env file (see below)

Optional Dependencies

For diff-csv comparison modes:

  • DuckDB: pip install duckdb (for --duckdb mode)
  • SQLite: built-in with Python (for --sqlite mode)

Install pg_dump / psql v17 (Ubuntu/Debian)

curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt-get update && sudo apt-get install -y postgresql-client-17

.env Configuration

Uses standard PostgreSQL environment variables.

PGHOST="127.0.0.1"
PGPORT="5432"
PGDATABASE="dbname"
PGUSER="user"
PGPASSWORD="pass"

#PGSSLMODE=""
#PGSSLCERT=""
#PGSSLKEY=""
#PGSSLROOTCERT=""

# Optional: override pg client binary paths
#PSQL="psql"
#PG_DUMP="pg_dump"
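Because libpq-based tools like psql and pg_dump read the PG* variables directly from the environment, a wrapper only needs to load the .env file and export its entries before invoking them. A minimal, hypothetical parser for the KEY="value" format above (illustrative only; `load_env` is not part of the package's documented API):

```python
def load_env(path=".env"):
    """Hypothetical minimal .env parser: KEY="value" lines, # starts a comment."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env
```

A real implementation would also export the parsed entries into `os.environ` so that child psql / pg_dump processes inherit them.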

Output

All commands write output to {workdir}/output/ (default: ./output/).


Commands

backup

Backup specified tables with pg_dump. Backs up all tables if none are specified.

./pg1 backup [tables...] [-w dir]

Output: output/tables_backup.sql

./pg1 backup
./pg1 backup public.users public.orders

export-counts

Export row counts of all tables to CSV.

./pg1 export-counts [-w dir]

Output: output/table_row_counts.csv


export-constraints

Export all table constraints as DROP / ADD SQL scripts.

./pg1 export-constraints [-w dir]

Output:

  • output/drop_constraints.sql
  • output/restore_constraints.sql

gen-count-sql

Generate a single UNION ALL query that counts rows in all tables at once.

./pg1 gen-count-sql [-w dir]

Output: output/count_all_tables.sql
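The generated file is essentially one query stitched together from per-table SELECT COUNT(*) fragments. A Python sketch of that generation (illustrative only; the actual tool discovers table names from the database, and `gen_count_sql` here is a hypothetical helper):

```python
def gen_count_sql(tables):
    """Build a single UNION ALL query that counts rows in every table."""
    parts = [
        f"SELECT '{t}' AS table_name, COUNT(*) AS row_count FROM {t}"
        for t in tables
    ]
    # One statement instead of N round-trips; sort the result for readability
    return "\nUNION ALL\n".join(parts) + "\nORDER BY table_name;"
```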


update

Generate and execute an UPDATE SQL, then output before/diff/after CSVs and a metrics report.

./pg1 update TABLE --set "SET clause" --where "WHERE clause" [--dry-run] [-w dir]

Output: output/{run_id}/

  • run.sql — generated SQL
  • {table}_before.csv / {table}_after.csv / {table}_diff.csv
  • execute.log

# Execute
./pg1 update public.users --set "status = 'done'" --where "status = 'active'"

# Generate SQL only (no execution)
./pg1 update public.users --set "status = 'done'" --where "status = 'active'" --dry-run

analyze

Analyze the output directory from update and print a report.

./pg1 analyze OUT_DIR [--excel] [-w dir]
./pg1 analyze output/20260323_120000
./pg1 analyze output/20260323_120000 --excel

run-sql

Execute a SQL file (INSERT / UPDATE / DELETE / SELECT), capture before/after snapshots of the target table, and report. Table, WHERE clause, and key columns are auto-detected from the SQL and the database schema.

./pg1 run-sql SQL_FILE [OPTIONS]

Options

  • --table, -t TABLE — override auto-detected table
  • --where, -W CLAUSE — override auto-detected WHERE clause used for snapshots
  • --key, -k COLUMNS — override auto-detected primary key columns (comma-separated)
  • --dry-run, -n — print detected params and generated SQL without executing
  • --field-level, -f — include field-level diff (changed_only_cols.csv)
  • --excel, -e — generate Excel diff report (report.xlsx)
  • -w dir — working directory

Auto-detection:

  • table — parsed from UPDATE t / INSERT INTO t / DELETE FROM t / FROM t / WITH ... UPDATE t
  • where — parsed from the WHERE clause in the SQL
  • key — queried from information_schema primary key constraints
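The table and WHERE detection can be pictured as a few ordered regex probes over the SQL text. This is a simplified sketch of that idea, not the package's actual parser (these patterns and function names are assumptions):

```python
import re

# Probes tried in order: DML forms first, bare FROM last
_TABLE_PATTERNS = [
    r'\bUPDATE\s+([\w."]+)',
    r'\bINSERT\s+INTO\s+([\w."]+)',
    r'\bDELETE\s+FROM\s+([\w."]+)',
    r'\bFROM\s+([\w."]+)',
]

def detect_table(sql):
    """Heuristically pull the target table out of a DML statement."""
    for pat in _TABLE_PATTERNS:
        m = re.search(pat, sql, re.IGNORECASE)
        if m:
            return m.group(1)
    return None

def detect_where(sql):
    """Grab the WHERE clause text, up to a trailing semicolon."""
    m = re.search(r'\bWHERE\b(.*?)(?:;|$)', sql, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None
```

Checking `UPDATE` before `FROM` is what makes `WITH ... UPDATE t` resolve to the update target rather than a CTE source.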

Output: output/{run_id}/

  • execute.sql — full generated psql script (BEGIN … COMMIT)
  • {table}_before.csv / {table}_after.csv
  • only_before.csv / only_after.csv / changed.csv
  • changed_only_cols.csv (with --field-level)
  • execute.log
  • report.xlsx (with --excel)

Excel sheets: summary / only_before / only_after / changed (yellow highlight on changed cells)

# Auto-detect everything
./pg1 run-sql update_users.sql

# Dry-run: show detected params + generated SQL
./pg1 run-sql update_users.sql --dry-run

# With field-level diff
./pg1 run-sql update_users.sql --field-level

# With Excel diff report
./pg1 run-sql update_users.sql --excel

diff-csv

Compare two CSVs and generate an Excel diff report. Supports multiple comparison modes:

./pg1 diff-csv CSV_A CSV_B --key COLUMNS [OPTIONS]

Options

  • --key, -k — comma-separated key columns to join rows (e.g. id,code) (required)
  • --out, -o — output Excel file path (default: ./output/diff_report.xlsx)
  • --keep — keep comparison tables after the run (for PostgreSQL mode)
  • --db, -d URL — use PostgreSQL for comparison (requires database connection)
  • --duckdb, -u — use DuckDB for in-memory comparison (requires duckdb)
  • --sqlite, -s — use SQLite for in-memory comparison (requires sqlite3)
  • -w dir — working directory

Comparison Modes

| Mode             | Option   | Description                                         |
|------------------|----------|-----------------------------------------------------|
| memory (default) | none     | Uses pandas, no external dependencies               |
| PostgreSQL       | --db     | Uses PostgreSQL temp tables                         |
| DuckDB           | --duckdb | Uses DuckDB in-memory (requires pip install duckdb) |
| SQLite           | --sqlite | Uses SQLite in-memory                               |
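The default memory mode's key-joined comparison can be sketched with a pandas outer merge (a simplified illustration under assumed column layouts, not pg1's actual code; `diff_frames` is a hypothetical name):

```python
import pandas as pd

def diff_frames(a, b, keys):
    """Sketch of a key-joined CSV diff: rows only in a, rows only in b,
    and keyed rows whose non-key columns changed."""
    merged = a.merge(b, on=keys, how="outer", indicator=True,
                     suffixes=("_before", "_after"))
    only_in_a = merged[merged["_merge"] == "left_only"]
    only_in_b = merged[merged["_merge"] == "right_only"]
    both = merged[merged["_merge"] == "both"]
    # A row "changed" if any non-key column differs between the two sides
    value_cols = [c for c in a.columns if c not in keys]
    changed_mask = False
    for col in value_cols:
        changed_mask = changed_mask | (both[f"{col}_before"] != both[f"{col}_after"])
    changed = both[changed_mask] if value_cols else both.iloc[0:0]
    return only_in_a, only_in_b, changed
```

The `suffixes` argument is what produces the col_before / col_after column pairs shown in the changed sheet.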

Output Excel sheets:

  • summary — row count summary
  • only_in_a — rows only in CSV_A
  • only_in_b — rows only in CSV_B
  • changed — changed rows (columns shown as col_before / col_after, yellow highlight)

# Memory mode (pandas, default)
./pg1 diff-csv before.csv after.csv --key id

# SQLite mode
./pg1 diff-csv before.csv after.csv --key id --sqlite

# DuckDB mode
./pg1 diff-csv before.csv after.csv --key id --duckdb

# PostgreSQL mode
./pg1 diff-csv before.csv after.csv --key id -d "postgresql://user:pass@host/db"

# Output to custom path
./pg1 diff-csv a.csv b.csv --key "id,code" --out output/diff.xlsx