@aicore/libpostgres
PostgreSQL drop-in replacement for @aicore/libmysql. Swap the import and your application runs on PostgreSQL with zero code changes.
Why
- Native JSONB -- PostgreSQL stores JSON in binary format. Reads are faster (no re-parsing), and GIN indexes cover entire documents without generated column workarounds.
- No prepared statement leak -- The `pg` driver uses the extended query protocol. The `maxPreparedStatements` ceiling that libmysql needs to prevent server-side statement accumulation does not apply.
- Expression indexes -- PostgreSQL indexes JSONB expressions directly. libmysql must create a generated column and then index it (two DDL operations). libpostgres still uses generated columns for API compatibility, but the path to pure expression indexes is open.
- Correct boolean handling -- `$.active = true` and `$.active = false` work as expected. MySQL coerces the extracted string `"true"` to `0`, which causes `$.active = 1` to return zero results. libpostgres maps booleans correctly in both `= true/false` and `= 1/0` forms.
Architecture
MySQL databases map to PostgreSQL schemas. The database.table naming convention used throughout cocodb translates directly to PostgreSQL's schema.table syntax, so every call site works unchanged.
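A minimal sketch of this name translation, using a hypothetical helper (the real implementation lives in src/utils/db.js and may differ):

```javascript
// Hypothetical sketch of the database.table -> "schema"."table" mapping
// described above; the actual helper in libpostgres may differ.
function toPgName(name) {
    // Quote each part so identifiers keep their case and any embedded
    // double quotes are escaped per SQL identifier rules.
    const quote = (part) => `"${part.replaceAll('"', '""')}"`;
    return name.split(".").map(quote).join(".");
}

console.log(toPgName("test.customers")); // -> "test"."customers"
console.log(toPgName("myapp"));          // -> "myapp"
```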
MySQL PostgreSQL
───── ──────────
CREATE DATABASE test → CREATE SCHEMA "test"
test.customers → "test"."customers"
JSON column → JSONB column
GENERATED ALWAYS AS → GENERATED ALWAYS AS ... STORED
(virtual) (with explicit type cast for INT/DOUBLE)
SHOW DATABASES → SELECT schema_name FROM information_schema.schemata
SHOW TABLES FROM db → SELECT table_name FROM information_schema.tables
SHOW INDEX FROM table → pg_index + pg_class + pg_attribute joins
document->>"$.field" → "document"->>'field' (single level)
"document"#>>'{a,b}' (nested)
? placeholders → $1, $2, $3 ...
&& → AND
|| → OR
IFNULL(...) → COALESCE(...)
REGEXP / RLIKE → ~
LIMIT offset, count → LIMIT count OFFSET offset
Installation
npm install @aicore/libpostgres
Requires PostgreSQL 12+ (for GENERATED ALWAYS AS ... STORED columns).
Performance
Benchmarked on Azure VM (125GB RAM, SSD) with tuned PostgreSQL 16 vs MySQL 8. Full durability (synchronous_commit=on).
| Operation | PostgreSQL | MySQL | Winner |
|---|---|---|---|
| PUT (small doc) | 2.35ms | 8.58ms | PG 3.7x |
| GET (small doc) | 0.059ms | 0.081ms | PG 1.4x |
| UPDATE (small doc) | 2.34ms | 0.078ms | MySQL 30x |
| UPDATE (large 16KB doc) | 2.74ms | 15.26ms | PG 5.6x |
| DELETE (small doc) | 2.32ms | 9.52ms | PG 4.1x |
| Bulk insert 50K | 2,036 ops/s | 570 ops/s | PG 3.6x |
| Queries (5 types) | 1.5-2.1ms | 2.3-6.9ms | PG 1.1-4x |
| Concurrent 100 workers | 6,519 ops/s | 1,694 ops/s | PG 3.9x |
| mathAdd 100 workers | 414 ops/s | 104 ops/s | PG 4x |
PG wins on everything except small-doc UPDATE. The 30x UPDATE gap is PostgreSQL's MVCC architecture -- full-row copy + WAL fsync per commit vs MySQL's in-place update + delta redo log. See Performance Guide for root cause analysis.
Note: Setting `asyncCommit: true` in config eliminates the UPDATE gap entirely (PG 0.069ms vs MySQL 0.079ms) by deferring WAL fsync. This is replication-safe but risks ~600ms of data loss on server crash. See Performance Guide for details.
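Assuming the same init config shape shown in Quick Start, the flag would be passed like this (sketch based on the note above; check the Performance Guide for the authoritative option name and semantics):

```javascript
// Trade a ~600ms crash window of durability for fast small-doc UPDATEs.
LibMySql.init({
    host: "localhost",
    port: "5432",
    user: "myuser",
    password: "mypassword",
    asyncCommit: true  // defers WAL fsync; replication-safe per the note above
});
```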
Quick Start
import LibMySql from "@aicore/libpostgres";
// Same config shape as libmysql, plus optional `database` field
LibMySql.init({
host: "localhost",
port: "5432",
user: "myuser",
password: "mypassword",
database: "postgres" // defaults to "postgres" if omitted
});
await LibMySql.createDataBase("myapp");
await LibMySql.createTable("myapp.users");
const docId = await LibMySql.put("myapp.users", {
name: "Alice",
age: 30,
active: true
});
const doc = await LibMySql.get("myapp.users", docId);
console.log(doc);
// { name: 'Alice', age: 30, active: true, documentId: '...' }
// CocoDB query language works unchanged
const results = await LibMySql.query("myapp.users", "$.age >= 25 AND $.active = true");
LibMySql.close();
API
Every function matches libmysql's signature exactly.
Connection
| Function | Description |
|----------|-------------|
| init(config, logger?) | Connect to PostgreSQL. Config: {host, port, user, password, database?, connectionLimit?} |
| close() | Close the connection pool |
Database / Table Management
| Function | Description |
|----------|-------------|
| createDataBase(name) | Create a schema |
| deleteDataBase(name) | Drop a schema (CASCADE) |
| createTable(name) | Create a table (documentID VARCHAR(32) PK + document JSONB) |
| deleteTable(name) | Drop a table (IF EXISTS CASCADE) |
| listDatabases() | List all user schemas |
| listTables(dbName) | List tables in a schema |
| getTableIndexes(tableName) | Get index metadata with JSON field reverse mapping |
Document CRUD
| Function | Description |
|----------|-------------|
| put(table, document) | Insert document, returns 32-char hex ID |
| get(table, documentID) | Get document by ID |
| update(table, documentID, document, condition?) | Replace document, optional CocoDB condition |
| deleteKey(table, documentID, condition?) | Delete by ID, optional condition |
| deleteDocuments(table, queryString, indexedFields?) | Bulk delete by CocoDB query |
Querying
| Function | Description |
|----------|-------------|
| query(table, queryString, indexedFields?, options?) | CocoDB query with optional pagination |
| getFromNonIndex(table, queryObject?, options?) | Scan query (no index) |
| getFromIndex(table, queryObject, options?) | Query using indexed generated columns |
Indexing
| Function | Description |
|----------|-------------|
| createIndexForJsonField(table, field, dataType, isUnique?, isNotNull?) | Create generated column + index |
| DATA_TYPES | { DOUBLE: 'DOUBLE PRECISION', VARCHAR: fn(n), INT: 'INT' } |
Field Operations
| Function | Description |
|----------|-------------|
| mathAdd(table, documentID, increments, condition?) | Atomic numeric increment on JSON fields |
CocoDB Query Language
The query language is unchanged from libmysql. $ references the JSON document, $.field references a field.
// Simple comparison
"$.age > 30"
// String matching
"$.name = 'Alice'"
"$.city LIKE 'New%'"
// Boolean (works correctly -- both forms supported)
"$.active = true"
"$.active = false"
"$.active = 1" // true
"$.active = 0" // false
// Logical operators
"$.age > 30 AND $.active = true"
"$.a = 1 OR $.b = 2"
"NOT($.deleted = true)"
"$.age > 20 && $.score < 100" // && maps to AND
"$.a = 1 || $.b = 2" // || maps to OR
// Nested fields
"$.location.city = 'Tokyo'"
"$.a.b.c.d = 'deep'"
// Functions
"ROUND($.price) = 20"
"UPPER($.name) = 'ALICE'"
"ABS($.balance) > 100"
// Range
"$.score BETWEEN 50 AND 100"
// Conditional operations
await update(table, docId, newDoc, "$.version = 3");
await deleteKey(table, docId, "$.status = 'archived'");
await mathAdd(table, docId, {views: 1}, "$.active = true");
Using with cocodb
To swap libmysql for libpostgres in cocodb without modifying source:
# In libpostgres directory, create a shim:
mkdir -p libpostgres-shim/src
cat > libpostgres-shim/package.json << 'EOF'
{"name":"@aicore/libmysql","version":"1.0.50","type":"module","main":"src/index.js"}
EOF
cat > libpostgres-shim/src/index.js << 'EOF'
export { default } from '../../libpostgres/src/index.js';
export * from '../../libpostgres/src/utils/db.js';
EOF
cd libpostgres-shim && sudo npm link
cd /path/to/cocodb && npm link @aicore/libmysql
Update the config to use PostgreSQL:
{
"mysql": {
"host": "localhost",
"port": "5432",
"user": "pguser",
"password": "pgpassword",
"database": "postgres"
}
}
No cocodb source files need to change.
Testing
Test Suites
| Suite | File | Tests | Description |
|-------|------|-------|-------------|
| Integration | libpostgres-test.spec.js | 51 | All API operations against PostgreSQL |
| Dual-DB | dual-db-test.spec.js | 7 | Same operations on MySQL + PostgreSQL, results compared |
| Formal Verification | formal-verification-test.spec.js | 100 | Edge cases across 9 categories, dual-DB comparison |
| cocodb HTTP API | cocodb api-test.spec.js | 35 | Full cocodb HTTP endpoint tests via libpostgres shim |
| cocodb WebSocket API | cocodb ws-test.spec.js | 38 | Full cocodb WebSocket endpoint tests via libpostgres shim |
| Total | | 231 | |
Running Tests
# Integration tests (requires local PostgreSQL)
PG_HOST=localhost PG_PORT=5432 PG_USER=testuser PG_PASSWORD=testpass PG_DATABASE=postgres \
npm run test:integ
# Dual-DB tests (requires both MySQL and PostgreSQL)
# Also needs libmysql at /path/to/libmysql
PG_HOST=localhost PG_PORT=5432 PG_USER=testuser PG_PASSWORD=testpass PG_DATABASE=postgres \
npx mocha test/integration/dual-db-test.spec.js --timeout=60000
# Formal verification (100 edge case tests, requires both databases)
PG_HOST=localhost PG_PORT=5432 PG_USER=testuser PG_PASSWORD=testpass PG_DATABASE=postgres \
npx mocha test/integration/formal-verification-test.spec.js --timeout=120000 --exit
Formal Verification Categories
The 100-test formal verification suite runs each operation on both MySQL and PostgreSQL and compares results:
| Category | Tests | Covers |
|----------|-------|--------|
| A. JSON Value Types | 15 | null, 0, false, empty string/object/array, nested nulls, large integers, floats, negative zero, long strings, arrays |
| B. Query Behavior | 20 | Case sensitivity, LIKE, trailing spaces, unicode, emoji, escaped quotes, BETWEEN, IS NULL, NOT, OR/AND, boolean comparison, nested fields, ROUND |
| C. Scan/Index Queries | 10 | Falsy values, pagination, deep nesting, indexed nested fields, empty queries, multi-field filters |
| D. mathAdd | 10 | Float precision, new fields, conditions, zero/negative increments, null fields, nested keys, 100 concurrent increments, large numbers |
| E. Conditional Update/Delete | 10 | NULL conditions, AND/OR, non-existent fields, booleans, complex NOT, concurrency, empty docs, type changes, zero values |
| F. Index/Generated Columns | 10 | Null docs, missing fields, unique NULL semantics, mixed types, deep nesting, metadata comparison, long values, drop/recreate, LIKE on indexed fields |
| G. Stress Tests | 10 | 10-level nesting, 100 fields, 1MB docs, all JSON types, 10K bulk insert, 1K batch cycle, 100 rapid create/drop, nested arrays, special chars, unicode |
| H. Error Parity | 10 | Error codes and messages for all failure modes match between MySQL and PostgreSQL |
| I. Known Bugs | 5 | Nested mathAdd keys, LOG semantics, null bytes, field initialization |
Known Behavioral Differences
These are inherent differences between MySQL and PostgreSQL that cannot be resolved at the library level:
| Behavior | MySQL | PostgreSQL | Notes |
|----------|-------|------------|-------|
| JSON null extraction | ->> returns string "null" | ->> returns SQL NULL | Affects IS NULL, NOT() queries. Avoid storing JSON null values. |
| Case sensitivity | Case-insensitive (default collation) | Case-sensitive | 'alice' = 'Alice' matches in MySQL, not in PG. Use LOWER() for case-insensitive queries. |
| Trailing spaces | Ignored in comparison | Significant | 'Alice ' = 'Alice' matches in MySQL, not in PG. |
| ROUND half values | Rounds half away from zero | Banker's rounding (half to even) | ROUND(20.50) = 21 in MySQL, 20 in PG. |
| LOG function | Natural log (ln) | Base-10 log | LOG(100) = 4.605 in MySQL, 2.0 in PG. Use LN() for natural log in PG. |
| Null bytes | Accepted in JSON strings | Rejected (\u0000 not allowed in JSONB) | Avoid null bytes in string values. |
| Boolean = 0/= 1 | Broken (coerces "true" to 0) | Correct (true=1, false=0) | PG is more correct here. Use = true/= false for clarity. |
| mathAdd nested key "a.b" | Treats as nested path $.a.b | Treats as literal key "a.b" | Use flat keys or implement path splitting if needed. |
| Row ordering | Clustered index order | Heap order | Without ORDER BY, paginated results may differ. |
| Drop non-existent DB error code | ER_DB_DROP_EXISTS | ER_BAD_DB_ERROR | Minor error code difference. |
Project Structure
src/
index.js # Entry point (re-exports db.js as default)
utils/
db.js # All 20 public API functions (PostgreSQL implementation)
query.js # CocoDB query tokenizer + PostgreSQL SQL transformer
constants.js # Column names, data types, limits
sharedUtils.js # Field name validation, MD5 column naming
test/
integration/
libpostgres-test.spec.js # 51 integration tests
dual-db-test.spec.js # 7 dual-database comparison tests
formal-verification-test.spec.js # 100 edge case verification tests
setupIntegTest.js # PostgreSQL test configuration
unit/
setup-mocks.js             # pg.Pool mock for unit tests
Translation Details
Query Transformer (query.js)
The CocoDB query language is parsed by the same tokenizer as libmysql. The difference is in the SQL generation pass:
Variable tokens (`$.field`) are converted to PostgreSQL JSONB extraction operators with context-aware type casting:
- Numeric context (comparison with a number, inside a math function, BETWEEN): `(CASE WHEN jsonb_typeof(...) = 'boolean' THEN ... ELSE (...)::numeric END)`
- Boolean context (comparison with `true`/`false`): `(...)::boolean`
- Text context (default): `"document"->>'field'`
Operators: `&&` and `||` are mapped to `AND` and `OR` (PostgreSQL uses `||` for string concatenation).
String escaping: Backslash-escaped quotes (`\'`) are converted to PostgreSQL's doubled-quote style (`''`) for `standard_conforming_strings=on` compatibility.
Function mapping: `IFNULL` -> `COALESCE`, `LCASE` -> `LOWER`, `UCASE` -> `UPPER`, `RAND` -> `RANDOM`, `TRUNCATE` -> `TRUNC`.
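A condensed sketch of the operator and function mapping pass (hypothetical, regex-based; the real transformer in query.js is token-based and so will not rewrite operators that appear inside string literals):

```javascript
// Simplified, hypothetical version of the mapping pass described above.
const FUNCTION_MAP = {
    IFNULL: "COALESCE",
    LCASE: "LOWER",
    UCASE: "UPPER",
    RAND: "RANDOM",
    TRUNCATE: "TRUNC"
};

function mapOperatorsAndFunctions(query) {
    // && / || become AND / OR (|| is string concatenation in PostgreSQL)
    let out = query.replaceAll("&&", "AND").replaceAll("||", "OR");
    for (const [from, to] of Object.entries(FUNCTION_MAP)) {
        out = out.replaceAll(`${from}(`, `${to}(`);
    }
    return out;
}

console.log(mapOperatorsAndFunctions("IFNULL($.a, 0) > 1 && $.b = 2"));
// -> COALESCE($.a, 0) > 1 AND $.b = 2
```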
Error Mapping (db.js)
PostgreSQL errors are wrapped in standard Error objects with MySQL-compatible error codes:
| PostgreSQL Code | MySQL Code | Condition |
|----------------|------------|-----------|
| 42P01 | ER_NO_SUCH_TABLE | Table/relation does not exist |
| 23505 | ER_DUP_ENTRY | Unique constraint violation |
| 23502 | ER_BAD_NULL_ERROR | NOT NULL constraint violation |
| 42P06 | ER_DB_CREATE_EXISTS | Schema already exists |
| 3F000 | ER_BAD_DB_ERROR | Schema does not exist |
Error messages are formatted to match MySQL's output (e.g., "Duplicate entry 'value' for key 'table.column'").
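The mapping can be pictured as a small lookup table (sketch only; the actual wrapping logic in db.js also rewrites the messages as described above):

```javascript
// Sketch of the PostgreSQL -> MySQL error-code translation table above.
const PG_TO_MYSQL_ERROR = {
    "42P01": "ER_NO_SUCH_TABLE",     // table/relation does not exist
    "23505": "ER_DUP_ENTRY",         // unique constraint violation
    "23502": "ER_BAD_NULL_ERROR",    // NOT NULL constraint violation
    "42P06": "ER_DB_CREATE_EXISTS",  // schema already exists
    "3F000": "ER_BAD_DB_ERROR"       // schema does not exist
};

function toMySqlError(pgError) {
    const err = new Error(pgError.message);
    // Fall back to the raw SQLSTATE if a code has no mapping
    err.code = PG_TO_MYSQL_ERROR[pgError.code] ?? pgError.code;
    return err;
}

const e = toMySqlError({ code: "23505", message: "duplicate key value" });
console.log(e.code); // -> ER_DUP_ENTRY
```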
mathAdd (db.js)
MySQL's JSON_SET(document, '$.key1', val1, '$.key2', val2) accepts variadic key-value pairs. PostgreSQL's jsonb_set takes one path at a time, so multiple increments are nested:
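A sketch of how that nesting can be built up for an arbitrary increments object (hypothetical helper; the actual builder in db.js should bind-parameterize values rather than inlining them as this simplified version does):

```javascript
// Hypothetical sketch: wrap one jsonb_set call per increment, innermost first.
function buildIncrementExpr(increments) {
    let expr = '"document"';
    for (const [key, delta] of Object.entries(increments)) {
        expr = `jsonb_set(${expr}, '{${key}}', ` +
               `to_jsonb(COALESCE(("document"->>'${key}')::numeric, 0) + ${delta}))`;
    }
    return expr;
}

// Two increments produce two nested jsonb_set calls, as in the SQL below.
console.log(buildIncrementExpr({ age: 2, score: 10 }));
```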
-- MySQL
UPDATE t SET document = JSON_SET(document,
'$.age', IFNULL(JSON_EXTRACT(document, '$.age'), 0) + 2,
'$.score', IFNULL(JSON_EXTRACT(document, '$.score'), 0) + 10
) WHERE documentID = ?
-- PostgreSQL
UPDATE t SET "document" = jsonb_set(
jsonb_set(
"document",
'{age}',
to_jsonb(COALESCE(("document"->>'age')::numeric, 0) + 2)
),
'{score}',
to_jsonb(COALESCE(("document"->>'score')::numeric, 0) + 10)
) WHERE "documentID" = $1
License
AGPL-3.0-or-later
