@nds-stack/bunql
v0.1.0-alpha.6
Lightweight SQLite wrapper for Bun — queued writes, serialized transactions, SQLITE_BUSY handling.
Table of Contents
- Why bunql
- Design Goals
- When to Use
- When Not to Use
- Installation
- Quick Start
- Examples
- API
- Architecture
- Compared to Raw bun:sqlite
- Benchmarks
- Limitations
- Stability
- License
Why bunql
Problem: SQLite allows only one writer at a time. Concurrent writes produce SQLITE_BUSY errors. Developers must manually implement retry logic, queue writes, and serialize transactions — error-prone boilerplate that every SQLite project reinvents.
Solution: bunql wraps bun:sqlite with a WriteQueue that serializes all write operations. Reads remain parallel and lock-free (WAL mode). Transactions are serialized with automatic rollback. The result: safe concurrency with zero application-level retry logic.
const db = new BunQL("./app.db");
// 100 concurrent writes — safe by default, no SQLITE_BUSY
const writes = Array.from({ length: 100 }, (_, i) =>
db.run("INSERT INTO logs (message) VALUES (?)", [`log-${i}`])
);
await Promise.all(writes);
Design Goals
- Minimal abstraction — A thin, transparent layer over bun:sqlite. No magic. No ORM.
- Zero-config concurrency — Writes are queued, reads are parallel. Out of the box.
- Production-first — Error chains preserved (error.cause). Retry with backoff. Graceful shutdown.
- Bun-native — Uses bun:sqlite, Bun.sleep(), queueMicrotask. No Node.js polyfills.
- Single-file mental model — One BunQL instance, one database connection. Predictable behavior.
When to Use
- You need SQLite with concurrent writes from a Bun application.
- You want serialized transactions without manual retry logic.
- You want a lightweight alternative to heavier database wrappers.
- You need embedded storage for a Bun service, CLI tool, or single-process server.
When Not to Use
| Scenario | Recommendation |
|----------|---------------|
| High write throughput (>1000/s) | Use PostgreSQL or MySQL. SQLite is single-writer. |
| Multi-process access | Use a client-server database, or coordinate via external locking. |
| Distributed systems | SQLite is embedded, not networked. Use a network database. |
| ORM features needed | Consider Drizzle or Kysely with the bun:sqlite driver. |
| Node.js / Deno runtime | bunql is Bun-only. Use better-sqlite3 for Node.js. |
Installation
bun add @nds-stack/bunql
Quick Start
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
// Create table
await db.run(
"CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
);
// Insert
await db.run("INSERT INTO users (name) VALUES (?)", ["Alice"]);
// Query (synchronous, uses statement cache)
const users = db.query<{ id: number; name: string }>(
"SELECT * FROM users WHERE name = ?",
["Alice"]
);
// → { rows: [{ id: 1, name: "Alice" }], columns: ["id", "name"], durationMs: 0.12 }
// Transaction (automatically rolls back on error)
await db.transaction(async (tx) => {
await tx.run("INSERT INTO users (name) VALUES (?)", ["Bob"]);
await tx.run("INSERT INTO users (name) VALUES (?)", ["Charlie"]);
});
// Prepared statement (cached, reusable)
const stmt = db.prepare<{ id: number; name: string }, [string]>(
"SELECT * FROM users WHERE name = ?"
);
const bob = stmt.get("Bob");
// Batch (atomic multi-write transaction)
await db.batch([
{ sql: "INSERT INTO users (name) VALUES (?)", params: ["Dave"] },
{ sql: "INSERT INTO users (name) VALUES (?)", params: ["Eve"] },
]);
// Exec — multi-statement SQL (schema files, migrations)
await db.exec(`
CREATE TABLE IF NOT EXISTS audit (id INTEGER PRIMARY KEY, msg TEXT);
INSERT INTO audit VALUES (1, 'migration v2 applied');
`);
// Raw access — straight to bun:sqlite for custom PRAGMAs / VACUUM
db.raw.run("PRAGMA cache_size=-8000");
db.raw.run("VACUUM");
// Graceful shutdown
await db.close();
Examples
Concurrent Writes
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
const writes = Array.from({ length: 100 }, (_, i) =>
db.run("INSERT INTO logs (message) VALUES (?)", [`event-${i}`])
);
await Promise.all(writes);
// All 100 writes succeed, serialized by the queue.
Transaction with Error Recovery
import { BunQL, TransactionError } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
try {
await db.transaction(async (tx) => {
await tx.run("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
await tx.run("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
});
} catch (error) {
if (error instanceof TransactionError) {
console.error("Transaction failed:", error.cause);
// error.cause contains the original error
}
}
Event Monitoring
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db", {
retry: { maxRetries: 3 },
events: {
onBusy: (attempt, delayMs) => {
console.log(`Busy, retrying in ${delayMs}ms (attempt ${attempt + 1})`);
},
onDrain: () => console.log("Write queue drained"),
onError: (err) => console.error("Operation failed:", err),
},
hooks: {
beforeWrite: (sql) => console.log("Writing:", sql),
afterWrite: (sql, _params, ms) => console.log(` took ${ms.toFixed(1)}ms`),
},
});
Exec (Multi-Statement SQL)
Load a .sql schema file containing many statements at once:
import { BunQL } from "@nds-stack/bunql";
import { readFileSync } from "fs";
const db = new BunQL("./app.db");
// Load schema file — every statement runs serially via the WriteQueue
const schema = readFileSync("./schema.sql", "utf-8");
await db.exec(schema);
Batch Inside Transaction
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
await db.transaction(async (tx) => {
await tx.batch([
{ sql: "INSERT INTO users (name) VALUES (?)", params: ["Alice"] },
{ sql: "INSERT INTO users (name) VALUES (?)", params: ["Bob"] },
]);
});
Raw Database Access
Direct access to the Database instance from bun:sqlite, for PRAGMAs or operations not covered by the API:
import { BunQL } from "@nds-stack/bunql";
import type { Database } from "bun:sqlite";
const db = new BunQL("./app.db");
// Get the underlying Database instance directly
const raw: Database = db.raw;
raw.run("PRAGMA cache_size=-8000");
raw.run("PRAGMA synchronous=FULL");
raw.exec("VACUUM");
Reader Pool (Parallel Reads)
Multiple read-only connections for parallel reads:
import { BunQL } from "@nds-stack/bunql";
// Pool of 3 read-only connections, round-robin
const db = new BunQL("./app.db", { readerPool: 3 });
// Reads are distributed automatically — parallel-safe
const users = db.query("SELECT * FROM users");
const posts = db.query("SELECT * FROM posts");
await db.close();
FTS5 Full-Text Search
Full-text search via SQLite's built-in FTS5 (no extra dependencies):
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
// Setup
await db.fts.create("articles", ["title", "body"]);
// Insert
await db.fts.insert("articles", {
title: "Hello SQLite",
body: "SQLite FTS5 is a powerful full-text search engine",
});
// Search with ranking + snippet
const results = db.fts.search("articles", "sqlite", {
limit: 10,
snippet: { startTag: "<b>", endTag: "</b>" },
});
// Index maintenance
await db.fts.optimize("articles");
await db.fts.rebuild("articles");
await db.fts.drop("articles");
Maintenance & Auto-Scheduling
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db", {
maintenance: {
checkpoint: { enabled: true, pagesThreshold: 1000, mode: "TRUNCATE" },
vacuum: { enabled: true, mode: "incremental", pagesPerStep: 100 },
backup: { enabled: true, intervalMs: 86_400_000, path: "./backups/" },
},
slowQueryThreshold: 100, // ms — log queries slower than this
events: {
onSlowQuery: (sql, ms) => console.warn(`Slow query (${ms}ms):`, sql),
},
});
Vacuum
import { BunQL } from "@nds-stack/bunql";
const db = new BunQL("./app.db");
// Full vacuum (blocking)
await db.vacuum();
// Incremental vacuum (non-blocking, page-at-a-time)
const result = await db.vacuum({ incremental: true, pagesPerStep: 100 });
console.log(`Reclaimed ${result.pagesReclaimed} pages`);
API
Constructor
new BunQL(path: string, options?: BunQLOptions)
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| wal | boolean | true | Enable WAL journal mode |
| readonly | boolean | false | Open in read-only mode |
| busyTimeout | number | 5000 | SQLite busy timeout (ms) |
| synchronous | 'OFF' \| 'NORMAL' \| 'FULL' \| 'EXTRA' | 'NORMAL' | Synchronous mode (NORMAL recommended for WAL) |
| cacheSize | number | -2000 | Page cache size (negative = KB, -2000 = 2MB) |
| foreignKeys | boolean | true | Enforce FOREIGN KEY constraints |
| retry | RetryConfig | — | Retry policy for SQLITE_BUSY |
| readerPool | number | 0 | Number of read-only connections for parallel reads (0 = disabled) |
| maintenance | MaintenanceConfig | — | Auto-scheduler for checkpoint, vacuum, backup, integrity check |
| slowQueryThreshold | number | 0 | Slow query threshold in ms (0 = disabled). Triggers onSlowQuery event |
| pragma | { autoVacuum? } | — | PRAGMA options like autoVacuum |
| logger | Logger | — | Logger (console-compatible) |
| hooks | BunQLHooks | — | Lifecycle callbacks |
| events | EventHandlers | — | Event handlers (includes onSlowQuery) |
RetryConfig
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| maxRetries | number | 5 | Maximum retry attempts |
| baseDelay | number | 50 | Base delay (ms). Actual delay: baseDelay × 2^attempt |
| maxDelay | number | 1000 | Maximum delay cap |
| jitter | boolean | true | Random ±50% jitter on delay |
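The retry policy above implies a simple delay formula: delay = min(baseDelay × 2^attempt, maxDelay), with optional ±50% jitter. A minimal sketch of that formula (illustrative only, not the library's source; the `retryDelay` helper name is hypothetical):

```typescript
// Sketch of the documented retry-delay formula (illustrative, not bunql's
// actual implementation): exponential backoff, capped, with ±50% jitter.
function retryDelay(
  attempt: number,
  { baseDelay = 50, maxDelay = 1000, jitter = true } = {}
): number {
  // Exponential growth, capped at maxDelay.
  const capped = Math.min(baseDelay * 2 ** attempt, maxDelay);
  if (!jitter) return capped;
  // ±50% jitter: a random value in [0.5 × capped, 1.5 × capped).
  return capped * (0.5 + Math.random());
}

// Without jitter the sequence is deterministic:
console.log([0, 1, 2, 3, 4, 5].map((a) => retryDelay(a, { jitter: false })));
// → [ 50, 100, 200, 400, 800, 1000 ]
```

Jitter spreads out retries from concurrent callers so they do not all hammer the lock at the same instant, which is why it defaults to on.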
Methods
| Method | Returns | Description |
|--------|---------|-------------|
| query(sql, params?) | QueryResult<T> | Read query. Parallel-safe, uses statement cache. |
| run(sql, params?) | Promise<RunResult> | Write query. Serialized via queue, with retry. |
| transaction(callback) | Promise<T> | Serialized transaction. Auto-rollback on error. |
| prepare(sql) | Statement<T, P> | Cached prepared statement. |
| batch(operations) | Promise<RunResult[]> | Atomic multi-write transaction. |
| exec(sql) | Promise<void> | Multi-statement SQL (schema files, migrations). Serialized via queue. |
| walStatus() | Promise<WalStatus> | WAL file size, page info, checkpoint requirement. |
| checkpoint(mode) | Promise<CheckpointResult> | Explicit WAL checkpoint (PASSIVE, FULL, RESTART, or TRUNCATE). |
| backup(path) | Promise<BackupResult> | Online backup via VACUUM INTO. Safe, queue-aware. |
| raw | Database | Getter — direct access to the underlying bun:sqlite instance. |
| fts | FTS5Helper | Getter — FTS5 search helper (create, search, insert, delete, update, rebuild, merge, optimize, drop). |
| metrics | BunQLMetrics | Getter — real-time operation counters (writes, reads, txs, queue). |
| cacheStats | CacheStats | Getter — statement cache hit/miss/size/rate. |
| vacuum(opts?) | Promise<VacuumResult> | Full or incremental vacuum. Returns reclaimed pages count. |
| close() | Promise<void> | Graceful shutdown. Drains queue, finalizes statements, closes DB. |
Result Types
interface QueryResult<T> {
rows: T[]; // Result rows
columns: string[]; // Column names
durationMs: number; // Query execution time (ms)
}
interface RunResult {
changes: number; // Rows modified
lastInsertRowid: number | bigint | null; // Last inserted row ID
durationMs: number; // Execution time (ms)
}
interface Statement<T, P extends unknown[]> {
all(...params: P): T[];
get(...params: P): T | undefined;
run(...params: P): Promise<RunResult>;
finalize(): void;
}
interface BunQLMetrics {
writes: { total: number; failed: number; retried: number };
reads: { total: number };
queue: { currentSize: number; peakSize: number; totalEnqueued: number };
transactions: { committed: number; rolledBack: number };
}
interface CacheStats {
size: number;
hits: number;
misses: number;
hitRate: number;
}
interface WalStatus {
walSizePages: number;
pageSize: number;
pageCount: number;
checkpointRequired: boolean;
lastCheckpointPages: number;
}
interface CheckpointResult {
pagesCheckpointed: number;
walSizeBytes: number;
}
interface BackupResult {
size: number;
durationMs: number;
}
interface VacuumResult {
pagesReclaimed: number;
durationMs: number;
}
interface FTSResult {
rank: number;
[column: string]: unknown;
}
Architecture
┌──────────────────────────────────────────────────────────┐
│ User Code │
│ db.query() db.run() db.exec() db.transaction() raw │
└──────┬──────────┬──────────┬───────────────┬─────────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌──────────┐ ┌────────────┐ ┌──────────────┐ ┌──────────┐
│ Statement │ │ WriteQueue │ │ Transaction │ │ raw │
│ Cache │ │ (FIFO) │ │ Manager │ │ (getter) │
│ (LRU/100)│ │ (O(1) │ │ +SAVEPOINT │ │ direct │
│ │ │ deque) │ │ │ │ access │
└────┬─────┘ └─────┬──────┘ └──────┬───────┘ └────┬─────┘
│ │ │ │
└──────────────┴────────────────┴───────────────┘
│
▼
┌─────────────────┐
│ bun:sqlite │
│ (WAL mode) │
│ + PRAGMA opts │
└─────────────────┘
Write Flow
- run() enqueues the operation into the WriteQueue (FIFO)
- Queue processes one operation at a time (microtask-deferred)
- Each write passes through RetryPolicy (exponential backoff for SQLITE_BUSY)
- Retries exhausted → BusyError with the original error as cause
Transaction Flow
- transaction() enters the WriteQueue (serialized with writes)
- BEGIN IMMEDIATE — prevents concurrent writers
- Callback receives a TransactionContext with run()/query()/batch()/prepare()
- Success → COMMIT. Failure → ROLLBACK (original error in cause)
- Nested transactions use SQLite SAVEPOINT for isolation
Key Design Decisions
| Decision | Rationale |
|----------|-----------|
| Single DB connection | SQLite is single-writer. Multiple connections don't help writes. |
| WAL mode default | Enables concurrent reads during writes. |
| Reads bypass queue | Reads execute directly — never blocked by writes. |
| raw getter exposed | Users need an escape hatch for custom PRAGMAs, VACUUM, etc. |
| Linked-list queue | yocto-queue gives O(1) dequeue instead of Array.shift()'s O(n). |
| Microtask-deferred queue | All synchronous enqueues complete before processing starts. |
| Error chain preserved | error.cause always contains the original error. |
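The "microtask-deferred queue" decision can be made concrete with a toy queue. This is a minimal sketch of the idea only, not bunql's actual WriteQueue; `TinyWriteQueue` is a hypothetical name, and it uses Array.shift() where the real design uses a yocto-queue linked list for O(1) dequeue:

```typescript
// Toy FIFO write queue: enqueue returns a promise immediately, and
// processing starts on a microtask, so a burst of synchronous enqueues
// all land in order before the first operation runs.
class TinyWriteQueue {
  private tasks: Array<() => Promise<void>> = [];
  private running = false;

  enqueue<T>(op: () => T | Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.tasks.push(async () => {
        try {
          resolve(await op());
        } catch (err) {
          reject(err);
        }
      });
      if (!this.running) {
        this.running = true;
        // Defer: all synchronous enqueues complete before draining starts.
        queueMicrotask(() => this.drain());
      }
    });
  }

  private async drain(): Promise<void> {
    while (this.tasks.length > 0) {
      await this.tasks.shift()!(); // one operation at a time, FIFO
    }
    this.running = false;
  }
}

// Ten "writes" enqueued synchronously execute strictly in order:
const q = new TinyWriteQueue();
const order: number[] = [];
Promise.all(
  Array.from({ length: 10 }, (_, i) => q.enqueue(() => order.push(i)))
).then(() => console.log(order)); // logs the numbers 0 through 9 in FIFO order
```

The microtask deferral is what makes the Promise.all pattern from the Quick Start safe: every call to enqueue registers its slot before any operation touches the database.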
Compared to Raw bun:sqlite
| Aspect | bun:sqlite | @nds-stack/bunql |
|--------|-------------|-------------------|
| API surface | Low-level, direct | Same SQL, added convenience |
| Write concurrency | Manual retry needed | Automatic queue + retry |
| Transactions | Manual BEGIN/COMMIT | Scoped callbacks with auto-rollback |
| Error handling | Raw SQLite errors | Typed BunQLError hierarchy with cause |
| Reads | Direct | Cached (LRU, max 100) |
| Prepared stmts | Manual manage | Auto-cached, reused |
| Graceful shutdown | Manual | Queue drain + cache finalize |
| Bundle size | Built-in | +22.7KB core / +6.5KB server |
bunql is not a replacement for bun:sqlite — it's a safety layer on top. You still write raw SQL. The wrapper handles what developers consistently get wrong: concurrency, error recovery, and resource cleanup.
Benchmarks
Environment: Bun v1.3.13, Windows x64, 500 iterations per test.
Both benchmarks use identical PRAGMA settings: WAL, synchronous=NORMAL, cache_size=-2000, foreign_keys=ON.
Results may vary ±30% between runs due to system load and disk caching.
Synthetic Throughput
| Operation | Raw bun:sqlite | @nds-stack/bunql | Overhead |
|-----------|-----------------|--------------------|----------|
| Point read | 220K ops/s | 180K ops/s | -18% |
| Single write | 25K ops/s | 20K ops/s | -20% |
| 10 concurrent writes | 45K ops/s * | 30K ops/s | -33% |
| 50 concurrent writes | 22K ops/s * | 18K ops/s | -18% |
* Raw concurrent benchmark includes manual retry logic with exponential backoff (same strategy as BunQL). Without retry, raw bun:sqlite would throw SQLITE_BUSY. BunQL eliminates the need for manual retry entirely — writes are serialized, reads are parallel. The ~20% overhead is the cost of guaranteed-safe concurrency.
Realistic Workloads
| Workload | Description | Throughput |
|----------|-------------|------------|
| Mixed | Interleaved reads/writes/transactions | 28K ops/s |
| Batch | 25 writes per transaction (10 batches) | 200K ops/s |
| Cache pressure | 200 unique queries (triggers evictions) | 28K ops/s |
Limitations
- SQLite single-writer — bunql queues writes, but peak throughput depends on PRAGMA settings. With synchronous=NORMAL, cache_size=-2000, and the statement cache, typical hardware achieves 18-30K writes/s. Using synchronous=FULL (SQLite default) reduces this significantly.
- Fixed-size statement cache — Max 100 cached statements. Highly diverse workloads trigger evictions.
- Single-process only — Not designed for multi-process writes to the same SQLite file.
- Not an ORM — No schema management, query building, or migrations. You write SQL.
Stability
- 111 tests — unit, integration, concurrency, stress, FTS5, reader pool
- 5000 sequential writes — verified stable
- Graceful shutdown — drain queue → finalize statements → close DB
- Memory safe — LRU cache eviction, yocto-queue linked list, no unbounded growth
- Retry strategy — exponential backoff with ±50% jitter (baseDelay 50ms)
- Observability — built-in metrics counters, cache stats, WAL monitoring
License
MIT — see LICENSE.
Part of the @nds-stack collection of Bun-native tools.
