byodb
Browser-side SQLite with OPFS persistence. Optional sync against a user-supplied libSQL URL. No build step required.
Status
Published on npm as byodb. Local-only persistence via SQLite WASM + OPFS SAH Pool VFS, table-level change notifications on exec and committed transactions, optional last-write-wins sync against any libSQL HTTP server, and db.export() / db.import() for full-file backup. Convergence verified end-to-end against a real sqld instance. Outstanding work tracked in TODO.md.
Stability
Alpha — usable, but read this first.
What works and has been exercised:
- Everything in the API below: exec, query, transaction, subscribe, sync, onSyncStatus, export, import, close.
- 88 unit tests, all passing.
- Persistence across reload in Chromium (OPFS SAH Pool).
- Real push/pull convergence against a live sqld.
- Single-file drop-in bundle at dist/byodb.bundle.js with the worker inlined — see Install.
What is not yet battle-tested:
- Only tested in Chromium. Firefox and Safari 16.4+ are supported in theory (both ship OPFS SAH), but I haven't run the demo there. Expect small fixes.
- No checked-in Playwright suite yet. npm run test:e2e is wired up but the tests aren't written — all browser verification so far has been manual driving via an MCP.
- Schema constraints for sync: tables must have a rowid (no WITHOUT ROWID, no composite primary keys). Identifiers must match /^[A-Za-z_][A-Za-z0-9_]{0,63}$/.
- One database per browsing context — the OPFS SAH Pool VFS is a process-wide singleton.
- Clock skew across peers affects last-write-wins conflict resolution — changed_at is the local client's clock.
If you're building a side project or an internal tool, go ahead. If you're shipping to customers, pin the version, run it against your target browsers yourself, and expect to patch things.
Install
Two ways to use byodb:
1. As an npm dependency (bundled apps)
npm install byodb

The worker entry (src/worker.js) resolves @sqlite.org/sqlite-wasm as a normal bare specifier — Vite, webpack, esbuild, Rollup, Parcel all handle it transparently.
2. As a single-file drop-in (no build step)
After running npm run build, dist/byodb.bundle.js contains the main-thread library with the worker inlined as a Blob URL. Drop that one file into your site and import it:
<script type="module">
import { openDatabase } from './byodb.bundle.js';
const db = await openDatabase({ name: 'my-app', schema: '...' });
</script>

The bundle loads @sqlite.org/sqlite-wasm from esm.sh at worker startup — no WASM is bundled. Size: roughly 32 KB gzipped. A working example lives at examples/bundle/.
Usage
import { openDatabase } from 'byodb';
const db = await openDatabase({
  name: 'my-app',
  schema: `CREATE TABLE IF NOT EXISTS notes (
    id INTEGER PRIMARY KEY,
    body TEXT NOT NULL,
    updated_at INTEGER NOT NULL
  )`,
});
await db.exec(
  'INSERT INTO notes (body, updated_at) VALUES (?, ?)',
  ['hello', Date.now()],
);
const rows = await db.query('SELECT * FROM notes ORDER BY updated_at DESC');
await db.transaction(async (tx) => {
  await tx.exec('UPDATE notes SET body = ? WHERE id = ?', ['edited', 1]);
  await tx.exec('DELETE FROM notes WHERE id = ?', [2]);
});
const unsubscribe = db.subscribe('notes', () => console.log('notes changed'));
await db.close();

API
- openDatabase({ name, schema?, workerUrl?, namespace?, requestStorageOnGesture? }): Promise<Database> — opens (or creates) an OPFS-backed SQLite database named ${name}.sqlite. schema runs once on open as plain SQL (CREATE TABLE IF NOT EXISTS recommended). workerUrl overrides the default worker location. namespace prefixes every table in your SQL with ${namespace}__, letting you host multiple logical databases in one OPFS file (see Multiple databases per origin). requestStorageOnGesture: true defers the navigator.storage.persist() prompt until you call db.requestStoragePermission() from a user gesture (see User-triggered storage permission).
- db.exec(sql, params?): Promise<{ changes, lastInsertRowid }> — runs a single statement, returns write metadata.
- db.query(sql, params?): Promise<Row[]> — runs a read, returns row objects keyed by column name.
- db.transaction(fn): Promise<T> — runs fn(tx) between BEGIN and COMMIT. Throw to roll back. tx has exec and query with the same signatures.
- db.subscribe(table, cb): () => void — calls cb after any exec that targets table commits. Returns an unsubscribe. Table-level only — triggers and views are not tracked.
- db.sync(): Promise<{ push, pull }> — manually push pending writes and pull remote changes. Only available when opened with a sync option.
- db.onSyncStatus(cb): () => void — subscribe to sync state transitions. cb is called synchronously once at registration and on every transition with { state: 'idle'|'syncing'|'error', pendingPush, lastSyncedAt, lastError }. Returns an unsubscribe.
- db.export(): Promise<Blob> — dump the entire SQLite file as a Blob tagged application/vnd.sqlite3. Useful for user-initiated backups or "download my data" flows. See Backup and restore.
- db.import(source): Promise<void> — replace the entire database with the contents of source (a Blob, ArrayBuffer, or Uint8Array). The handle stays usable: the schema is re-applied and sync is re-initialised if configured. Throws if a transaction is in flight.
- db.requestStoragePermission(): Promise<{ persisted: boolean }> — call from a user-gesture handler (e.g. a button click) to prompt navigator.storage.persist() at a moment the user expects. Only relevant when opened with requestStorageOnGesture: true; safe to call either way. Opens the internal gate that queues DB calls under that mode, regardless of grant or denial.
- db.close(): Promise<void> — flushes, terminates the worker, rejects pending calls, cancels any sync interval.
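To make the status callback concrete, here is a minimal sketch that mirrors sync state into a badge element. The #sync-badge selector is hypothetical; the status shape is the one documented above.

// Hypothetical UI hook: reflect the documented status object in a badge.
const badge = document.querySelector('#sync-badge');
const stopStatus = db.onSyncStatus(({ state, pendingPush, lastError }) => {
  if (state === 'error') {
    badge.textContent = `sync error: ${lastError?.message ?? 'unknown'}`;
  } else if (state === 'syncing' || pendingPush > 0) {
    badge.textContent = `syncing (${pendingPush} pending)`;
  } else {
    badge.textContent = 'synced';
  }
});
// stopStatus() unsubscribes, e.g. when tearing down the view.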
Sync option
sync: {
  url: 'http://nas:3001',   // libSQL HTTP endpoint
  authToken: '…',           // optional; sent as Authorization: Bearer …
  interval: 15_000,         // optional; tick every N ms with exponential backoff on error
  syncOnMutation: false,    // optional; default true — set false to disable the background sync after every successful exec/committed transaction
}

Errors thrown by db.sync() use the typed hierarchy SyncError → AuthError (401/403) / NetworkError (fetch rejected) / ServerError (5xx, with .status). An AuthError during the interval pauses the loop; call db.sync() again after refreshing the token to resume.
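A sketch of handling that hierarchy, assuming the error classes are exported from the package root (verify against your version's exports):

// Assumption: AuthError and NetworkError are importable from 'byodb'.
import { AuthError, NetworkError } from 'byodb';

async function syncSafely(db, refreshToken) {
  try {
    return await db.sync();
  } catch (err) {
    if (err instanceof AuthError) {
      await refreshToken();   // hypothetical app-side token refresh
      return db.sync();       // a successful call also resumes the paused interval loop
    }
    if (err instanceof NetworkError) return null; // offline; retry later
    throw err;                // ServerError and anything unexpected
  }
}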
By default, every successful exec and committed transaction kicks off a fire-and-forget sync. Rapid mutations while a sync is already running coalesce into a single trailing sync — you won't get one network round-trip per insert. The mutation promise does not wait for the sync; check onSyncStatus if you need to know when remote convergence has happened. Auto-sync stays suppressed while an AuthError has the scheduler paused — resume it by calling db.sync() with a fresh token. Combine with interval if you also want periodic pulls of remote changes (the auto-sync covers your local writes, the interval covers "someone else wrote on another device"). Pass syncOnMutation: false to disable auto-sync and trigger sync manually with db.sync().
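If you do need to block on convergence, say before letting the user navigate away, a small helper can be built on onSyncStatus. A sketch that treats 'idle' with nothing pending as fully synced:

// Resolve once sync reports idle with nothing left to push; reject on error.
// The first callback fires synchronously during registration, before `stop`
// is assigned, so the unsubscribe is deferred to a microtask.
function waitForSynced(db) {
  return new Promise((resolve, reject) => {
    let settled = false;
    const stop = db.onSyncStatus(({ state, pendingPush, lastError }) => {
      if (settled) return;
      if (state === 'idle' && pendingPush === 0) {
        settled = true;
        queueMicrotask(() => stop());
        resolve();
      } else if (state === 'error') {
        settled = true;
        queueMicrotask(() => stop());
        reject(lastError);
      }
    });
  });
}

await db.exec('INSERT INTO notes (body, updated_at) VALUES (?, ?)', ['bye', Date.now()]);
await waitForSynced(db); // resolves once the auto-sync kicked off above settles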
Backup and restore
db.export() hands you the whole database as a standard SQLite file, suitable for saving to disk, uploading to S3, or round-tripping through sqlite3 on the command line:
const blob = await db.export();
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'my-app.sqlite';
a.click();
URL.revokeObjectURL(url);

db.import(blob) replaces the current database with an uploaded file. The in-memory db handle stays valid — schema and sync setup are re-applied automatically — so the usual pattern is a single file input:
fileInput.addEventListener('change', async (e) => {
  const file = e.target.files[0];
  if (!file) return;
  await db.import(file); // file is a Blob
  // subscribers fire for every user table; re-render your UI
});

Notes:
- Import rejects if a transaction is in flight. Await any db.transaction(...) first.
- When sync is configured, importing effectively "forks" the local clone. The imported file's _sync_meta state governs what gets pushed next; rows from a non-sync export won't be flagged for push. Expect the next db.sync() to behave as if the imported data had always been local.
- The exported file contains every table, including the _sync_* internal tables when sync is configured.
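Since import replaces everything, a cautious pattern is to take a safety copy first. A sketch using only the documented export/import calls (uploadedFile stands in for your file-input Blob):

// Keep a pre-import snapshot so a bad file doesn't cost the user their data.
const backup = await db.export();
try {
  await db.import(uploadedFile); // uploadedFile: the Blob from your file input
} catch (err) {
  await db.import(backup);       // restore the snapshot
  throw err;
}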
Multiple databases per origin
Each openDatabase({ name }) call still maps to one OPFS file, and only one
can be open per page (see Known limitations). If you
want to partition data within a single file — for example, a notes table
in an archive database that coexists with a notes table in a current
database — pass a namespace:
const current = await openDatabase({ name: 'my-app', namespace: 'current', schema: '...' });
// Later, on a different page or after current.close():
const archive = await openDatabase({ name: 'my-app', namespace: 'archive', schema: '...' });

On disk, your tables are stored as current__notes and archive__notes.
Your SQL stays the same (SELECT * FROM notes works in both); byodb
rewrites the table names transparently. Subscribers see the unprefixed
logical names too.
Namespaces must match /^[A-Za-z_][A-Za-z0-9_]*$/, must not start with
_sync_, and must not contain __. Cross-namespace joins are not
supported — SQL in one namespace only sees its own tables. sqlite_* and
_sync_* tables are never prefixed.
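Because SQL in one namespace only sees its own tables, moving rows across namespaces takes two sequential handles, since only one can be open at a time. A sketch reusing the notes schema from Usage:

// Archive note 42: read and delete under 'current', then reopen under 'archive'.
const schema = `CREATE TABLE IF NOT EXISTS notes (
  id INTEGER PRIMARY KEY,
  body TEXT NOT NULL,
  updated_at INTEGER NOT NULL
)`;

const current = await openDatabase({ name: 'my-app', namespace: 'current', schema });
const [note] = await current.query('SELECT * FROM notes WHERE id = ?', [42]);
if (note) await current.exec('DELETE FROM notes WHERE id = ?', [42]);
await current.close();

if (note) {
  const archive = await openDatabase({ name: 'my-app', namespace: 'archive', schema });
  await archive.exec(
    'INSERT INTO notes (id, body, updated_at) VALUES (?, ?, ?)',
    [note.id, note.body, note.updated_at],
  );
  await archive.close();
}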
User-triggered storage permission
By default, openDatabase fires navigator.storage.persist() immediately
after opening. Most browsers grant or deny silently based on site
engagement, but some prompt the user — and that prompt lands whenever the
library opens, which may surprise the user.
Pass requestStorageOnGesture: true to defer the request. On open, byodb
checks navigator.storage.persisted():
- If storage is already persistent, nothing changes — all DB calls flow through as normal.
- Otherwise, byodb closes an internal gate. Mutating calls — exec, transaction, sync, import — queue until the gate opens. Read-only calls (query, export, subscribe) pass through immediately: storage is best-effort either way and the DB works without persistence. Your app wires a button to db.requestStoragePermission(), which runs navigator.storage.persist() inside the user gesture, opens the gate, and flushes the queue in FIFO order.
const db = await openDatabase({
  name: 'my-app',
  schema: '...',
  requestStorageOnGesture: true,
});

// These queue until the user clicks the button.
db.exec('INSERT INTO notes (body) VALUES (?)', ['hello']);

document.querySelector('#enable-storage').addEventListener('click', async () => {
  const { persisted } = await db.requestStoragePermission();
  // Queue flushes whether persisted is true or false — storage is
  // best-effort either way; denial just means the browser may evict it.
});

subscribe is synchronous callback registration and is not gated.
Known limitations
- One database per origin: the OPFS SAH Pool VFS is a shared singleton. Opening a second openDatabase while the first is still open throws NoModificationAllowedError.
- Tables must have a rowid: WITHOUT ROWID and composite primary keys are not supported for sync in v1; the trigger installation uses NEW.rowid/OLD.rowid.
- Clock skew between peers affects LWW conflict resolution — changed_at is the local client's clock.
Configuring a libSQL server for sync
Sync talks to any libSQL HTTP endpoint you control. The server needs two things: it must accept HTTP requests from libSQL clients, and it must return CORS headers the browser will accept.
Run sqld locally
sqld is the reference libSQL server. The fastest way to get one up is the official Docker image:
docker run --rm -it \
  -p 8080:8080 \
  -e SQLD_NODE=primary \
  -e SQLD_HTTP_LISTEN_ADDR=0.0.0.0:8080 \
  -v $PWD/sqld-data:/var/lib/sqld \
  ghcr.io/tursodatabase/libsql-server:latest

Your sync URL is then http://localhost:8080.
A minimal docker-compose.yml equivalent:
services:
  sqld:
    image: ghcr.io/tursodatabase/libsql-server:latest
    ports: ['8080:8080']
    environment:
      SQLD_NODE: primary
      SQLD_HTTP_LISTEN_ADDR: 0.0.0.0:8080
    volumes:
      - ./sqld-data:/var/lib/sqld

Auth tokens
For anything past local dev, set SQLD_AUTH_JWT_KEY (or SQLD_AUTH_JWT_KEY_FILE) and pass the signed JWT as the authToken option to openDatabase. Without a key, sqld accepts unauthenticated requests — only OK for localhost.
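Minting the JWT is a server-side job outside byodb. A sketch with the jose library, assuming sqld verifies Ed25519 ('EdDSA') signatures; the exact encoding SQLD_AUTH_JWT_KEY expects varies by sqld version, so check its docs:

// Node-side sketch (npm install jose). JWT_PRIVATE_KEY_PEM is a placeholder
// for an Ed25519 private key in PKCS#8 PEM form; the matching public key
// goes into SQLD_AUTH_JWT_KEY in whatever encoding your sqld expects.
import { SignJWT, importPKCS8 } from 'jose';

const key = await importPKCS8(process.env.JWT_PRIVATE_KEY_PEM, 'EdDSA');
const token = await new SignJWT({})
  .setProtectedHeader({ alg: 'EdDSA' })
  .setIssuedAt()
  .setExpirationTime('30d')
  .sign(key);
// Ship `token` to the client and pass it as the authToken option.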
CORS headers (required for browser clients)
A browser will refuse the fetch without these. sqld itself does not set them — put it behind a reverse proxy (Caddy, nginx, Cloudflare Worker) that adds:
Access-Control-Allow-Origin: https://your-app.example.com
Access-Control-Allow-Headers: authorization, content-type
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Max-Age: 86400

And it must respond to the OPTIONS preflight with 204 + the same headers.
Caddy one-liner:
libsql.your-domain.com {
  @preflight method OPTIONS
  handle @preflight {
    header Access-Control-Allow-Origin "https://your-app.example.com"
    header Access-Control-Allow-Headers "authorization, content-type"
    header Access-Control-Allow-Methods "POST, OPTIONS"
    header Access-Control-Max-Age "86400"
    respond 204
  }
  reverse_proxy localhost:8080 {
    header_down Access-Control-Allow-Origin "https://your-app.example.com"
    header_down Access-Control-Allow-Credentials "true"
  }
}

Use the exact origin of your app — * is rejected by browsers when credentials are attached.
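You can smoke-test the proxy from your app's origin before wiring up sync. The JSON content type and authorization header force a preflight, so a misconfigured proxy surfaces as a rejected fetch (URL and token are placeholders):

// Any HTTP status, even an error one, means CORS passed; only a thrown
// TypeError means the browser blocked the request at the preflight.
try {
  const res = await fetch('https://libsql.your-domain.com', {
    method: 'POST',
    headers: { 'content-type': 'application/json', authorization: 'Bearer test' },
    body: '{}',
  });
  console.log('CORS OK, server answered', res.status);
} catch (err) {
  console.error('blocked: check the proxy headers above', err);
}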
Wiring into openDatabase
const db = await openDatabase({
  name: 'my-app',
  schema: `CREATE TABLE IF NOT EXISTS notes (...)`,
  sync: {
    url: 'https://libsql.your-domain.com',
    authToken: localStorage.getItem('syncToken'),
    interval: 60_000,
  },
});

If interval is set, the client pulls + pushes on that cadence; otherwise call db.sync() manually. Conflict policy is last-write-wins keyed on changed_at.
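Without an interval, a sync button is enough. A sketch (the button element is hypothetical; the shapes of push and pull are not documented here, so they are just logged):

document.querySelector('#sync-now').addEventListener('click', async () => {
  try {
    const { push, pull } = await db.sync();
    console.log('sync finished', { push, pull });
  } catch (err) {
    console.error('sync failed', err); // see the typed errors under Sync option
  }
});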
Recipes
Things that are trivial in user-land and therefore deliberately not part of the core API.
Export as JSON
db.export() returns a binary SQLite file — the universal, lossless format. When you need a JSON dump instead (product requirement to "download as JSON", uploading to a JSON-native endpoint, diffable snapshots in tests), iterate sqlite_master yourself:
async function exportJson(db) {
  const tables = await db.query(
    `SELECT name FROM sqlite_master
     WHERE type = 'table'
       AND name NOT LIKE 'sqlite\\_%' ESCAPE '\\'
       AND name NOT LIKE '\\_sync\\_%' ESCAPE '\\'`,
  );
  const dump = { version: 1, exportedAt: new Date().toISOString(), tables: {} };
  for (const { name } of tables) {
    dump.tables[name] = await db.query(`SELECT * FROM "${name}"`);
  }
  return new Blob([JSON.stringify(dump, null, 2)], { type: 'application/json' });
}

Caveats you own when you do this:
- BLOB columns need base64 (btoa(String.fromCharCode(...row.col))) or they'll JSON-stringify to {}.
- Integers past Number.MAX_SAFE_INTEGER lose precision — cast to TEXT in the query (CAST(big_id AS TEXT) AS big_id) and parse client-side.
- Schema is not preserved. Types, constraints, indexes, triggers, and views are gone. Re-importing requires you to recreate the schema separately.
- Not round-trip-symmetric with db.import(). Use db.export() for true backup/restore; use JSON for "send a copy of my data" and similar one-way flows.
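One practical note on the BLOB caveat: spreading a large Uint8Array into String.fromCharCode can blow the call stack, so encode in chunks. A sketch:

// Chunked base64 encoder: avoids the argument-count limit that makes
// String.fromCharCode(...bigArray) throw on large BLOBs.
function toBase64(bytes) {
  const CHUNK = 0x8000; // 32 Ki code units per call, safely under engine limits
  let binary = '';
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
  }
  return btoa(binary);
}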
Restore from JSON
The mirror of above — rebuild a database from a JSON dump:
async function importJson(db, dump) {
  await db.transaction(async (tx) => {
    for (const [table, rows] of Object.entries(dump.tables)) {
      if (!rows.length) continue;
      const cols = Object.keys(rows[0]);
      const placeholders = cols.map(() => '?').join(', ');
      const colList = cols.map((c) => `"${c}"`).join(', ');
      for (const row of rows) {
        await tx.exec(
          `INSERT OR REPLACE INTO "${table}" (${colList}) VALUES (${placeholders})`,
          cols.map((c) => row[c]),
        );
      }
    }
  });
}

Assumes the tables already exist (run your schema first, or pass it when opening the DB).
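Wiring it to an upload control mirrors the db.import() pattern above. A sketch where jsonInput is a hypothetical file input:

jsonInput.addEventListener('change', async (e) => {
  const file = e.target.files[0];
  if (!file) return;
  const dump = JSON.parse(await file.text());
  await importJson(db, dump);
  // The rows went through tx.exec, so table subscribers fire on commit.
});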
Running the notes demo
From the repository root:
npm install
npx --yes serve -p 8080 .
# open http://localhost:8080/examples/notes/

Tests
npm test

Covers the pure-JS pieces (RPC layer, SQL parser, event notifier). Playwright-driven e2e against real browsers is on the M1 follow-up list.
Releases
Releases are automated via semantic-release. Every push to main triggers .github/workflows/release.yml, which:
- Installs deps, runs npm test, runs npm run build.
- Analyzes commit messages since the last tag to decide the next version (see conventional commits below).
- Updates package.json/package-lock.json, writes CHANGELOG.md, publishes to npm, creates a GitHub release, and pushes the version commit + tag back to main.
Nothing ships without a release-worthy commit — if the commits only contain chore:, docs:, test: etc., the workflow is a no-op.
Conventional commits
Commit messages drive version bumps. The prefixes that matter:
- fix: … → patch (0.4.0 → 0.4.1)
- feat: … → minor (0.4.0 → 0.5.0)
- feat!: … or any commit with a BREAKING CHANGE: footer → major (0.4.0 → 1.0.0)
- chore:, docs:, refactor:, test:, build:, ci:, style: → no release
Squash-merging PRs? Edit the squash commit message before merging so it matches one of the above — GitHub's default title is usually the PR title, which is often already conventional.
Secrets required
Set these in the repo's Settings → Secrets and variables → Actions:
- NPM_TOKEN — an npm automation token with publish rights to the byodb package.
- GITHUB_TOKEN is provided automatically.
First release
If the repo has no v* tags, semantic-release will start from 1.0.0 on the first release. To keep the current 0.4.x track, tag the current tip of main as v0.4.0 before merging anything that would trigger a release:
git tag v0.4.0 main
git push origin v0.4.0

Roadmap
See PLAN.md for the full plan.
- M1 (done): local-only MVP — open, exec, query, transactions, subscribe.
- M2 (done): table-level change notifications wired through exec and transaction commit; verified in unit tests and via Playwright against the notes demo.
- M3 (done): libSQL HTTP sync with _sync_meta capture triggers, last-write-wins via _sync_log on the remote, onSyncStatus callback, interval + exponential backoff, typed error taxonomy. Convergence verified against a real sqld.
- M4 (in progress): single-file bundle and db.export() / db.import() landed. Remaining: db.storageEstimate(), iOS fallback messaging.
License
MIT
