
@remy/origin-sql

v0.3.0

Browser-side SQLite with OPFS persistence and optional libSQL sync

origin-sql

Browser-side SQLite with OPFS persistence. Optional sync against a user-supplied libSQL URL (M3). No build step required.

Status

M3 milestone. Local-only persistence via SQLite WASM + OPFS SAH Pool VFS, table-level change notifications on exec and committed transactions, and optional last-write-wins sync against any libSQL HTTP server. Convergence verified end-to-end against a real sqld instance.

Stability

Alpha — usable, but read this first.

What works and has been exercised:

  • Everything in the API below: exec, query, transaction, subscribe, sync, onSyncStatus, close.
  • 73 unit tests, all passing.
  • Persistence across reload in Chromium (OPFS SAH Pool).
  • Real push/pull convergence against a live sqld.

What is not yet battle-tested:

  • Not published to npm. The package name origin-sql is unclaimed and the code is not on the registry. To consume it today, add it via a git URL or a "file:" path in your app's package.json:
    "dependencies": {
      "origin-sql": "github:your-user/libsql-client#branch-name"
    }
    Or vendor the src/ + index.js directly into your project.
  • Only tested in Chromium. Firefox and Safari 16.4+ are supported in theory (both ship OPFS SAH), but I haven't run the demo there. Expect small fixes.
  • No checked-in Playwright suite yet. npm run test:e2e is wired up but the tests aren't written — all browser verification so far has been manual driving via an MCP.
  • ~~No single-file bundle~~. Resolved: npm run build produces dist/origin-sql.bundle.js, with the worker inlined. See the Install section.
  • Schema constraints for sync: tables must have a rowid (no WITHOUT ROWID, no composite primary keys). Identifiers must match /^[A-Za-z_][A-Za-z0-9_]{0,63}$/.
  • One database per browsing context — the OPFS SAH Pool VFS is a process-wide singleton.
  • Clock skew across peers affects last-write-wins conflict resolution — changed_at is the local client's clock.

If you're building a side project or an internal tool, go ahead. If you're shipping to customers, pin the commit, run it against your target browsers yourself, and expect to patch things.
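
Given the Chromium-only caveat above, a cheap feature check can gate a fallback path before you call openDatabase. A main-thread sketch: detectOpfsSupport is a hypothetical helper, not part of the package, and since createSyncAccessHandle is only exposed inside dedicated workers, this check is necessary but not sufficient.

```javascript
// Hypothetical helper: rough main-thread check for the OPFS primitives
// origin-sql's worker needs. `createSyncAccessHandle` only exists inside
// dedicated workers, so a passing result here still isn't a guarantee —
// a definitive probe requires a throwaway worker round-trip.
async function detectOpfsSupport() {
  if (typeof navigator === 'undefined' || !navigator.storage?.getDirectory) {
    return false; // no OPFS root at all
  }
  try {
    await navigator.storage.getDirectory(); // can throw in private browsing
    return true;
  } catch {
    return false;
  }
}
```

If this returns false, fall back to in-memory storage or tell the user their browser isn't supported rather than letting openDatabase fail opaquely.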

Install

Two ways to use origin-sql:

1. As an npm dependency (bundled apps)

Not on npm yet (see Stability). For now, install from git:

npm install github:your-user/libsql-client#branch-name

The worker entry (src/worker.js) resolves @sqlite.org/sqlite-wasm as a normal bare specifier — Vite, webpack, esbuild, Rollup, Parcel all handle it transparently.

2. As a single-file drop-in (no build step)

After running npm run build, dist/origin-sql.bundle.js contains the main-thread library with the worker inlined as a Blob URL. Drop that one file into your site and import it:

<script type="module">
  import { openDatabase } from './origin-sql.bundle.js';
  const db = await openDatabase({ name: 'my-app', schema: '...' });
</script>

The bundle loads @sqlite.org/sqlite-wasm from esm.sh at worker startup — no WASM is bundled. Size: roughly 32 KB gzipped. A working example lives at examples/bundle/.

Usage

import { openDatabase } from 'origin-sql';

const db = await openDatabase({
  name: 'my-app',
  schema: `CREATE TABLE IF NOT EXISTS notes (
    id INTEGER PRIMARY KEY,
    body TEXT NOT NULL,
    updated_at INTEGER NOT NULL
  )`,
});

await db.exec(
  'INSERT INTO notes (body, updated_at) VALUES (?, ?)',
  ['hello', Date.now()],
);

const rows = await db.query('SELECT * FROM notes ORDER BY updated_at DESC');

await db.transaction(async (tx) => {
  await tx.exec('UPDATE notes SET body = ? WHERE id = ?', ['edited', 1]);
  await tx.exec('DELETE FROM notes WHERE id = ?', [2]);
});

const unsubscribe = db.subscribe('notes', () => console.log('notes changed'));

await db.close();

API

  • openDatabase({ name, schema?, workerUrl? }): Promise<Database> — opens (or creates) an OPFS-backed SQLite database named ${name}.sqlite. schema runs once on open as plain SQL (CREATE TABLE IF NOT EXISTS recommended). workerUrl overrides the default worker location.
  • db.exec(sql, params?): Promise<{ changes, lastInsertRowid }> — runs a single statement, returns write metadata.
  • db.query(sql, params?): Promise<Row[]> — runs a read, returns row objects keyed by column name.
  • db.transaction(fn): Promise<T> — runs fn(tx) between BEGIN and COMMIT. Throw to roll back. tx has exec and query with the same signatures.
  • db.subscribe(table, cb): () => void — calls cb after any exec that targets table commits. Returns an unsubscribe. Table-level only — triggers and views are not tracked.
  • db.sync(): Promise<{push, pull}> — manually push pending writes and pull remote changes. Only available when opened with a sync option.
  • db.onSyncStatus(cb): () => void — subscribe to sync state transitions. cb is called synchronously once at registration and on every transition with { state: 'idle'|'syncing'|'error', pendingPush, lastSyncedAt, lastError }. Returns an unsubscribe.
  • db.export(): Promise<Blob> — dump the entire SQLite file as a Blob tagged application/vnd.sqlite3. Useful for user-initiated backups or "download my data" flows. See Backup and restore.
  • db.import(source): Promise<void> — replace the entire database with the contents of source (a Blob, ArrayBuffer, or Uint8Array). The handle stays usable: the schema is re-applied and sync is re-initialised if configured. Throws if a transaction is in flight.
  • db.close(): Promise<void> — flushes, terminates the worker, rejects pending calls, cancels any sync interval.
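
For example, db.onSyncStatus maps naturally onto a small UI indicator. A sketch: bindSyncBadge is a hypothetical helper, not part of the API, and the target can be any object with a textContent property.

```javascript
// Drive a status badge from sync state transitions. Because the callback
// also fires once synchronously at registration, the badge is correct
// immediately, with no extra initial render needed.
function bindSyncBadge(db, badge) {
  return db.onSyncStatus(({ state, pendingPush, lastError }) => {
    if (state === 'syncing') {
      badge.textContent = `syncing (${pendingPush} pending)`;
    } else if (state === 'error') {
      badge.textContent = `sync error: ${lastError?.message ?? 'unknown'}`;
    } else {
      badge.textContent = 'synced';
    }
  });
}
```

Usage in a page: const stop = bindSyncBadge(db, document.querySelector('#sync-badge')); call stop() when tearing the view down.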

Sync option

sync: {
  url: 'http://nas:3001',        // libSQL HTTP endpoint
  authToken: '…',                 // optional; sent as Authorization: Bearer …
  interval: 15_000,               // optional; tick every N ms with exponential backoff on error
  syncOnMutation: true,           // optional; kick off a background sync after every successful exec/committed transaction
}

Errors thrown by db.sync() use a typed hierarchy rooted in SyncError: AuthError (401/403), NetworkError (fetch rejected), and ServerError (5xx, with .status). An AuthError during the interval pauses the loop; call db.sync() again after refreshing the token to resume.
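
Handled in code, that taxonomy might look like the following sketch. It assumes AuthError, NetworkError, and ServerError are in scope (how the package exports them isn't documented here), and syncOnce is a hypothetical wrapper.

```javascript
// Sketch: classify db.sync() failures using the typed error hierarchy.
// Assumes AuthError / NetworkError / ServerError are in scope — the
// export path is an assumption, not documented in this README.
async function syncOnce(db, refreshToken) {
  try {
    return await db.sync();
  } catch (err) {
    if (err instanceof AuthError) {
      await refreshToken();   // 401/403: obtain a fresh JWT…
      return db.sync();       // …and retry, which also resumes a paused loop
    }
    if (err instanceof NetworkError) return null; // offline: wait for next tick
    if (err instanceof ServerError) {
      console.warn(`sync failed with HTTP ${err.status}`);
      return null;
    }
    throw err; // not a sync error — rethrow
  }
}
```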

With syncOnMutation: true, every successful exec and committed transaction kicks off a fire-and-forget sync. Rapid mutations while a sync is already running coalesce into a single trailing sync — you won't get one network round-trip per insert. The mutation promise does not wait for the sync; check onSyncStatus if you need to know when remote convergence has happened. Auto-sync is suppressed while AuthError has paused the scheduler — resume it by calling db.sync() with a fresh token. Combine with interval if you also want periodic pulls of remote changes (the auto-sync covers your local writes, the interval covers "someone else wrote on another device").

Backup and restore

db.export() hands you the whole database as a standard SQLite file, suitable for saving to disk, uploading to S3, or round-tripping through sqlite3 on the command line:

const blob = await db.export();
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'my-app.sqlite';
a.click();
URL.revokeObjectURL(url);

db.import(blob) replaces the current database with an uploaded file. The in-memory db handle stays valid — schema and sync setup are re-applied automatically — so the usual pattern is a single file input:

fileInput.addEventListener('change', async (e) => {
  const file = e.target.files[0];
  if (!file) return;
  await db.import(file);            // file is a Blob
  // subscribers fire for every user table; re-render your UI
});

Notes:

  • Import rejects if a transaction is in flight. Await any db.transaction(...) first.
  • When sync is configured, importing effectively "forks" the local clone. The imported file's _sync_meta state governs what gets pushed next; rows from a non-sync export won't be flagged for push. Expect the next db.sync() to behave as if the imported data had always been local.
  • The exported file contains every table, including the _sync_* internal tables when sync is configured.

Known limitations

  • One database per origin: the OPFS SAH Pool VFS is a shared singleton. Opening a second openDatabase while the first is still open throws NoModificationAllowedError.
  • Tables must have a rowid: WITHOUT ROWID and composite primary keys are not supported for sync in v1; the trigger installation uses NEW.rowid / OLD.rowid.
  • Clock skew between peers affects LWW conflict resolution — changed_at is the local client's clock.
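
The identifier rule is easy to enforce up front rather than discovering at trigger-installation time. A small sketch: assertSyncIdentifier is a hypothetical guard; the regex is the one quoted in the limitations above.

```javascript
// The identifier rule from the limitations list, as a reusable guard.
// Hypothetical helper — not part of the package API.
const SYNC_IDENT = /^[A-Za-z_][A-Za-z0-9_]{0,63}$/;

function assertSyncIdentifier(name) {
  if (!SYNC_IDENT.test(name)) {
    throw new Error(`"${name}" is not a valid sync identifier`);
  }
  return name;
}
```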

Configuring a libSQL server for sync

Sync talks to any libSQL HTTP endpoint you control. The server needs two things: it must accept HTTP requests from libSQL clients, and it must return CORS headers the browser will accept.

Run sqld locally

sqld is the reference libSQL server. The fastest way to get one up is the official Docker image:

docker run --rm -it \
  -p 8080:8080 \
  -e SQLD_NODE=primary \
  -e SQLD_HTTP_LISTEN_ADDR=0.0.0.0:8080 \
  -v $PWD/sqld-data:/var/lib/sqld \
  ghcr.io/tursodatabase/libsql-server:latest

Your sync URL is then http://localhost:8080.

A minimal docker-compose.yml equivalent:

services:
  sqld:
    image: ghcr.io/tursodatabase/libsql-server:latest
    ports: ['8080:8080']
    environment:
      SQLD_NODE: primary
      SQLD_HTTP_LISTEN_ADDR: 0.0.0.0:8080
    volumes:
      - ./sqld-data:/var/lib/sqld

Auth tokens

For anything past local dev, set SQLD_AUTH_JWT_KEY (or SQLD_AUTH_JWT_KEY_FILE) and pass the signed JWT as the authToken option to openDatabase. Without a key, sqld accepts unauthenticated requests — only OK for localhost.

CORS headers (required for browser clients)

A browser will refuse the fetch without these. sqld itself does not set them — put it behind a reverse proxy (Caddy, nginx, Cloudflare Worker) that adds:

Access-Control-Allow-Origin: https://your-app.example.com
Access-Control-Allow-Headers: authorization, content-type
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Max-Age: 86400

And it must respond to the OPTIONS preflight with 204 + the same headers.

Caddy one-liner:

libsql.your-domain.com {
  @preflight method OPTIONS
  handle @preflight {
    header Access-Control-Allow-Origin "https://your-app.example.com"
    header Access-Control-Allow-Headers "authorization, content-type"
    header Access-Control-Allow-Methods "POST, OPTIONS"
    header Access-Control-Max-Age "86400"
    respond 204
  }
  reverse_proxy localhost:8080 {
    header_down Access-Control-Allow-Origin "https://your-app.example.com"
    header_down Access-Control-Allow-Credentials "true"
  }
}

Use the exact origin of your app — * is rejected by browsers when credentials are attached.

Wiring into openDatabase

const db = await openDatabase({
  name: 'my-app',
  schema: `CREATE TABLE IF NOT EXISTS notes (...)`,
  sync: {
    url: 'https://libsql.your-domain.com',
    authToken: localStorage.getItem('syncToken'),
    interval: 60_000,
  },
});

If interval is set, the client pulls + pushes on that cadence; otherwise call db.sync() manually. Conflict policy is last-write-wins keyed on changed_at.

Recipes

Things that are trivial in user-land and therefore deliberately not part of the core API.

Export as JSON

db.export() returns a binary SQLite file — the universal, lossless format. When you need a JSON dump instead (product requirement to "download as JSON", uploading to a JSON-native endpoint, diffable snapshots in tests), iterate sqlite_master yourself:

async function exportJson(db) {
  const tables = await db.query(
    `SELECT name FROM sqlite_master
       WHERE type = 'table'
         AND name NOT LIKE 'sqlite\\_%' ESCAPE '\\'
         AND name NOT LIKE '\\_sync\\_%' ESCAPE '\\'`,
  );
  const dump = { version: 1, exportedAt: new Date().toISOString(), tables: {} };
  for (const { name } of tables) {
    dump.tables[name] = await db.query(`SELECT * FROM "${name}"`);
  }
  return new Blob([JSON.stringify(dump, null, 2)], { type: 'application/json' });
}

Caveats you own when you do this:

  • BLOB columns need base64 (btoa(String.fromCharCode(...row.col))) or they'll JSON-stringify to {}.
  • Integers past Number.MAX_SAFE_INTEGER lose precision — cast to TEXT in the query (CAST(big_id AS TEXT) AS big_id) and parse client-side.
  • Schema is not preserved. Types, constraints, indexes, triggers, and views are gone. Re-importing requires you to recreate the schema separately.
  • Not round-trip-symmetric with db.import(). Use db.export() for true backup/restore; use JSON for "send a copy of my data" and similar one-way flows.
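
The first two caveats can be handled with a single value mapper applied before JSON.stringify. A sketch: toJsonValue is a hypothetical helper.

```javascript
// Map one SQLite cell value to a JSON-safe value: BLOBs (Uint8Array)
// become base64 strings, BigInts become decimal strings, everything
// else passes through unchanged. Hypothetical helper for exportJson.
function toJsonValue(v) {
  if (v instanceof Uint8Array) {
    let bin = '';
    // Avoid String.fromCharCode(...v): large BLOBs overflow the argument limit.
    for (const byte of v) bin += String.fromCharCode(byte);
    return btoa(bin);
  }
  if (typeof v === 'bigint') return v.toString();
  return v;
}
```

In exportJson, apply it per cell: rows.map((r) => Object.fromEntries(Object.entries(r).map(([k, v]) => [k, toJsonValue(v)]))).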

Restore from JSON

The mirror of above — rebuild a database from a JSON dump:

async function importJson(db, dump) {
  await db.transaction(async (tx) => {
    for (const [table, rows] of Object.entries(dump.tables)) {
      if (!rows.length) continue;
      const cols = Object.keys(rows[0]);
      const placeholders = cols.map(() => '?').join(', ');
      const colList = cols.map((c) => `"${c}"`).join(', ');
      for (const row of rows) {
        await tx.exec(
          `INSERT OR REPLACE INTO "${table}" (${colList}) VALUES (${placeholders})`,
          cols.map((c) => row[c]),
        );
      }
    }
  });
}

Assumes the tables already exist (run your schema first, or pass it when opening the DB).
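
To fail fast with a readable message instead of an SQL error mid-transaction, you can diff the dump against sqlite_master before importing. A sketch: missingTables is a hypothetical helper built on db.query.

```javascript
// Return the names of tables present in a JSON dump but absent from the
// database, so importJson can refuse early with a clear error.
// Hypothetical helper — not part of the package API.
async function missingTables(db, dump) {
  const rows = await db.query(
    `SELECT name FROM sqlite_master WHERE type = 'table'`,
  );
  const existing = new Set(rows.map((r) => r.name));
  return Object.keys(dump.tables).filter((t) => !existing.has(t));
}
```

Call it before importJson and surface the returned list to the user when it is non-empty.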

Running the notes demo

From the repository root:

npm install
npx --yes serve -p 8080 .
# open http://localhost:8080/examples/notes/

Tests

npm test

Covers the pure-JS pieces (RPC layer, SQL parser, event notifier). A Playwright-driven e2e suite against real browsers is still on the follow-up list (see Stability).

Roadmap

See PLAN.md for the full plan.

  • M1 (done): local-only MVP — open, exec, query, transactions, subscribe.
  • M2 (done): table-level change notifications wired through exec and transaction commit; verified in unit tests and via Playwright against the notes demo.
  • M3 (done): libSQL HTTP sync with _sync_meta capture triggers, last-write-wins via _sync_log on the remote, onSyncStatus callback, interval + exponential backoff, typed error taxonomy. Convergence verified against a real sqld.
  • M4 (in progress): single-file bundle and db.export()/db.import() landed. Remaining: db.storageEstimate(), iOS fallback messaging.

License

MIT