
backless-core

v0.4.1

Core library for local-first SQLite sync via cloud storage.

Installation

npm install backless-core backless-google-drive   # or backless-onedrive

Usage

1. Define your schema

import type { DatabaseSchema, DB } from "backless-core";

export const mySchema: DatabaseSchema = {
  version: 2,
  syncedTables: new Set(["todos"]),

  async create(db: DB) {
    // create() builds the current (v2) schema directly on fresh databases
    await db.exec(`
      CREATE TABLE todos (
        id TEXT PRIMARY KEY NOT NULL DEFAULT (uuid()),
        title TEXT NOT NULL DEFAULT '',
        done INTEGER NOT NULL DEFAULT 0,
        priority TEXT NOT NULL DEFAULT 'normal'
      )
    `);
  },

  async migrate(db: DB, from: number, to: number) {
    // migrate() upgrades existing v1 databases in place
    if (from < 2 && to >= 2) {
      await db.exec("ALTER TABLE todos ADD COLUMN priority TEXT NOT NULL DEFAULT 'normal'");
    }
  },

  async clear(db: DB) {
    // Do NOT call db.exec("BEGIN") here — clear() runs inside a transaction
    // started by backless.clearAllData(). Nesting BEGIN will throw.
    await db.exec("DELETE FROM todos");
    await db.exec("DELETE FROM todos__crsql_clock");
  },
};

syncedTables controls which tables get CRDT sync enabled. Only tables listed here are synced across devices — all others (cache tables, local UI state, etc.) are left as plain SQLite. Backless.init() calls crsql_as_crr on each declared table at startup. If a listed table doesn't exist or lacks a non-nullable primary key, startup throws immediately with a clear error.

2. Initialize Backless

Vite required for WASM

backless-core loads cr-sqlite via a ?url import (import wasmUrl from "...crsqlite.wasm?url"). This is a Vite-specific feature. If you use a different bundler (webpack, esbuild, Rollup), you need to copy the .wasm file to your public directory and provide its URL manually via the wasmUrl option in Backless.init.

import { Backless, VersionMismatchBehavior } from "backless-core";
import { GoogleDriveApi } from "backless-google-drive";
import { mySchema } from "./schema.js";

const backless = await Backless.init({
  schema: mySchema,
  databaseName: "myapp.db",
  versionMismatch: VersionMismatchBehavior.APPLY_COMPATIBLE,
  appFolderName: "MyApp",
});

const db = backless.database;

3. Read and write

await db.exec(
  "INSERT INTO todos (id, title) VALUES (?, ?)",
  [crypto.randomUUID(), "Buy milk"]
);

const todos = await db.execO<{ id: string; title: string; done: number }>(
  "SELECT id, title, done FROM todos ORDER BY rowid"
);

4. Sync

const drive = new GoogleDriveApi(async () => myGetAccessToken());

const result = await backless.sync(drive);
console.log(`Pulled ${result.pulled}, pushed ${result.pushed}`);
// result: { pulled, pushed, mediaUploaded, mediaDeleted, warnings, errors }

5. Sign out / session management

Manual sign-out — wipe all local data so the next sign-in starts fresh:

await backless.clearAllData(); // clears user tables, CRDT clocks, cursors, account info

Expired session — the user's cloud token expired but they haven't signed out. No backless call needed — just re-authenticate with your OAuth flow to get a fresh token. The database, cursors, and sync state are all still valid. Call sync() once you have a new token.

How sync works

Initialisation

Backless.init() does the following before returning:

  1. Opens (or creates) the SQLite database file.
  2. Creates two internal tables if they don't exist: _sync_meta and _sync_cursors.
  3. Generates and persists a device_id (a UUID) if this is a new database. The device ID is stable for the lifetime of the database — it identifies this device's changeset folder in the cloud.
  4. Runs schema.create() on a fresh database, or schema.migrate() if the stored schema_version is older than the current one.

The device ID and schema version are stored as rows in _sync_meta. They survive clearAllData() (so the device remains consistently identified after sign-out) but are permanently removed if the database file itself is deleted.
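The create-or-migrate decision in step 4 can be sketched as follows (a hypothetical helper for illustration — the real logic lives inside Backless.init()):

```typescript
type SchemaAction =
  | { kind: "create" }                             // fresh database
  | { kind: "migrate"; from: number; to: number }  // stored version is older
  | { kind: "none" };                              // already up to date

// storedVersion is null when _sync_meta has no schema_version row yet,
// i.e. the database file was just created.
function schemaAction(storedVersion: number | null, current: number): SchemaAction {
  if (storedVersion === null) return { kind: "create" };
  if (storedVersion < current) return { kind: "migrate", from: storedVersion, to: current };
  return { kind: "none" };
}
```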

Cloud folder structure

All data lives under a single app folder in the user's cloud storage:

<appFolderName>/
└── changesets/
    ├── device-<uuid-A>/
    │   ├── snapshot-00100.json   ← full-state snapshot at sequence 100
    │   ├── cs-00101.json
    │   └── cs-00102.json
    └── device-<uuid-B>/
        └── cs-00001.json

Each device writes only to its own folder. Other devices never write to each other's folders, so there are no cloud-level write conflicts.

Changeset files are named cs-NNNNN.json where NNNNN is a zero-padded sequence number that starts at 1 and increments with every push. Snapshot files (snapshot-NNNNN.json) are written automatically — see Snapshot compaction.
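The naming scheme is simple enough to sketch (hypothetical helpers shown for clarity — backless-core generates these names internally):

```typescript
// cs-NNNNN.json: zero-padded five-digit sequence number.
function changesetName(seq: number): string {
  return `cs-${String(seq).padStart(5, "0")}.json`;
}

// snapshot-NNNNN.json: same padding, written at the sequence it covers.
function snapshotName(seq: number): string {
  return `snapshot-${String(seq).padStart(5, "0")}.json`;
}
```

Zero-padding keeps lexicographic order equal to numeric order for sequences below 100,000, so a plain sorted file listing is already in apply order.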

A sync cycle: pull → push

sync() always pulls first, then pushes.

Pull — receive remote changes:

  1. List all device folders under changesets/.
  2. For each remote device (including own device if cursor is 0 — see Bootstrap recovery below):
    • Read the stored cursor from _sync_cursors (the sequence number last successfully applied from that device).
    • List files in the remote folder; download every cs-NNNNN.json where NNNNN > cursor.
    • Apply the changes to the local SQLite database via crsql_changes.
    • Advance the cursor in _sync_cursors after each file.
  3. On error for a specific device, record it in result.errors and continue with remaining devices.
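The per-device file selection in step 2 reduces to a cursor comparison. A sketch, with assumed filename parsing (not the library's internals):

```typescript
// Extract the sequence number from a cs-NNNNN.json filename,
// or null for files that don't match (e.g. snapshots).
function changesetSeq(name: string): number | null {
  const m = /^cs-(\d{5})\.json$/.exec(name);
  return m ? parseInt(m[1], 10) : null;
}

// Files to download and apply from one remote device's folder,
// given the cursor stored in _sync_cursors.
function filesToPull(names: string[], cursor: number): string[] {
  return names
    .map(n => ({ n, seq: changesetSeq(n) }))
    .filter((x): x is { n: string; seq: number } => x.seq !== null && x.seq > cursor)
    .sort((a, b) => a.seq - b.seq) // apply strictly in sequence order
    .map(x => x.n);
}
```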

Push — send local changes:

  1. Query crsql_changes for all rows with db_version > last_push_db_version (stored in _sync_meta).
  2. If there are no new changes, push is skipped entirely.
  3. Upload a new cs-NNNNN.json to changesets/device-<this-device>/.
  4. Update last_push_db_version to the current database version.
  5. Optionally create a snapshot and clean up old covered changeset files.

Conflict resolution

Backless uses cr-sqlite for CRDT semantics. Every CRDT-enabled table tracks a vector clock (a __crsql_clock table) that records the col_version (logical timestamp) for each (row, column) pair across all devices.

When two devices modify the same cell concurrently, cr-sqlite applies last-writer-wins per column using the col_version. The device with the higher col_version wins. This means:

  • Column-level merging: updating title on device A and date on device B concurrently produces a row with A's title and B's date.
  • Deletes are represented as tombstones (sentinel cid = "-1") and also participate in LWW resolution.
  • No manual conflict resolution is required.
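The column-level rule can be modeled in a few lines. This is a toy simulation of the behaviour, not cr-sqlite's actual implementation (which also breaks col_version ties deterministically by site id):

```typescript
// One device's view of a row: per-column value plus its logical timestamp.
type Cell = { value: string; colVersion: number };
type Row = Record<string, Cell>;

// Last-writer-wins per column: for each column, keep the cell
// with the higher col_version.
function mergeRows(a: Row, b: Row): Row {
  const out: Row = { ...a };
  for (const [col, cell] of Object.entries(b)) {
    if (!(col in out) || cell.colVersion > out[col].colVersion) {
      out[col] = cell;
    }
  }
  return out;
}
```

Feeding in concurrent edits from two devices shows the column-level merge: the row ends up with device A's title and device B's date, because each column is resolved independently.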

Internal state

Backless stores all sync state in two tables that are created alongside your app tables:

_sync_meta — key/value store for this device's own state:

| Key | Value |
|---|---|
| device_id | Stable UUID for this device |
| schema_version | The schema version currently applied |
| last_push_db_version | The db_version at the time of the last push |
| last_snapshot_sequence | Sequence number of the most recently created snapshot |
| active_provider | Cloud storage provider in use ("google" / "microsoft") |
| account_email | Email of the signed-in account |

_sync_cursors — one row per remote device that has been synced from:

| Column | Meaning |
|---|---|
| device_id | Remote device's UUID |
| last_sequence | Last successfully applied changeset sequence from that device |
| last_sync_at | ISO timestamp of the last sync from that device |

Bootstrap recovery

When a device calls clearAllData() (e.g. on sign-out) and then signs back in, its _sync_cursors table is wiped — every remote cursor resets to 0, including its own. On the next sync():

  1. pull() notices that the own-device cursor is 0 and includes this device's own folder in the pull pass.
  2. The device re-downloads its own changeset files (or snapshot) and reapplies them, fully restoring the database.
  3. sync() detects this bootstrap and advances last_push_db_version to the current DB version, preventing a redundant re-upload of data that is already in the cloud.

This means after sign-out and re-sign-in a full sync automatically restores all data without any special app-level handling.

Account management

// Store the signed-in account (survives page reloads)
await backless.setActiveAccount("google", "[email protected]");

// Restore session on page load
const account = await backless.getActiveAccount();
if (account) {
  // { provider: "google", email: "[email protected]" }
}

clearAllData() removes the stored account along with all user data and sync state, so getActiveAccount() returns null after a sign-out.

Snapshot compaction

Backless automatically reduces cloud file accumulation over time. Every 100 pushed changesets a full-state snapshot (snapshot-NNNNN.json) is written to the device's cloud folder. After 200 more pushes the changeset files covered by that snapshot are deleted.
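The timing can be written down directly (a sketch of the arithmetic using the documented values, not library code):

```typescript
const SNAPSHOT_THRESHOLD = 100; // snapshot every N pushed changesets
const SNAPSHOT_GRACE = 200;     // keep covered files for N more pushes

// A snapshot is written when the push sequence hits a multiple
// of the threshold.
function shouldSnapshot(seq: number): boolean {
  return seq > 0 && seq % SNAPSHOT_THRESHOLD === 0;
}

// A changeset covered by the snapshot at snapshotSeq becomes deletable
// once the device has pushed SNAPSHOT_GRACE changesets past that snapshot.
function isDeletable(csSeq: number, snapshotSeq: number, currentSeq: number): boolean {
  return csSeq <= snapshotSeq && currentSeq >= snapshotSeq + SNAPSHOT_GRACE;
}
```

The grace window gives slow devices 200 pushes' worth of time to catch up from the raw changesets before they must fall back to the snapshot.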

What this means for new devices

A device joining for the first time (cursor = 0) will download the latest snapshot instead of replaying every individual changeset file. This keeps initial sync fast regardless of history length.

What this means for existing devices

A device that hasn't synced in a while may find that changeset files it needs have already been deleted. When this happens, sync() / pull() adds a warning to SyncResult.warnings for the affected remote device and skips it for that sync cycle.

const result = await backless.sync(drive);

for (const w of result.warnings) {
  if (w.message.includes("clearAllData")) {
    // This device has fallen too far behind a compacted remote.
    // Wipe local data and re-sync from the remote's latest snapshot.
    await backless.clearAllData();
    await backless.sync(drive);
    break;
  }
}

Constants

| Constant | Value | Meaning |
|---|---|---|
| SNAPSHOT_THRESHOLD | 100 | A snapshot is created every N pushed changesets |
| SNAPSHOT_GRACE | 200 | Covered cs files are deleted this many pushes after the snapshot |

These are not currently configurable. The constants are exported from backless-core if you need to reference them.

Media sync

Backless can sync binary files (images, attachments, documents) alongside your CRDT changesets. Media is handled separately from the database: files are stored in a dedicated media/ subfolder in the cloud and loaded on demand — never eagerly pulled during sync.

Storage: Browser Cache API

Media blobs are stored locally using the browser's Cache API under the cache name "backless-media". Each file is keyed by its SHA-256 hash (https://backless-media/<hash>). This means:

  • Persistent across page reloads — cached blobs survive navigation and browser restarts (until explicitly cleared or the browser evicts them under storage pressure).
  • No IndexedDB or custom storage needed — the browser manages it natively.
  • Deduplicated by content — two attachments with identical bytes share a single cache entry.

Upload flow

When the user attaches a file:

  1. Hash it with hashFile(file) to get its SHA-256 content hash.
  2. Store it locally with cacheMedia(hash, blob).
  3. Record the attachment in your database (hash, filename, MIME type, etc.).

During sync(), backless calls mediaSchema.getUnuploadedMedia() to find attachments not yet in the cloud, then uploads them to <app-folder>/media/<shard>/ in the background. Files are sharded by the first two hex characters of their hash to keep cloud folder sizes manageable.
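Sharding by hash prefix can be sketched as follows (hypothetical helper — the library builds these paths internally, and the folder name "MyApp" is just an example):

```typescript
// Cloud path for a media blob: <app-folder>/media/<first two hex chars>/<hash>.
// With a uniform hash, two hex characters spread files across
// 256 subfolders, keeping any single folder's listing small.
function mediaPath(hash: string, appFolder: string): string {
  return `${appFolder}/media/${hash.slice(0, 2)}/${hash}`;
}
```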

If a file is already present in the cloud (detected by filename), it is marked uploaded without re-uploading. If the local cache was cleared (e.g. after sign-out) before the upload completed, the item is skipped with a warning and will be retried on the next sync once the cache is populated again.

On-demand loading

Media is never downloaded during sync. When your UI needs to display an attachment, call:

const blob = await backless.resolveMedia(hash, filename, drive);
const url = URL.createObjectURL(blob);
img.src = url;

resolveMedia follows a cache-first strategy:

  1. Check the local Cache API — return immediately on hit.
  2. On miss, download from the cloud, store in the cache, then return.

Subsequent loads for the same hash are instant (cache hit), even across page reloads.
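The cache-first pattern itself is generic. A minimal sketch, with an in-memory map standing in for the Cache API and a plain function standing in for the cloud download:

```typescript
// Generic cache-first resolver: check the local cache, fall back to a
// fetcher on miss, and populate the cache so later calls are instant.
function makeResolver(fetchRemote: (hash: string) => string) {
  const cache = new Map<string, string>();
  return (hash: string): string => {
    const hit = cache.get(hash);
    if (hit !== undefined) return hit; // 1. cache hit — return immediately
    const data = fetchRemote(hash);    // 2. miss — download…
    cache.set(hash, data);             //    …store in the cache…
    return data;                       //    …and return
  };
}
```

resolveMedia follows the same shape, with the Cache API as the store and the cloud provider as the fetcher, so the remote download happens at most once per hash.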

Sign-out and cache clearing

On explicit sign-out, clear both the database and the media cache:

await backless.clearAllData();   // wipes SQLite data + sync state
await backless.clearMediaCache(); // deletes the "backless-media" Cache API store

On the next sign-in, resolveMedia transparently re-downloads files from the cloud as they are needed. The CRDT sync restores the database; the media cache is rebuilt lazily on demand.

Session expiry (no sign-out) — if only the auth token expires, do not clear the media cache. Local data and cached blobs are still valid; just re-authenticate and call sync().

Garbage collection

sync() automatically removes cloud media files whose hashes no longer appear in your mediaSchema.getAllMediaHashes() result — for example after an attachment is deleted. It also prunes the local Cache API of any entries not referenced by the current database state.
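The cloud-side pass boils down to a set difference between what is stored and what the database still references (sketch — this helper is hypothetical; the listing and deletion calls are internal to sync()):

```typescript
// Cloud media files whose hash is no longer referenced by the database
// (per getAllMediaHashes()) are candidates for deletion.
function orphanedMedia(cloudHashes: Set<string>, dbHashes: Set<string>): Set<string> {
  const orphans = new Set<string>();
  for (const h of cloudHashes) {
    if (!dbHashes.has(h)) orphans.add(h);
  }
  return orphans;
}
```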

Implementing MediaSchema

import type { DatabaseSchema, MediaSchema, MediaItem, DB } from "backless-core";
import { getCachedMedia } from "backless-core";

function createMediaSchema(getDb: () => DB): MediaSchema {
  return {
    // Return attachments not yet uploaded to cloud
    async getUnuploadedMedia(): Promise<MediaItem[]> {
      const rows = await getDb().execO<{
        hash: string; original_filename: string; mime_type: string;
      }>(
        `SELECT a.hash, a.original_filename, a.mime_type
         FROM attachments a
         LEFT JOIN _media_status m ON a.hash = m.hash
         WHERE m.uploaded IS NULL OR m.uploaded = 0`
      );
      return rows.map(r => ({
        hash: r.hash,
        filename: r.original_filename,
        mimeType: r.mime_type,
        data: async () => {
          const blob = await getCachedMedia(r.hash);
          if (!blob) throw new Error(`Media ${r.hash} not in cache`);
          return blob;
        },
      }));
    },

    // Return hashes of all attachments currently in the database
    async getAllMediaHashes(): Promise<Set<string>> {
      const rows = await getDb().execO<{ hash: string }>(
        "SELECT DISTINCT hash FROM attachments"
      );
      return new Set(rows.map(r => r.hash));
    },

    // Called by backless after a file is successfully uploaded
    async markAsUploaded(hash: string): Promise<void> {
      await getDb().exec(
        "INSERT OR REPLACE INTO _media_status (hash, uploaded) VALUES (?, 1)",
        [hash]
      );
    },
  };
}

You also need a _media_status tracking table in your DatabaseSchema.create:

await db.exec(`
  CREATE TABLE IF NOT EXISTS _media_status (
    hash TEXT PRIMARY KEY,
    local_path TEXT,
    uploaded INTEGER DEFAULT 0
  )
`);
// Not listed in syncedTables — Backless leaves it as a plain local-only table

Attaching a file (full example)

import { hashFile, cacheMedia } from "backless-core";

const file = fileInputElement.files[0];
const hash = await hashFile(file);

// 1. Cache locally so the upload step and UI can access it
await cacheMedia(hash, file);

// 2. Record in database (CRDT-synced via changeset)
await db.exec(
  "INSERT INTO attachments (id, event_id, hash, original_filename, mime_type, size) VALUES (?, ?, ?, ?, ?, ?)",
  [crypto.randomUUID(), eventId, hash, file.name, file.type, file.size]
);

// Next sync() call will upload the file to cloud automatically

API

Backless.init(config)

| Option | Type | Default | Description |
|---|---|---|---|
| schema | DatabaseSchema | required | Your app's schema |
| databaseName | string | "backless.db" | SQLite database filename |
| versionMismatch | VersionMismatchBehavior | ABORT | How to handle schema version mismatches during sync |
| mediaSchema | MediaSchema \| null | null | Optional media sync support |
| appFolderName | string | "Backless" | Cloud storage folder name (must match provider) |

VersionMismatchBehavior

| Value | Behaviour |
|---|---|
| ABORT | Stop sync if remote schema is newer, warn user to update |
| SKIP_DEVICE_UNTIL_NEXT_SYNC | Skip that device this sync cycle, try again next time |
| DROP_CHANGESETS | Discard newer changesets, advance cursor |
| APPLY_COMPATIBLE | Apply only columns that exist locally, drop unknown ones |

DatabaseSchema interface

interface DatabaseSchema {
  readonly version: number;
  readonly syncedTables: ReadonlySet<string>;  // tables to enable CRDT sync on
  create(db: DB): Promise<void>;
  migrate(db: DB, oldVersion: number, newVersion: number): Promise<void>;
  clear(db: DB): Promise<void>;  // called by backless.clearAllData()
}

Media sync (optional)

Implement MediaSchema to sync binary files alongside your changesets:

interface MediaSchema {
  getUnuploadedMedia(): Promise<MediaItem[]>;
  getAllMediaHashes(): Promise<Set<string>>;
  markAsUploaded(hash: string): Promise<void>;
}

Pass it to Backless.init({ mediaSchema: myMediaSchema }).

Utilities for media handling:

  • hashFile(blob) — compute SHA-256 hash for a blob
  • cacheMedia(hash, blob) — store a blob in the local cache (call after the user picks a file)
  • getCachedMedia(hash) — retrieve a cached blob by hash (use in MediaItem.data callbacks)
  • backless.resolveMedia(hash, filename, drive) — cache-first resolution with cloud fallback

Troubleshooting

Table not syncing / "could not find the schema information for table X"

If a table's changes are silently dropped or you see a cr-sqlite error about a missing table, the table is probably not listed in syncedTables.

// ❌ attachments rows are never synced — not in syncedTables
export const mySchema: DatabaseSchema = {
  version: 1,
  syncedTables: new Set(["events"]),   // missing "attachments"
  async create(db) {
    await db.exec("CREATE TABLE events (...)");
    await db.exec("CREATE TABLE attachments (...)");
  },
  ...
};

// ✅ correct
  syncedTables: new Set(["events", "attachments"]),

Table name typo in syncedTables

If a table listed in syncedTables doesn't exist after create() / migrate() runs, Backless.init() throws immediately:

Error: Failed to enable CRDT for table 'evnets'. Make sure the table is created
in schema.create() / schema.migrate() and has a non-nullable primary key.

Fix the typo or ensure create() creates the table before init returns.

schema.clear() must not start its own transaction

backless.clearAllData() wraps the entire clear operation — including the call to schema.clear(db) — in a single BEGIN/COMMIT transaction. If your clear implementation issues its own BEGIN, SQLite will throw because nested transactions are not supported via BEGIN (only savepoints are).

Wrong:

async clear(db: DB) {
  await db.exec("BEGIN");          // ❌ throws — already inside a transaction
  await db.exec("DELETE FROM todos");
  await db.exec("COMMIT");
},

Correct:

async clear(db: DB) {
  await db.exec("DELETE FROM todos");          // ✅ runs inside the outer transaction
  await db.exec("DELETE FROM todos__crsql_clock");
},

WASM loading only works with Vite

backless-core imports the cr-sqlite WASM binary using the Vite ?url suffix:

import wasmUrl from "@vlcn.io/crsqlite-wasm/crsqlite.wasm?url";

This is a Vite-specific feature — other bundlers (webpack, esbuild standalone, Rollup without the correct plugin) do not understand ?url imports and will fail at build time.

If you are not using Vite, copy node_modules/@vlcn.io/crsqlite-wasm/crsqlite.wasm to your public/static directory and pass its URL explicitly:

const backless = await Backless.init({
  schema: mySchema,
  wasmUrl: "/static/crsqlite.wasm",  // served as a static asset
});