@asaidimu/utils-database


A flexible, schema-driven document database layer for browser and Node.js with built‑in validation, migrations, transactions, and telemetry.


Table of Contents

  • Overview & Features
  • Installation & Setup
  • Usage Documentation
  • Project Architecture
  • Development & Contributing
  • Additional Information

Overview & Features

@asaidimu/utils-database provides a high‑level, storage‑agnostic document database layer that works seamlessly in both browser (IndexedDB) and Node.js (memory) environments. It combines schema validation, optimistic concurrency control, and a flexible migration system to help you manage evolving data models without losing your mind.

The library was built to solve common pain points when working with client‑side storage: schema drift, concurrent edit conflicts, lack of transactions, and poor observability. By wrapping low‑level stores with a unified Collection / Document API, you get the same developer experience whether you are persisting data locally or just mocking storage for tests.

Key Features

  • Schema‑driven validation – Define collections using SchemaDefinition (compatible with Standard Schema) and automatically validate every write.
  • Multi‑backend support – Use IndexedDB for persistent browser storage or an ephemeral in‑memory store for testing.
  • Optimistic Concurrency Control (OCC) – Every document has a $version field; updates fail with CONFLICT if the version has changed.
  • Atomic transactions – Group multiple operations across collections and commit them together.
  • Streaming migrations – Transform data in a memory‑efficient way using ReadableStream and batched writes.
  • Event system – Subscribe to database, collection, or document level events (create, update, delete, read).
  • Built‑in telemetry – Track operation duration, errors, and metadata (optional).
  • Middleware pipeline – Easily add retry logic, logging, or custom behaviour around any operation.
  • Pagination helpers – Offset and cursor‑based pagination for lists.
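
The OCC scheme can be pictured as a compare-and-increment on $version. The following standalone sketch (a plain Map standing in for the store, not the library's internals) shows why a writer holding a stale version fails with CONFLICT:

```typescript
type Doc = { $id: string; $version: number; [key: string]: unknown };

class ConflictError extends Error {
  constructor() {
    super("CONFLICT");
  }
}

// Minimal compare-and-increment update: a write only succeeds if the caller
// still holds the current $version; otherwise it lost the race and conflicts.
function occUpdate(store: Map<string, Doc>, incoming: Doc): Doc {
  const current = store.get(incoming.$id);
  if (!current) throw new Error("NOT_FOUND");
  if (current.$version !== incoming.$version) throw new ConflictError();
  const next = { ...current, ...incoming, $version: current.$version + 1 };
  store.set(next.$id, next);
  return next;
}

const docs = new Map<string, Doc>();
docs.set("u1", { $id: "u1", $version: 1, email: "[email protected]" });

const updated = occUpdate(docs, { $id: "u1", $version: 1, email: "[email protected]" });
console.log(updated.$version); // 2

// A second writer still holding version 1 now fails with CONFLICT.
try {
  occUpdate(docs, { $id: "u1", $version: 1, email: "[email protected]" });
} catch (e) {
  console.log((e as Error).message); // "CONFLICT"
}
```

The remedy on conflict is always the same: re-fetch the document (picking up the new $version) and reapply the change.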

Installation & Setup

Prerequisites

  • Node.js 18+ or a modern browser (for IndexedDB)
  • TypeScript 4.5+ (optional, but typings are included)

Install the package

npm install @asaidimu/utils-database

This package has the following peer dependencies, which you must install manually:

npm install @asaidimu/anansi @asaidimu/events

Basic Configuration

The library exports two store factories; pass one to DatabaseConnection to choose a backend:

  • createIndexedDbStore – persistent storage using the browser’s IndexedDB.
  • createEphemeralStore – in‑memory storage (useful for tests).

Example setup with IndexedDB:

import { DatabaseConnection, createIndexedDbStore } from "@asaidimu/utils-database";

const db = await DatabaseConnection(
  {
    name: "my-app-db",
    validate: true,
    enableTelemetry: true,
    predicates: {}, // custom validation predicates (optional)
  },
  createIndexedDbStore
);

To verify that everything works, try creating a simple collection:

const userSchema = {
  name: "users",
  version: "1.0.0",
  fields: {
    id: { name: "id", type: "string", required: true, unique: true },
    email: { name: "email", type: "string", required: true },
  },
  nestedSchemas: {},
};

const users = await db.createCollection(userSchema);
console.log("Collection ready");

Usage Documentation

Basic CRUD

Create, read, update, and delete documents with an easy‑to‑use proxy API.

const users = await db.collection<{ id: string; email: string }>("users");

// Create
const doc = await users.create({ id: "user-1", email: "[email protected]" });

// Read a field directly
console.log(doc.email); // "[email protected]"

// Update
await doc.update({ email: "[email protected]" });

// Read latest data from store
await doc.read();

// Delete
await doc.delete();

Documents automatically receive system properties: $id (UUID v7), $created, $updated, and $version.

Transactions

Group multiple operations across different collections into an atomic unit.

await db.transaction(async (tx) => {
  const user = await users.create({ id: "u2", email: "[email protected]" }, tx);
  const profile = await profiles.create({ userId: "u2", bio: "..." }, tx);
  // both saves happen together
});

If any operation throws, the entire transaction is rolled back.

Migrations

When you change a collection’s schema, you can migrate existing documents using a streaming, backpressure‑aware process.

await db.migrateCollection(
  "users",
  {
    changes: [
      {
        type: "addField",
        id: "isActive",
        definition: { name: "isActive", type: "boolean" },
      },
    ],
    transform: {
      forward: (doc) => ({ ...doc, isActive: true }),
      backward: (doc) => {
        const { isActive, ...rest } = doc;
        return rest;
      },
    },
    description: "Add active flag",
  },
  100 // batch size (optional)
);

The migration runs inside a single transaction, updating both the documents and the stored schema definition.

Events & Telemetry

Subscribe to database, collection, or document events.

// Database‑level
const unsubDb = await db.subscribe("collection:create", (event) => {
  console.log("New collection", event.schema.name);
});

// Collection‑level
const unsubCol = await users.subscribe("document:create", (event) => {
  console.log("Document created", event.data);
});

// Document‑level
const unsubDoc = doc.subscribe("document:update", (event) => {
  console.log("Updated at", event.timestamp);
});

// Telemetry (when enableTelemetry: true)
db.subscribe("telemetry", (event) => {
  console.log(`${event.method} took ${event.metadata.performance.durationMs}ms`);
});

Pagination & Filtering

Query documents using the @asaidimu/query filter syntax.

// Filter by field
const activeUsers = await users.filter({
  field: "isActive",
  operator: "eq",
  value: true,
});

// Find a single document
const bob = await users.find({
  field: "email",
  operator: "eq",
  value: "[email protected]",
});

// Offset pagination
const iterator = await users.list({
  type: "offset",
  offset: 0,
  limit: 10,
});
const firstPage = await iterator.next();

// Cursor pagination (more efficient for large collections)
const cursorIter = await users.list({
  type: "cursor",
  limit: 10,
  direction: "forward",
});
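
The trade-off between the two pagination styles can be sketched with plain arrays. This is an illustration of the idea only, not the library's implementation; offsetPage and cursorPage are hypothetical names:

```typescript
type Row = { id: string };

// Offset pagination: skip `offset` rows each time. The store must still walk
// past everything skipped, so deep pages get progressively more expensive.
function offsetPage(rows: Row[], offset: number, limit: number): Row[] {
  return rows.slice(offset, offset + limit);
}

// Cursor pagination: resume strictly after the last seen id, so every page
// costs roughly the same no matter how deep into the collection you are.
function cursorPage(
  rows: Row[],
  after: string | null,
  limit: number
): { page: Row[]; cursor: string | null } {
  const start = after === null ? 0 : rows.findIndex((r) => r.id > after);
  const page = start === -1 ? [] : rows.slice(start, start + limit);
  return { page, cursor: page.length ? page[page.length - 1].id : null };
}

// 25 rows with sortable ids: id-00 through id-24
const rows: Row[] = Array.from({ length: 25 }, (_, i) => ({
  id: `id-${String(i).padStart(2, "0")}`,
}));

const first = cursorPage(rows, null, 10);
const second = cursorPage(rows, first.cursor, 10);
console.log(second.page[0].id); // "id-10"
```

A null cursor signals the end of the collection, which is why cursor iteration never re-reads or skips rows even if documents are inserted between pages.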

Project Architecture

Core Components

  • DatabaseConnection – The main entry point. It creates a metadata store for schemas, sets up the event bus, and returns a Database object.
  • Store interface – Low‑level storage abstraction. Implementations exist for IndexedDB and memory. All store operations work on clones to avoid mutation.
  • Collection – A logical grouping of documents. It handles validation, indexing, and querying, and wraps each document in a Document proxy.
  • Document – Proxy that exposes both data fields and methods (update, delete, read, subscribe). Implements OCC via version checks.
  • Pipeline & Middleware – Intercepts every operation (collection or document) to add retries, telemetry, or custom logic.
  • MigrationEngine – Computes schema transformations and produces a stream of migrated documents.
  • TransactionContext – Buffers operations and commits them atomically using store‑level batching.
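
The Pipeline & Middleware component can be pictured as ordinary function composition. This standalone sketch (the types here are assumptions for illustration, not the library's actual middleware signature) shows a retry middleware wrapping a flaky operation:

```typescript
type Operation<T> = () => Promise<T>;
type Middleware = <T>(next: Operation<T>) => Operation<T>;

// Compose right-to-left so the first middleware in the list runs outermost.
function pipeline<T>(op: Operation<T>, middleware: Middleware[]): Operation<T> {
  return middleware.reduceRight((next, mw) => mw(next), op);
}

// Example middleware: retry the wrapped operation once on failure.
const retryOnce: Middleware = (next) => async () => {
  try {
    return await next();
  } catch {
    return await next();
  }
};

let attempts = 0;
const flaky: Operation<string> = async () => {
  attempts += 1;
  if (attempts < 2) throw new Error("transient");
  return "ok";
};

pipeline(flaky, [retryOnce])().then((result) => console.log(result)); // "ok"
```

Logging, metrics, and telemetry middleware follow the same shape: do something before and/or after calling `next`, then return its result.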

Data Flow

  1. A user calls collection.create() → the request goes through the middleware pipeline.
  2. The createDocument factory validates the input against the schema.
  3. A new document is added to the store and indexed by IndexManager.
  4. Events are emitted (document:create, collection:read).
  5. All write operations perform OCC: they fetch the latest version, compare, and increment.

For migrations: migrateCollection streams documents from the store, passes them through the transformation engine, and writes them back in batches – all without loading the whole collection into memory.
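
The batched streaming idea can be illustrated in isolation. The library uses ReadableStream; this sketch uses an async iterable for brevity, and migrateStream is an illustrative name rather than a library export:

```typescript
// Documents are pulled lazily, transformed one at a time, and flushed to the
// store in fixed-size batches, so memory use stays bounded regardless of
// collection size.
async function migrateStream<T, U>(
  source: AsyncIterable<T>,
  transform: (doc: T) => U,
  writeBatch: (batch: U[]) => Promise<void>,
  batchSize: number
): Promise<number> {
  let batch: U[] = [];
  let total = 0;
  for await (const doc of source) {
    batch.push(transform(doc));
    total += 1;
    if (batch.length >= batchSize) {
      await writeBatch(batch);
      batch = [];
    }
  }
  if (batch.length > 0) await writeBatch(batch); // flush the final partial batch
  return total;
}
```

Swapping the async iterable for a ReadableStream reader changes only the loop, not the batching logic.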

Extension Points

  • Custom stores – Implement the Store<T> interface and pass your own factory to DatabaseConnection.
  • Custom predicates – Provide extra validation rules via the predicates config.
  • Middleware – Add global or per‑collection middleware to the pipeline (e.g., logging, metrics, custom error handling).
  • Event listeners – React to any internal event to build audit logs, synchronisation, or analytics.
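
As a sketch of the custom-store extension point, here is a minimal in-memory backend. Note that SimpleStore below is a hypothetical shape, not the package's real Store<T> interface; consult the bundled typings for the actual contract before implementing a backend:

```typescript
// Hypothetical store shape for illustration only.
interface SimpleStore<T> {
  get(id: string): Promise<T | undefined>;
  set(id: string, value: T): Promise<void>;
  delete(id: string): Promise<boolean>;
  list(): Promise<T[]>;
}

// An in-memory implementation that hands out clones, so callers can never
// mutate the stored copy by accident (the pattern the built-in stores follow).
// JSON round-tripping is a sufficient clone for plain JSON documents.
function createMemoryStore<T>(): SimpleStore<T> {
  const data = new Map<string, T>();
  const clone = (v: T): T => JSON.parse(JSON.stringify(v)) as T;
  return {
    async get(id) {
      const v = data.get(id);
      return v === undefined ? undefined : clone(v);
    },
    async set(id, value) {
      data.set(id, clone(value));
    },
    async delete(id) {
      return data.delete(id);
    },
    async list() {
      return [...data.values()].map(clone);
    },
  };
}
```

Any real backend (SQLite, HTTP, etc.) would implement the same contract and be passed as the factory argument to DatabaseConnection, and it should pass the same conformance suite as the built-in stores.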

Development & Contributing

Development Setup

  1. Clone the repository:
    git clone https://github.com/asaidimu/erp-utils.git
    cd erp-utils/src/database
  2. Install dependencies:
    npm install
  3. Build the package:
    npm run build   # if defined, otherwise use tsc

Available Scripts

| Command              | Description                                       |
| -------------------- | ------------------------------------------------- |
| npm test             | Run tests once (Vitest)                           |
| npm run test:watch   | Run tests in watch mode                           |
| npm run test:browser | Run tests in a real browser (Vitest browser mode) |

Testing

The library uses Vitest with fake-indexeddb to simulate IndexedDB in Node. All store implementations must pass the same conformance test suite (testStoreImplementation). When adding a new feature, please include unit tests and ensure the existing tests pass.

Contributing Guidelines

  • Fork the repository and create a feature branch.
  • Follow the existing code style (Prettier, ESLint).
  • Write clear commit messages following Conventional Commits.
  • Open a Pull Request with a description of the changes and why they are needed.
  • Ensure all tests pass and coverage does not decrease.

Issue Reporting

Use the GitHub issue tracker to report bugs or request features. Please include:

  • A minimal code reproduction (if possible).
  • Expected vs. actual behaviour.
  • Environment (Node version, browser, package versions).

Additional Information

Troubleshooting

| Error              | Likely cause & solution                                           |
| ------------------ | ----------------------------------------------------------------- |
| SCHEMA_NOT_FOUND   | The collection was not created. Call createCollection first.      |
| CONFLICT           | Another operation updated the document. Re‑fetch and retry.       |
| TRANSACTION_FAILED | One of the batched writes failed. Check individual operations.    |
| INVALID_DATA       | The document does not match the schema. Review validation errors. |
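
The usual remedy for CONFLICT, re-fetch and retry, can be wrapped in a small generic helper. This is a sketch: the assumption that conflicts surface as an Error whose message is "CONFLICT" may not match the library's real error shape, so adapt the check accordingly:

```typescript
// Retry an optimistic write a bounded number of times. `load` re-fetches the
// latest document and `attempt` tries the write; a CONFLICT error (assumed
// shape) triggers another load-then-attempt round with fresh data.
async function withConflictRetry<T, R>(
  load: () => Promise<T>,
  attempt: (latest: T) => Promise<R>,
  maxRetries = 3
): Promise<R> {
  let lastError: unknown;
  for (let i = 0; i <= maxRetries; i++) {
    const latest = await load();
    try {
      return await attempt(latest);
    } catch (e) {
      if (!(e instanceof Error) || e.message !== "CONFLICT") throw e;
      lastError = e; // stale version: loop re-fetches and tries again
    }
  }
  throw lastError;
}
```

Non-conflict errors are rethrown immediately, since retrying cannot fix an INVALID_DATA or SCHEMA_NOT_FOUND failure.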

FAQ

Q: Can I use this in a Node.js backend?
A: Yes. Use createEphemeralStore (in‑memory) or a future persistent backend (e.g., SQLite); the current IndexedDB store only works in browsers.

Q: How do migrations work with IndexedDB?
A: Migrations only transform data and update the schema metadata. Structural changes (e.g., creating new indexes) still require a database version upgrade (onupgradeneeded). The library does not automate that part yet.

Q: Does telemetry impact performance?
A: Telemetry adds minimal overhead – a few timestamp reads and an event emission. You can disable it with enableTelemetry: false.

Q: Can I use my own ID generator instead of UUID v7?
A: Yes – provide a document with your own $id field. The library will not overwrite it.

Changelog

See the CHANGELOG.md for version history.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Acknowledgments

Built with ❤️ using: