
ReIndexed

A type-safe IndexedDB ORM for ReScript with zero runtime dependencies. ReIndexed provides an elegant, functional API for working with IndexedDB, complete with migrations, transactions, and high-performance batch operations.

Features

  • 🎯 Type-safe: Full type safety with ReScript's type system
  • 🚀 High Performance: Batch operations provide 20-60× speedup for bulk writes
  • 📦 Zero Dependencies: No runtime dependencies
  • 🔄 Migrations: Versioned schema migrations with automatic upgrades
  • 🛡️ Error Handling: Both unsafe (exception-throwing) and safe (Result-based) APIs
  • 🔍 Rich Queries: Complex queries with And/Or, pagination, and cursor-based operations
  • Transactions: Automatic transaction management with full control when needed

Installation

npm install @kaiko.io/rescript-reindexed

Add to your rescript.json:

{
  "bs-dependencies": ["@kaiko.io/rescript-reindexed"]
}

Quick Start

// 1. Define your data model
module Vessel = {
  module Def = {
    type t = {
      id: string,
      name: string,
      age: int,
      flag: option<string>
    }
    type index = [#id | #name | #age | #flag]
  }
  include ReIndexed.MakeModel(Def)
}

// 2. Define your database with migrations
module Database = ReIndexed.MakeDatabase({
  let migrations = () => [
    // Migration 0: Create object store
    _ => async (db, _transaction) => {
      let vessels = db->IDB.Migration.Database.createObjectStore("vessels")
      vessels->IDB.Migration.Store.createIndex("name", "name")
      vessels->IDB.Migration.Store.createIndex("age", "age")
      Ok()
    }
  ]
})

// 3. Define your query interface
module Query = Database.MakeQuery({
  type read = {vessels: Vessel.read}
  type write = {vessels: Vessel.actions}
  type response = {vessels: array<Vessel.t>}
  type mapper = {vessels?: Vessel.t => ReIndexedCommands.command<Vessel.t>}
  type aggregator<'state> = {
    vessels?: ('state, Vessel.t) => ('state, ReIndexedCommands.flow)
  }
})

// 4. Connect and use
let main = async () => {
  // Connect to database
  switch await Database.connect("my-database") {
  | Error(e) => Console.error2("Failed to connect:", e)
  | Ok(_db) => {
      // Write data
      let _ = await {
        ...Query.makeWrite(),
        vessels: [
          Vessel.save({id: "v1", name: "Aurora", age: 5, flag: Some("us")}),
          Vessel.save({id: "v2", name: "Borealis", age: 10, flag: Some("ca")}),
        ]
      }->Query.write

      // Read data
      let {vessels} = await {
        ...Query.makeRead(),
        vessels: Vessel.All
      }->Query.read

      Console.log2("Vessels:", vessels)
    }
  }
}

Core Concepts

Models

Models define your data structures and provide type-safe operations. ReIndexed provides two model makers:

  • MakeModel: For models with string IDs
  • MakeIdModel: For models with custom ID types

// Simple model with string ID
module Staff = {
  module Def = {
    type t = {
      id: string,
      name: string,
      age: int,
      position: [#shore | #crew]
    }
    type index = [#id | #name | #age | #position]
  }
  include ReIndexed.MakeModel(Def)
}

// Model with custom ID type
module VesselId: ReIndexed.Identifier = {
  type t
  external fromString: string => t = "%identity"
  external toString: t => string = "%identity"
  external manyFromString: array<string> => array<t> = "%identity"
  external manyToString: array<t> => array<string> = "%identity"
}

module Vessel = {
  module Def = {
    type t = {id: VesselId.t, name: string, age: int}
    type index = [#id | #name | #age]
  }
  include ReIndexed.MakeIdModel(Def, VesselId)
}

Database and Migrations

Databases are created with versioned migrations. Each migration receives the database and transaction:

module Database = ReIndexed.MakeDatabase({
  let migrations = () => [
    // Migration 0: Create initial schema
    _ => async (db, _transaction) => {
      let vessels = db->IDB.Migration.Database.createObjectStore("vessels")
      vessels->IDB.Migration.Store.createIndex("name", "name")
      vessels->IDB.Migration.Store.createIndex("age", "age")
      Ok()
    },

    // Migration 1: Seed initial data
    _ => async (_db, transaction) => {
      // Use ReIndexedPatterns.MakeWriter or custom logic
      Ok()
    },

    // Migration 2: Add new index
    _ => async (_db, transaction) => {
      let vessels = transaction->IDB.Migration.Transaction.objectStore("vessels")
      vessels->IDB.Migration.Store.createIndex("flag", "flag")
      Ok()
    }
  ]
})

// Connect to database
let result = await Database.connect("my-app-db")

Query Interface

The query interface is defined for each database and provides type-safe access:

module Query = Database.MakeQuery({
  // Read specification - what you can query
  type read = {
    vessels: Vessel.read,
    staff: Staff.read
  }

  // Write specification - what you can modify
  type write = {
    vessels: Vessel.actions,
    staff: Staff.actions
  }

  // Response type - what you get back
  type response = {
    vessels: array<Vessel.t>,
    staff: array<Staff.t>
  }

  // Mapper for transformations
  type mapper = {
    vessels?: Vessel.t => ReIndexedCommands.command<Vessel.t>,
    staff?: Staff.t => ReIndexedCommands.command<Staff.t>
  }

  // Aggregator for reductions
  type aggregator<'state> = {
    vessels?: ('state, Vessel.t) => ('state, ReIndexedCommands.flow),
    staff?: ('state, Staff.t) => ('state, ReIndexedCommands.flow)
  }
})

Query Operations

Read Operations

Read data from one or more object stores:

// Read all vessels
let {vessels} = await {
  ...Query.makeRead(),
  vessels: All
}->Query.read

// Read by ID
let {vessels} = await {
  ...Query.makeRead(),
  vessels: Get("vessel-123")
}->Query.read

// Read with complex query
let {vessels} = await {
  ...Query.makeRead(),
  vessels: And(
    Gte(#age, "10"),
    Lt(#age, "20")
  )
}->Query.read

// Read from multiple stores
let {vessels, staff} = await {
  ...Query.makeRead(),
  vessels: All,
  staff: Is(#position, "crew")
}->Query.read

Write Operations

Write data to one or more object stores:

// Save records
let _ = await {
  ...Query.makeWrite(),
  vessels: [
    Vessel.save({id: "v1", name: "Aurora", age: 5, flag: None}),
    Vessel.save({id: "v2", name: "Borealis", age: 10, flag: Some("ca")})
  ]
}->Query.write

// Delete records
let _ = await {
  ...Query.makeWrite(),
  vessels: [
    Vessel.Delete("v1"),
    Vessel.Delete("v2")
  ]
}->Query.write

// Clear entire store
let _ = await {
  ...Query.makeWrite(),
  vessels: [Vessel.Clear]
}->Query.write

// Mix operations
let _ = await {
  ...Query.makeWrite(),
  vessels: [
    Vessel.Clear,
    Vessel.save(vessel1),
    Vessel.save(vessel2)
  ]
}->Query.write

Do - Combined Read/Write Operations

Execute multiple reads and writes in a single transaction:

let {vessels, staff} = await [
  // First read vessels
  Query.Read(_ => {...Query.makeRead(), vessels: All}),

  // Then write staff based on previous results
  Query.Write(response => {
    let vesselCount = response.vessels->Array.length
    {
      ...Query.makeWrite(),
      staff: [Staff.save({
        id: "s1",
        name: "Captain",
        count: vesselCount
      })]
    }
  })
]->Query.do

Console.log2("Results:", {vessels, staff})

Map - Transform and Update

Read records, transform them, and write back in a single transaction:

// Update all vessels
await {
  vessels: All,
  staff: NoOp
}->Query.map({
  vessels: vessel => Update({...vessel, age: vessel.age + 1})
})

// Conditional updates
await {
  vessels: All,
  staff: NoOp
}->Query.map({
  vessels: vessel =>
    vessel.age < 18 ? Delete : Update({...vessel, flag: Some("adult")})
})

// Transform specific records
await {
  vessels: In(["v1", "v2", "v3"]),
  staff: NoOp
}->Query.map({
  vessels: vessel => Update({...vessel, name: vessel.name ++ " (Updated)"})
})

Map commands:

  • Next - Skip this record, continue to next
  • Update(record) - Update the record and continue
  • Delete - Delete the record and continue
  • Stop - Stop iteration immediately
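
The map examples above only use Update and Delete. As a rough sketch of the other two commands (reusing the Vessel model from earlier; the "Sentinel" name and "unknown" flag are purely illustrative), Next leaves a record untouched and Stop abandons the scan:

// Backfill a missing flag, skip already-flagged vessels,
// and stop scanning once a vessel named "Sentinel" is reached
await {
  vessels: All,
  staff: NoOp
}->Query.map({
  vessels: vessel =>
    if vessel.name == "Sentinel" {
      Stop
    } else if vessel.flag->Option.isSome {
      Next
    } else {
      Update({...vessel, flag: Some("unknown")})
    }
})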

Aggregate - Reduce Over Records

Reduce records to a single value:

// Sum ages
let totalAge = await {
  vessels: All,
  staff: NoOp
}->Query.aggregate(0, {
  vessels: (sum, vessel) => (sum + vessel.age, Next)
})

// Count records
let count = await {
  vessels: Gte(#age, "18"),
  staff: NoOp
}->Query.aggregate(0, {
  vessels: (count, _vessel) => (count + 1, Next)
})

// Build custom data structure
let byFlag = await {
  vessels: All,
  staff: NoOp
}->Query.aggregate(Belt.Map.String.empty, {
  vessels: (acc, vessel) => {
    switch vessel.flag {
    | Some(flag) => (acc->Belt.Map.String.set(flag, vessel), Next)
    | None => (acc, Next)
    }
  }
})

// Early termination
let firstOld = await {
  vessels: All,
  staff: NoOp
}->Query.aggregate(None, {
  vessels: (result, vessel) =>
    vessel.age >= 100 ? (Some(vessel), Stop) : (result, Next)
})

Aggregate flow:

  • Next - Continue to next record
  • Stop - Stop iteration and return current state

Batch Operations

Execute multiple write operations in a single transaction for a 20-60× performance improvement:

// Bulk save
await [
  Query.Write({
    ...Query.makeWrite(),
    vessels: vessels->Array.map(Vessel.save),
    staff: staff->Array.map(Staff.save)
  })
]->Query.batch

// Bulk delete
await [
  Query.Write({
    ...Query.makeWrite(),
    vessels: idsToDelete->Array.map(id => Vessel.Delete(id))
  })
]->Query.batch

// Batch map operations
await [
  Query.Map(
    {...Query.makeRead(), vessels: All},
    {...Query.makeMapper(), vessels: vessel => Update({...vessel, age: vessel.age + 1})}
  )
]->Query.batch

// Mix Write and Map
await [
  Query.Write({
    ...Query.makeWrite(),
    vessels: newVessels->Array.map(Vessel.save)
  }),
  Query.Map(
    {...Query.makeRead(), vessels: In(existingIds)},
    {...Query.makeMapper(), vessels: vessel => Update({...vessel, flag: Some("updated")})}
  )
]->Query.batch

When to use batch:

  • Processing 1,000+ operations
  • Event sourcing / event replay
  • Data synchronization
  • Bulk imports/exports
  • Any write-heavy workload

Performance comparison:

// ❌ Slow: 10,000 operations = ~60 seconds
for i in 0 to events->Array.length - 1 {
  let event = events->Array.getUnsafe(i)
  await Query.write({...Query.makeWrite(), vessels: [processEvent(event)]})
}

// ✅ Fast: 10,000 operations = ~1-3 seconds
await [
  Query.Write({
    ...Query.makeWrite(),
    vessels: events->Array.map(processEvent)
  })
]->Query.batch

Query Expressions

ReIndexed supports a rich query language for filtering records:

Basic Queries

All                          // All records
Get("id")                    // Single record by ID
In(["id1", "id2", "id3"])    // Records matching IDs
NotIn(["id1", "id2"])        // Records not matching IDs
NoOp                         // No operation (skip this store)
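
For instance, NotIn and NoOp combine in a single read against the Query interface defined above (the IDs here are illustrative):

// Read every vessel except two known IDs, skipping the staff store entirely
let {vessels} = await {
  ...Query.makeRead(),
  vessels: NotIn(["v1", "v2"]),
  staff: NoOp
}->Query.read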

Index Queries

Is(#name, "Aurora")                    // Exact match
NotNull(#flag)                         // Has non-null value
Lt(#age, "18")                         // Less than
Lte(#age, "18")                        // Less than or equal
Gt(#age, "65")                         // Greater than
Gte(#age, "18")                        // Greater than or equal
Between(#age, Incl("18"), Excl("65"))  // Range (inclusive/exclusive bounds)
AnyOf(#flag, ["us", "ca", "uk"])       // Match any of values
NoneOf(#flag, ["de", "fr"])            // Match none of values
StartsWith(#name, "MS ")               // String prefix match

Aggregation Queries

Min(#age)    // Record with minimum age
Max(#age)    // Record with maximum age

Compound Queries

// AND - Records matching both conditions
And(
  Gte(#age, "18"),
  Lt(#age, "65")
)

// OR - Records matching either condition
Or(
  Is(#flag, "us"),
  Is(#flag, "ca")
)

// Complex combinations
And(
  Or(
    Is(#flag, "us"),
    Is(#flag, "ca")
  ),
  Gte(#age, "18")
)

Pagination

// Limit results
Limit(10, All)

// Skip and limit
Offset(20, Limit(10, All))

// Can be combined with any query
Limit(5, And(
  Gte(#age, "18"),
  Is(#flag, "us")
))

Error Handling

ReIndexed provides both unsafe (exception-throwing) and safe (Result-based) APIs:

Unsafe API (Default)

// Throws exception on error
let {vessels} = await Query.read({...Query.makeRead(), vessels: All})

Safe API

// Returns Result<response, exn>
switch await Query.Safe.read({...Query.makeRead(), vessels: All}) {
| Ok({vessels}) => Console.log2("Success:", vessels)
| Error(exn) => Console.error2("Failed:", exn)
}

// All operations have Safe variants
switch await Query.Safe.write({...Query.makeWrite(), vessels: [...]}) {
| Ok(_) => Console.log("Write succeeded")
| Error(exn) => Console.error2("Write failed:", exn)
}

switch await Query.Safe.map({vessels: All, staff: NoOp}, {...}) {
| Ok() => Console.log("Map succeeded")
| Error(exn) => Console.error2("Map failed:", exn)
}

switch await Query.Safe.batch([...]) {
| Ok() => Console.log("Batch succeeded")
| Error(exn) => Console.error2("Batch failed:", exn)
}

Advanced Topics

Database Connection Management

// Connect
switch await Database.connect("my-database") {
| Ok(_db) => Console.log("Connected")
| Error(e) => Console.error2("Connection failed:", e)
}

// Disconnect
Database.disconnect()

// Drop database (⚠️ destroys all data)
switch await Database.drop() {
| Ok() => Console.log("Database dropped")
| Error(e) => Console.error2("Drop failed:", e)
}

Unbound Queries

For working with multiple database instances:

module UnboundQuery = ReIndexed.MakeUnboundQuery(QueryDef)

// Use with specific database instance
let {vessels} = await UnboundQuery.read(
  db,
  {...UnboundQuery.makeRead(), vessels: All}
)

Transaction Patterns

For lower-level transaction control, use Database.Patterns:

// Alias the transaction pattern helpers exposed by the database
module Patterns = Database.Patterns

module VesselCounter = Patterns.MakeCounter({
  type t = Vessel.t
  let storeName = "vessels"
  let predicate = _ => true
})

switch Patterns.transaction(["vessels"], #readonly) {
| Error(msg) => Console.error(msg)
| Ok(transaction) => {
    let count = await VesselCounter.do(transaction)
    Console.log2("Vessel count:", count)
  }
}

Custom Identifiers

Create custom ID types with validation:

module VesselId: ReIndexed.Identifier = {
  type t

  let fromString = str => {
    // Validate format
    if !Js.Re.test_(%re("/^v-[0-9a-f]+$/"), str) {
      JsError.throwWithMessage("Invalid vessel ID format")
    }
    str->Obj.magic
  }

  external toString: t => string = "%identity"

  let manyFromString = ids => ids->Array.map(fromString)
  let manyToString = ids => ids->Array.map(toString)
}

API Stability

The ReIndexed module API is stable and follows semantic versioning. Breaking changes will only occur in major version bumps.

The ReIndexedPatterns and IDB.Migration.Utils modules are experimental and may have breaking changes in minor versions.

Performance Tips

  1. Use batch operations for bulk writes (20-60× faster)
  2. Create indexes on frequently queried fields
  3. Use In() or AnyOf() instead of Or() when possible; they use efficient cursor seeking (see the sketch after this list)
  4. Limit results early with Limit() to avoid processing unnecessary records
  5. Use aggregate instead of reading all records when you only need a computed value
  6. Use NoOp for stores you don't need to query
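
To illustrate tip 3, the two reads below return the same records; the second expresses the membership test directly on the flag index (added in the migration examples), so it can use the cursor seeking mentioned above:

// ❌ Two point queries merged with Or
let {vessels} = await {
  ...Query.makeRead(),
  vessels: Or(Is(#flag, "us"), Is(#flag, "ca"))
}->Query.read

// ✅ Single membership query over the same index
let {vessels} = await {
  ...Query.makeRead(),
  vessels: AnyOf(#flag, ["us", "ca"])
}->Query.read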

Examples

See the test suite for comprehensive examples.

Live tests: https://kaiko-systems.gitlab.io/ReIndexed/

API Reference

Core Modules

  • ReIndexed.MakeModel - Create a model with string IDs
  • ReIndexed.MakeIdModel - Create a model with custom ID types
  • ReIndexed.MakeDatabase - Create a database with migrations
  • Database.MakeQuery - Create bound query interface
  • ReIndexed.MakeUnboundQuery - Create unbound query interface

Query Operations

  • read(read) - Read from object stores
  • write(write) - Write to object stores
  • do(array<query>) - Execute combined read/write operations
  • map(read, mapper) - Transform and update records
  • aggregate(read, state, aggregator) - Reduce records to a value
  • batch(array<batchOp>) - Bulk write operations in single transaction

Commands

Map commands:

  • Next - Continue without changes
  • Update(record) - Update and continue
  • Delete - Delete and continue
  • Stop - Stop iteration

Aggregate flow:

  • Next - Continue
  • Stop - Stop and return

Write operations:

  • Save(record) - Insert or update record
  • Delete(id) - Delete by ID
  • Clear - Clear all records in store

License

MIT

Contributing

Issues and pull requests welcome at https://gitlab.com/kaiko-systems/ReIndexed