
masterrecord v0.2.36

An object-relational mapper (ORM) for the Master framework. MasterRecord connects classes to relational database tables to establish a database with almost zero configuration.

MasterRecord

MasterRecord is a lightweight, code-first ORM and migration tool for Node.js with a fluent query API. It lets you define entities in JavaScript, generate migrations, and query with expressive syntax.

  • Supported databases: MySQL, SQLite
  • Synchronous API by default (no await needed)
  • Built-in CLI for migrations and seeding
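
For a quick taste of the query API (covered in detail under Query Method Chaining below), here is a minimal sketch; AppContext and Task are placeholder names for your own Context and entity:

const AppContext = require('./AppContext'); // placeholder path

const context = new AppContext();
// all pending tasks, newest first
const pending = context.Task
  .where(t => t.status == 'pending')
  .orderByDescending(t => t.created_at)
  .toList();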

Installation

# npm
npm install -g masterrecord

# pnpm
pnpm add -g masterrecord

# yarn
yarn global add masterrecord

Quick Start

  1. Create an environment config file (see Environment below), then enable migrations:
masterrecord enable-migrations AppContext
  2. Make or change your entities, then create a migration file:
masterrecord add-migration Init AppContext
  3. Apply the migration to your database:
master=development masterrecord update-database AppContext
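
The commands above assume a Context file exists in your project root. A minimal AppContext sketch, in the same shape used in the Table Prefixes section further down (the model path is a placeholder):

// AppContext.js
var masterrecord = require('masterrecord');
const User = require('./models/User'); // placeholder entity

class AppContext extends masterrecord.context {
    constructor() {
        super();
        // folder containing your env.<env>.json files
        this.env('config/environments');
        // register entities so they are tracked by queries and migrations
        this.dbset(User);
    }
}

module.exports = AppContext;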

Enable migrations (one-time per Context)

  • Run from the project root where your Context file lives. Use the Context file name (without extension) as the argument.
master=development masterrecord enable-migrations AppContext

This creates db/migrations/<context>_contextSnapShot.json and the db/migrations directory.

Create a migration

  • After you change your entity models, generate a migration file:
master=development masterrecord add-migration <MigrationName> AppContext

This writes a new file to db/migrations/<timestamp>_<MigrationName>_migration.js.

Apply migrations to the database

  • Apply only the latest pending migration:
master=development masterrecord update-database AppContext
  • Apply all migrations from the beginning (useful for a clean DB):
master=development masterrecord update-database-restart AppContext
  • List migration files (debug/inspection):
master=development masterrecord get-migrations AppContext

Notes:

  • The CLI searches for <context>_contextSnapShot.json under db/migrations relative to your current working directory.
  • For MySQL, ensure your credentials allow DDL. For SQLite, the data directory is created if missing.

Updating the running server

General flow to roll out schema changes:

  • Stop the server or put it into maintenance mode (optional but recommended for non-backward-compatible changes).
  • Pull the latest code (containing updated models and generated migration files).
  • Run migrations against the target environment:
master=production masterrecord update-database AppContext
  • Restart your server/process manager (e.g., pm2 restart <app>, docker compose up -d, or your platform’s restart command).

Backward-compatible rollout tip:

  • If possible, deploy additive changes first (new tables/columns), release app code that begins using them, then later clean up/removal migrations.

Troubleshooting

  • Cannot find Context file: ensure you run commands from the app root and pass the Context file name (without extension) that you used when defining your class. Matching against the snapshot is case-insensitive, but supply the same name you used.
  • Cannot connect to DB: confirm master=<env> is set and env.<env>.json exists with correct credentials and paths.
  • MySQL type mismatches: the migration engine maps MasterRecord types to SQL types; verify your entity field type values are correct.

Recent improvements (2025-10)

  • Type validation and coercion (Entity Framework-style):
    • INSERT and UPDATE operations now validate field types against entity definitions.
    • Auto-converts compatible types with warnings (e.g., string "4" → integer 4).
    • Throws clear errors for incompatible types with detailed context.
    • Prevents silent failures where fields were skipped due to type mismatches.
    • See Type Validation section below for details.
  • Query language and SQL engines:
    • Correct parsing of multi-char operators (>=, <=, ===, !==) and spaced logical operators.
    • Support for grouped OR conditions rendered as parenthesized OR in WHERE across SQLite/MySQL.
    • Resilient fallback for partially parsed expressions.
  • Relationships:
    • hasManyThrough supported in insert and delete cascades.
  • Environment file discovery:
    • Context now walks up directories to find config/environments/env.<env>.json; fixed error throwing.
  • Migrations (DDL generation):
    • Default values emitted for SQLite/MySQL (including boolean coercion).
    • CREATE TABLE IF NOT EXISTS to avoid failures when rerunning.
    • Table introspection added; existing tables are synced: missing columns are added, MySQL applies ALTER ... MODIFY for NULL/DEFAULT changes, SQLite rebuilds table when necessary.
  • Migration API additions in schema.js:
    • renameColumn({ tableName, name, newName }) implemented for SQLite/MySQL.
    • seed(tableName, rows) implemented for bulk/single inserts with safe quoting.

Using renameColumn and seed in migrations

Basic migration skeleton (generated by CLI):

var masterrecord = require('masterrecord');

class AddSettings extends masterrecord.schema { 
  constructor(context){ super(context); }

  up(table){
    this.init(table);
    // Add a new table
    this.createTable(table.MailSettings);

    // Rename a column on an existing table
    this.renameColumn({ tableName: 'MailSettings', name: 'from_email', newName: 'reply_to' });

    // Seed initial data (single row)
    this.seed('MailSettings', {
      from_name: 'System',
      reply_to: '[email protected]',
      return_path_matches_from: 0,
      weekly_summary_enabled: 0,
      created_at: Date.now(),
      updated_at: Date.now()
    });

    // Seed multiple rows
    this.seed('MailSettings', [
      { from_name: 'Support', reply_to: '[email protected]', created_at: Date.now(), updated_at: Date.now() },
      { from_name: 'Marketing', reply_to: '[email protected]', created_at: Date.now(), updated_at: Date.now() }
    ]);
  }

  down(table){
    this.init(table);
    // Revert the rename
    this.renameColumn({ tableName: 'MailSettings', name: 'reply_to', newName: 'from_email' });

    // Optionally clean up seeded rows
    // this.context._execute("DELETE FROM MailSettings WHERE reply_to IN ('[email protected]','[email protected]','[email protected]')");

    // Drop table if that was part of up
    // this.dropTable(table.MailSettings);
  }
}
module.exports = AddSettings;

Notes:

  • renameColumn expects an object: { tableName, name, newName } and works in both SQLite and MySQL.
  • seed(tableName, rows) accepts either a single object ({ col: value, ... }) or an array of objects ([{...}, {...}]). Values are auto-quoted; booleans become 1/0.
  • When a table already exists, update-database will sync schema:
    • Add missing columns.
    • MySQL: adjust default/nullability via ALTER ... MODIFY.
    • SQLite: rebuilds the table when nullability/default/type changes require it.
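
For example, a boolean value in a seed row (inside up()) is written out as 1/0 in the generated INSERT:

this.seed('MailSettings', {
  from_name: 'System',
  weekly_summary_enabled: true // stored as 1
});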

Tips

  • Prefer additive changes (add columns) before destructive changes (drops/renames) to minimize downtime.
  • For large SQLite tables, a rebuild copies data; consider maintenance windows.
  • Use master=development masterrecord get-migrations AppContext to inspect migration order.

Type Validation

MasterRecord now validates and coerces field types during INSERT and UPDATE operations, similar to Entity Framework. This prevents silent failures where fields were skipped due to type mismatches.

How it works

When you assign a value to an entity field, MasterRecord:

  1. Validates the value against the field's type definition
  2. Auto-converts compatible types with console warnings
  3. Throws clear errors for incompatible types

Type conversion rules

Integer fields (db.integer())

  • Accepts: integer numbers
  • ⚠️ Auto-converts with warning:
    • Float → integer (rounds: 3.7 → 4)
    • Valid string → integer ("42" → 42)
    • Boolean → integer (true → 1, false → 0)
  • Throws error: invalid strings ("abc")

String fields (db.string())

  • Accepts: strings
  • ⚠️ Auto-converts with warning:
    • Number → string (42 → "42")
    • Boolean → string (true → "true")
  • Throws error: objects, arrays

Boolean fields (db.boolean())

  • Accepts: booleans
  • ⚠️ Auto-converts with warning:
    • Number → boolean (0 → false, others → true)
    • String → boolean ("true"/"1"/"yes" → true, "false"/"0"/"no"/"" → false)
  • Throws error: invalid strings, objects

Time fields (db.time(), timestamps)

  • Accepts: strings or numbers
  • Throws error: objects, booleans

Example warnings and errors

Auto-conversion warning (non-breaking):

const chunk = new DocumentChunk();
chunk.document_id = "4"; // string assigned to integer field
context.DocumentChunk.add(chunk);
context.saveChanges();

Console output:

⚠️  Field DocumentChunk.document_id: Auto-converting string "4" to integer 4

Type mismatch error (breaks execution):

const chunk = new DocumentChunk();
chunk.document_id = "invalid"; // non-numeric string
context.DocumentChunk.add(chunk);
context.saveChanges(); // throws error

Error thrown:

INSERT failed: Type mismatch for DocumentChunk.document_id: Expected integer, got string "invalid" which cannot be converted to a number
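
If you prefer to handle the mismatch rather than let it bubble up, wrapping saveChanges() in a try/catch is enough (sketch):

try {
  context.saveChanges();
} catch (err) {
  // the message includes entity, field, expected type, and the offending value
  console.error('Save failed:', err.message);
  // re-throw, return an error response, or fix the value and retry as appropriate
}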

Migration from older versions

If your code relies on implicit type coercion that was previously silent:

  • No breaking changes: Compatible types are still auto-converted
  • New warnings: You'll see console warnings for auto-conversions
  • New errors: Incompatible types that were silently skipped now throw errors

Recommendation: Review warnings and fix type mismatches in your code for cleaner, more predictable behavior.
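
For example, converting explicitly at the assignment site keeps the value correct and silences the warning (documentId is a hypothetical string input, e.g. a route parameter):

// instead of relying on auto-conversion of "4" → 4
chunk.document_id = parseInt(documentId, 10);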

Benefits

  1. No more silent field skipping: Previously, if you assigned a string to an integer field, the ORM would silently skip it in the INSERT/UPDATE statement. Now you get immediate feedback.

  2. Clear error messages: Errors include entity name, field name, expected type, actual type, and the problematic value.

  3. Predictable behavior: Auto-conversions match common patterns (e.g., database IDs returned as strings from some drivers are converted to integers).

  4. Better debugging: Type issues are caught at save time, not when you query the data later.

Multi-context (multi-database) projects

When your project defines multiple Context files (e.g., userContext.js, modelContext.js, mailContext.js, chatContext.js) across different packages or feature directories, MasterRecord can auto-detect and operate on all of them.

New bulk commands

  • enable-migrations-all (alias: ema)
    • Scans the project for MasterRecord Context files (heuristic) and enables migrations for each by writing a portable snapshot next to the context at <ContextDir>/db/migrations/<context>_contextSnapShot.json.
  • add-migration-all (alias: ama)
    • Creates a migration named <Name> (e.g., Init) for every detected context that has a snapshot. Migrations are written into each context’s own migrations folder.
  • update-database-all (alias: uda)
    • Applies the latest migration for every detected context with migrations.
  • update-database-down (alias: udd)
    • Runs the latest migration’s down() for the specified context.
  • update-database-target (alias: udt)
    • Rolls back migrations newer than the given migration file within that context’s migrations folder.
  • ensure-database (alias: ed)
    • For MySQL contexts, ensures the database exists (like EF’s Database.EnsureCreated). Auto-detects connection info from your Context env settings.

Portable snapshots (no hardcoded absolute paths)

Snapshots are written with relative paths, so moving/renaming the project root does not break CLI resolution:

  • contextLocation: path from the migrations folder to the Context file
  • migrationFolder: . (the snapshot resides in the migrations folder)
  • snapShotLocation: the snapshot filename
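
For illustration, the path entries of a userContext_contextSnapShot.json could look like the following (real snapshots also record your entity schema; the exact layout here is an assumption):

{
  "contextLocation": "../../userContext.js",
  "migrationFolder": ".",
  "snapShotLocation": "userContext_contextSnapShot.json"
}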

Typical flow for multiple contexts

  1. Enable migrations everywhere:
# macOS/Linux
master=development masterrecord enable-migrations-all

# Windows PowerShell
$env:master = 'development'
masterrecord enable-migrations-all
  2. Create an initial migration for all contexts:
# macOS/Linux
master=development masterrecord add-migration-all Init

# Windows PowerShell
$env:master = 'development'
masterrecord add-migration-all Init
  3. Apply migrations everywhere:
# macOS/Linux
master=development masterrecord update-database-all

# Windows PowerShell
$env:master = 'development'
masterrecord update-database-all
  4. Inspect migrations for a specific context:
# macOS/Linux
master=development masterrecord get-migrations userContext

# Windows PowerShell
$env:master = 'development'
masterrecord get-migrations userContext
  5. Roll back the latest migration for a specific context:
# macOS/Linux
master=development masterrecord update-database-down userContext

# Windows PowerShell
$env:master = 'development'
masterrecord update-database-down userContext

Environment selection (cross-platform)

  • macOS/Linux prefix: master=development ... or NODE_ENV=development ...
  • Windows PowerShell:
$env:master = 'development'
masterrecord update-database-all
  • Windows cmd.exe:
set "master=development" && masterrecord update-database-all

Notes and tips

  • Each Context should define its own env settings and tables; update-database-all operates context-by-context so separate databases are handled cleanly.
  • For SQLite contexts, the connection path will be created if the directory does not exist.
  • For MySQL contexts, ensure-database <ContextName> can create the DB (permissions required) before migrations run.
  • If you rename/move the project root, re-run enable-migrations-all or any single-context command once; snapshots use relative paths and will continue working.
  • If update-database-all reports "no migration files found" for a context, run get-migrations <ContextName>. If empty, create a migration with add-migration <Name> <ContextName> or use add-migration-all <Name>.

Table Prefixes

MasterRecord supports automatic table prefixing for both MySQL and SQLite databases. This is useful for:

  • Multi-tenant applications sharing a single database
  • Plugin systems where each plugin needs isolated tables
  • Avoiding table name conflicts in shared database environments

Using tablePrefix

Set the tablePrefix property in your Context constructor before calling dbset():

var masterrecord = require('masterrecord');
const User = require('./models/User');
const Post = require('./models/Post');

class AppContext extends masterrecord.context {
    constructor() {
        super();

        // Set table prefix
        this.tablePrefix = 'myapp_';

        // Configure environment
        this.env('config/environments');

        // Register models - prefix will be automatically applied
        this.dbset(User);      // Creates table: myapp_User
        this.dbset(Post);      // Creates table: myapp_Post
    }
}

module.exports = AppContext;

How it works

When tablePrefix is set:

  1. The prefix is automatically prepended to all table names during dbset() registration
  2. Works with both the default table name (model class name) and custom names
  3. Applies to all database operations: queries, inserts, updates, deletes, and migrations
  4. Supports both MySQL and SQLite databases

Example with custom table names

class AppContext extends masterrecord.context {
    constructor() {
        super();
        this.tablePrefix = 'myapp_';
        this.env('config/environments');

        // Custom table name + prefix
        this.dbset(User, 'users');       // Creates table: myapp_users
        this.dbset(Post, 'blog_posts');  // Creates table: myapp_blog_posts
    }
}

Plugin example

Perfect for plugin systems where each plugin needs isolated tables:

// RAG Plugin Context
class RagContext extends masterrecord.context {
    constructor() {
        super();

        // Prefix all RAG plugin tables
        this.tablePrefix = 'rag_';

        this.env(path.join(__dirname, '../../config/environments'));

        this.dbset(Document);       // Creates table: rag_Document
        this.dbset(DocumentChunk);  // Creates table: rag_DocumentChunk
        this.dbset(Settings);       // Creates table: rag_Settings
    }
}

Migrations with table prefixes

Table prefixes work seamlessly with migrations:

# Enable migrations (prefix is read from your Context)
master=development masterrecord enable-migrations AppContext

# Create migration (tables will have prefix in migration file)
master=development masterrecord add-migration Init AppContext

# Apply migration (creates prefixed tables)
master=development masterrecord update-database AppContext

The generated migration files will reference the prefixed table names, so you don't need to manually add prefixes in your migration code.

Notes

  • The prefix is applied during Context construction, so it must be set before dbset() calls
  • The prefix is stored in migration snapshots, ensuring consistency across migration operations
  • Empty strings or non-string values are ignored (no prefix applied)
  • Both MySQL and SQLite fully support table prefixes with no special configuration needed
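
In application code the prefix stays invisible: you keep querying through the model name on the context, and the generated SQL targets the prefixed table (sketch using the AppContext defined above; role is a placeholder field):

const context = new AppContext();

// the query never mentions the prefix, but the SQL hits myapp_User
const admins = context.User
  .where(u => u.role == $$, 'admin')
  .toList();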

Query Method Chaining

MasterRecord supports fluent query chaining for building complex queries. You can chain multiple where(), orderBy(), skip(), take(), and other methods together to build your query dynamically.

Chaining Multiple where() Clauses

Multiple where() calls are automatically combined with AND logic:

// Build query dynamically
let query = context.QaTask;

// Add first condition
query = query.where(t => t.assigned_worker_id == $$, currentUser.id);

// Add second condition (combines with AND)
query = query.where(t => t.status == $$, 'pending');

// Add ordering and execute
let tasks = query.orderBy(t => t.created_at).toList();

Generated SQL:

SELECT * FROM QaTask AS t
WHERE t.assigned_worker_id = 123
  AND t.status = 'pending'
ORDER BY t.created_at ASC

Dynamic Query Building

This is especially useful for building queries based on conditional logic:

let query = context.User;

// Always apply base filter
query = query.where(u => u.is_active == true);

// Conditionally add filters
if (searchTerm) {
    query = query.where(u => u.name.like($$), `%${searchTerm}%`);
}

if (roleFilter) {
    query = query.where(u => u.role == $$, roleFilter);
}

// Add pagination
query = query
    .orderBy(u => u.created_at)
    .skip(offset)
    .take(limit);

// Execute query
let users = query.toList();

Chainable Query Methods

All of these methods return the query builder and can be chained:

  • where(query, ...args) - Add WHERE condition (multiple calls combine with AND)
  • and(query, ...args) - Explicitly add AND condition (alternative to chaining where)
  • orderBy(query, ...args) - Sort ascending
  • orderByDescending(query, ...args) - Sort descending
  • skip(number) - Skip N records (pagination offset)
  • take(number) - Limit to N records (pagination limit)
  • select(query, ...args) - Select specific fields
  • include(query, ...args) - Eager load relationships
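
select() and include() are not demonstrated elsewhere in this README; by analogy with the other lambda-based methods, a plausible (but assumed, not verified) usage looks like this:

// eager-load a related entity while filtering; the 'worker' relationship name is hypothetical
let tasks = context.QaTask
    .where(t => t.status == $$, 'pending')
    .include(t => t.worker)
    .toList();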

Combining with OR Logic

For OR conditions within a single where clause, use the || operator:

// Single where with OR
let tasks = context.Task
    .where(t => t.status == 'pending' || t.status == 'in_progress')
    .toList();

Generated SQL:

SELECT * FROM Task AS t
WHERE (t.status = 'pending' OR t.status = 'in_progress')

Complex Example

// Complex query with multiple conditions
let query = context.Order;

// Base filters
query = query.where(o => o.customer_id == $$, customerId);
query = query.where(o => o.status == $$ || o.status == $$, 'pending', 'processing');

// Date range filter
if (startDate) {
    query = query.where(o => o.created_at >= $$, startDate);
}
if (endDate) {
    query = query.where(o => o.created_at <= $$, endDate);
}

// Sorting and pagination
let orders = query
    .orderByDescending(o => o.created_at)
    .skip(page * pageSize)
    .take(pageSize)
    .toList();

Important Notes

  • Each where() call adds an AND condition to the existing WHERE clause
  • Conditions are combined in the order they're added
  • The query is only executed when you call a terminal method: toList(), single(), count()
  • Query builders are reusable - calling toList() resets the builder for the next query
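
As a closing example, the terminal methods compose naturally into a small paging helper (sketch; QaTask and assigned_worker_id come from the earlier examples):

function getTaskPage(context, workerId, page, pageSize) {
    // total matching rows; count() is a terminal method, so this executes immediately
    const total = context.QaTask
        .where(t => t.assigned_worker_id == $$, workerId)
        .count();

    // one page of rows, newest first
    const rows = context.QaTask
        .where(t => t.assigned_worker_id == $$, workerId)
        .orderByDescending(t => t.created_at)
        .skip(page * pageSize)
        .take(pageSize)
        .toList();

    return { rows, total, pages: Math.ceil(total / pageSize) };
}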