
nanodb-orm

v0.2.4

Lightweight ORM wrapper for Drizzle with auto-migrations, schema introspection, and CLI

nanodb-orm

A lightweight ORM wrapper for Drizzle ORM with automatic migrations, schema introspection, CLI tools, and support for SQLite/Turso databases.

Features

  • TypeScript First — Full type inference from schema to queries
  • Auto-Migrations — Automatically creates and updates database schemas from Drizzle tables
  • Schema Introspection — Comprehensive schema analysis and validation
  • Multi-Database — Works with local SQLite and remote Turso databases
  • Transactions — Full transaction support with automatic rollback
  • CLI Tools — Built-in commands including Drizzle Studio integration
  • Plugin System — Extensible with hooks for audit, validation, transformations
  • Minimal — ~1K lines of code, zero bloat

Installation

npm install nanodb-orm

Monorepo

This repo is managed as an npm workspace with core + plugin packages. See PACKAGES.md for layout and cross-package commands. For upcoming hardening scope, see V0_0_9_PLAN.md. Operational docs:

  • docs/PRODUCTION_CHECKLIST.md
  • docs/PLUGIN_AUTHOR_GUIDE.md
  • docs/UPGRADING.md
  • docs/RELEASE_FLOW.md

Important: All interactions should go through nanodb-orm.
Use nanodb-orm exports for schema, queries, and utilities instead of importing from Drizzle directly.

Optional plugins:

npm install @nanodb-orm/plugin-auth @nanodb-orm/plugin-timestamps @nanodb-orm/plugin-soft-delete @nanodb-orm/plugin-logger @nanodb-orm/plugin-schema-guard

Plugin preview rule: multi-plugin mode requires preview: { deterministicPluginOrdering: true }. Without this preview flag, createDatabase() accepts only one plugin.

Preview features: embedded replicas and concurrent write transaction mode (both opt-in).

Quick Start

Define Your Schema

import nanodb from 'nanodb-orm';

const users = nanodb.schema.table('users', {
  id: nanodb.schema.integer('id').primaryKey({ autoIncrement: true }),
  name: nanodb.schema.text('name').notNull(),
  email: nanodb.schema.text('email').unique().notNull(),
  age: nanodb.schema.integer('age'),
});

const posts = nanodb.schema.table('posts', {
  id: nanodb.schema.integer('id').primaryKey({ autoIncrement: true }),
  title: nanodb.schema.text('title').notNull(),
  userId: nanodb.schema.integer('userId').notNull(),
});

Create Database

const db = await nanodb.createDatabase({
  tables: { users, posts },
  seedData: {
    users: [{ name: 'Alice', email: '[email protected]', age: 28 }],
  },
});

export { db };

Query Your Data

// SELECT
const allUsers = await db.select().from(users);
const adults = await db.select().from(users).where(nanodb.query.gte(users.age, 18));

// INSERT
await db.insert(users).values({ name: 'Bob', email: '[email protected]' });

// UPDATE
await db.update(users).set({ name: 'Robert' }).where(nanodb.query.eq(users.email, '[email protected]'));

// DELETE
await db.delete(users).where(nanodb.query.eq(users.email, '[email protected]'));

Import Styles

// Default import (recommended)
import nanodb from 'nanodb-orm';

const users = nanodb.schema.table('users', { ... });
const db = await nanodb.createDatabase({ tables: { users } });
await db.select().from(users).where(nanodb.query.eq(users.id, 1));
// Named imports
import { createDatabase, schema, query } from 'nanodb-orm';
// Individual imports (tree-shakeable)
import { createDatabase, table, integer, text, eq } from 'nanodb-orm';

CLI

nanodb-orm includes a CLI for common database operations:

# Launch Drizzle Studio (visual database browser)
npx nanodb studio

# With custom port
npx nanodb studio --port 3000

# With specific database file
npx nanodb studio --db ./data/myapp.db

# Other commands
npx nanodb setup      # Initialize schema and seed data
npx nanodb reset      # Drop all tables and recreate
npx nanodb status     # Show database health and stats
npx nanodb validate   # Validate schema against database
npx nanodb validate --deep   # Deep structural drift validation
npx nanodb schema inspect    # Print full schema metadata JSON
npx nanodb schema inspect --no-deep  # Fast inspect without deep drift checks
npx nanodb help       # Show all commands

Drizzle Studio

Launch a visual database browser at https://local.drizzle.studio:

npx nanodb studio

Type Inference

nanodb-orm provides full type inference from your schema:

import { 
  createDatabase, 
  table, 
  integer, 
  text,
  type SelectModel,
  type InsertModel,
} from 'nanodb-orm';

const users = table('users', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  name: text('name').notNull(),
  email: text('email').notNull(),
  age: integer('age'),
});

// Infer types directly from your table definitions
type User = SelectModel<typeof users>;
// { id: number; name: string; email: string; age: number | null }

type NewUser = InsertModel<typeof users>;
// { id?: number; name: string; email: string; age?: number | null }

// The database is fully typed
const db = await createDatabase({ tables: { users } });

// All operations are type-safe
const allUsers: User[] = await db.select().from(users);

// Seed data is type-checked at compile time
const db2 = await createDatabase({
  tables: { users },
  seedData: {
    users: [
      { name: 'Alice', email: '[email protected]' }, // ✓ Valid
      // { name: 123 }, // ✗ TypeScript error!
    ],
  },
});

Available Type Utilities

| Type | Description |
|------|-------------|
| SelectModel<T> | Infer the row type (SELECT result) from a table |
| InsertModel<T> | Infer the insert type from a table (optional auto-generated columns) |
| SchemaModels<S> | Extract all row types from a schema object |
| SchemaInsertModels<S> | Extract all insert types from a schema |
| NanoDatabase<S> | The typed database instance |
| Schema | Type for schema objects |
| AnyTable | Type constraint for Drizzle tables |

API Reference

createDatabase(config)

Creates and initializes the database. Returns a db instance with all utilities attached.

const db = await createDatabase({
  tables: { users, posts },
  seedData: { users: [...] },  // Type-checked against schema
  migrationConfig: {
    preserveData: true,   // default: true
    autoMigrate: true,    // default: true
    dropTables: false,    // default: false
    dryRun: false,        // default: false (plan-only, no schema writes)
    backupBeforeMigrate: false, // optional backup before applying schema changes
    backupPath: '',       // optional backup file path (local SQLite)
  },
  connectionProfile: 'default', // 'default' | 'serverless' | 'long_lived'
  useSingletonConnection: false, // default: false
  plugins: [auditPlugin], // optional
  preview: {
    deterministicPluginOrdering: true, // optional preview for multi-plugin mode
    concurrentWrites: false,           // optional preview for deferred/concurrent begin behavior
  },
});

Connection profile behavior:

  • default: non-singleton by default
  • serverless: non-singleton by default, local fallback uses in-memory SQLite
  • long_lived: singleton by default when no explicit connection override is provided
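These defaults reduce to a small decision rule. A minimal sketch (illustrative only; `resolveSingleton` is a hypothetical helper, not a nanodb-orm export):

```javascript
// Resolve whether a connection profile defaults to a singleton connection,
// per the documented behavior above. An explicit override always wins.
function resolveSingleton(profile, explicitOverride) {
  if (explicitOverride !== undefined) return explicitOverride;
  // Only long_lived defaults to a shared singleton connection.
  return profile === 'long_lived';
}

console.log(resolveSingleton('default'));           // false
console.log(resolveSingleton('serverless'));        // false
console.log(resolveSingleton('long_lived'));        // true
console.log(resolveSingleton('long_lived', false)); // false (explicit override)
```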

Database Operations (from db)

// Health & Status
await db.healthCheck();    // { healthy, tables, totalRecords, ... }
await db.isReady();        // true/false
await db.sync();           // Sync with Turso (if remote)

// Reset & Seed
await db.reset();          // Drop all, recreate, reseed
await db.seed();           // Re-seed data
await db.clearData();      // Delete all data (keep tables)
await db.dispose();        // Release in-memory context and non-singleton client

Schema Introspection (from db.schema)

db.schema.tables();              // ['users', 'posts'] - typed as (keyof Schema)[]
db.schema.getTable('users');     // full structural metadata
db.schema.getColumns('users');   // ['id', 'name', 'email']
db.schema.getIndexes('users');   // legacy index names
db.schema.getIndexDetails('users');
db.schema.getIndexByName('users', 'users_email_idx');
db.schema.getIndexesByColumn('users', 'email');
db.schema.getForeignKeys('posts');
db.schema.getForeignKeyConstraints('posts');
db.schema.getUniqueConstraints('users');
db.schema.getCheckConstraints('users');
db.schema.references('users');       // { incoming, outgoing }
db.schema.references('users', 'id'); // filter references by column
await db.schema.validate();      // { isValid, missingTables, ... }
await db.schema.validate({ deep: true }); // structural drift checks
db.schema.stats();               // Full schema statistics
db.schema.relationships();       // Foreign key relationships

Migrations (from db.migrations)

await db.migrations.run();         // Run pending migrations
await db.migrations.validate();    // Validate schema vs DB
await db.migrations.checkTables(); // { users: true, posts: true }
await db.migrations.status();      // Applied migration history
await db.migrations.plan();        // Preview per-table migration actions

Data-Preserving Auto Migrations

nanodb-orm automatically migrates your schema while preserving existing data. When you change your schema (add/remove columns, change types), the migration:

  1. Creates a temporary table with the new schema
  2. Copies data from matching columns
  3. Drops the old table
  4. Renames the temp table

const db = await createDatabase({
  tables: { users, posts },
  migrationConfig: {
    autoMigrate: true,    // Enable automatic migrations (default: true)
    preserveData: true,   // Preserve existing data during migration (default: true)
    dropTables: false,    // Allow destructive drop & recreate (default: false)
  },
});
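The four rebuild steps correspond roughly to the following SQLite statement sequence. This is an illustrative sketch; `planRebuild` is a hypothetical helper, and the real statements are generated internally by the library:

```javascript
// Plan a data-preserving table rebuild: temp table with the new schema,
// copy shared columns, drop the old table, rename the temp into place.
function planRebuild(table, newColumnsSql, sharedColumns) {
  const tmp = `${table}_new`;
  const cols = sharedColumns.join(', ');
  return [
    `CREATE TABLE ${tmp} (${newColumnsSql})`,                    // 1. temp table
    `INSERT INTO ${tmp} (${cols}) SELECT ${cols} FROM ${table}`, // 2. copy data
    `DROP TABLE ${table}`,                                       // 3. drop old
    `ALTER TABLE ${tmp} RENAME TO ${table}`,                     // 4. rename
  ];
}

planRebuild('users', 'id INTEGER PRIMARY KEY, name TEXT, email TEXT', ['id', 'name'])
  .forEach((stmt) => console.log(stmt));
```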

Migration Config Options

| Option | Default | Description |
|--------|---------|-------------|
| autoMigrate | true | Master switch - enables/disables automatic schema changes |
| preserveData | true | Copy existing data to new schema during migration |
| dropTables | false | Allow destructive operations (bypasses data preservation) |
| dryRun | false | Plan migrations without applying schema changes |
| backupBeforeMigrate | false | Create backup before applying schema changes |
| backupPath | '' | Backup output path (if omitted, timestamped file is used) |

How Column Changes Work

Adding a column:
  OLD: (id, name) → NEW: (id, name, email)
  ✅ All rows preserved, 'email' starts as NULL/default

Removing a column:
  OLD: (id, name, oldField) → NEW: (id, name)
  ⚠️ Rows preserved, but 'oldField' data is LOST

Renaming a column:
  OLD: (id, username) → NEW: (id, name)
  ⚠️ Treated as remove + add, 'username' data is LOST
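The rules above all follow from one principle: only columns present in both the old and the new schema are copied. A minimal sketch (illustrative only, not library code):

```javascript
// Split a schema change into preserved, lost, and added columns
// (hypothetical helper mirroring the migration rules described above).
function diffColumns(oldCols, newCols) {
  const preserved = oldCols.filter((c) => newCols.includes(c));
  const lost = oldCols.filter((c) => !newCols.includes(c));
  const added = newCols.filter((c) => !oldCols.includes(c));
  return { preserved, lost, added };
}

// A rename looks like remove + add, so the old column's data is lost:
console.log(diffColumns(['id', 'username'], ['id', 'name']));
// { preserved: ['id'], lost: ['username'], added: ['name'] }
```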

Manual Migration

import { migrateTablePreservingData } from 'nanodb-orm';

const result = await migrateTablePreservingData(
  'users',
  'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)',
  ['id', 'name'],           // old columns
  ['id', 'name', 'email']   // new columns
);
// { rowsMigrated: 150, columnsPreserved: ['id', 'name'] }

transaction(fn) / batch(statements)

Execute operations atomically. Uses Drizzle's native transaction when available (better for Turso). For local/manual fallback paths, preview.concurrentWrites enables experimental begin behavior (BEGIN CONCURRENT with safe fallback).

import nanodb from 'nanodb-orm';

// Transaction with custom logic
const result = await nanodb.transaction(async (tx) => {
  await tx.run(nanodb.query.sql`INSERT INTO users (name) VALUES ('Alice')`);
  await tx.run(nanodb.query.sql`INSERT INTO posts (title, userId) VALUES ('Hello', 1)`);
  return { created: true };
});

if (result.success) {
  console.log(result.result); // { created: true }
} else {
  console.log('Rolled back:', result.error?.message);
}

// Batch multiple statements (simpler for bulk operations)
const batchResult = await nanodb.batch([
  nanodb.query.sql`INSERT INTO users (name) VALUES ('Bob')`,
  nanodb.query.sql`INSERT INTO users (name) VALUES ('Carol')`,
]);
// Enable experimental concurrent write begin behavior (preview)
const db = await nanodb.createDatabase({
  tables: { users },
  connection: { forceLocal: true, databasePath: './database.db' },
  preview: { concurrentWrites: true },
});

parseDbError(error, context)

Parse SQLite errors into user-friendly messages.

import { parseDbError } from 'nanodb-orm';

try {
  await db.insert(users).values({ email: '[email protected]' });
} catch (error) {
  const parsed = parseDbError(error, { table: 'users', operation: 'insert' });
  console.log(parsed.message); // "Duplicate value for unique column 'email'"
}
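Internally this is pattern matching on SQLite's error text. A sketch of the unique-constraint case (illustrative only; `friendlyMessage` is hypothetical, and the library's real parser handles many more error classes):

```javascript
// Turn a raw SQLite error message into a friendlier one, mirroring
// the parseDbError output shown above for the unique-constraint case.
function friendlyMessage(raw) {
  const unique = /UNIQUE constraint failed: \w+\.(\w+)/.exec(raw);
  if (unique) return `Duplicate value for unique column '${unique[1]}'`;
  return raw; // fall through for unrecognized errors
}

console.log(friendlyMessage('UNIQUE constraint failed: users.email'));
// "Duplicate value for unique column 'email'"
```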

Plugins

Extend nanodb-orm with hooks that run automatically on database operations.

Official Plugin Packages

Official plugins are published as separate npm packages:

npm install @nanodb-orm/plugin-auth @nanodb-orm/plugin-timestamps @nanodb-orm/plugin-soft-delete @nanodb-orm/plugin-logger @nanodb-orm/plugin-schema-guard

import { createDatabase } from 'nanodb-orm';
import { timestamps } from '@nanodb-orm/plugin-timestamps';
import { softDelete } from '@nanodb-orm/plugin-soft-delete';
import { queryLogger } from '@nanodb-orm/plugin-logger';
import { authPlugin } from '@nanodb-orm/plugin-auth';
import { schemaGuard } from '@nanodb-orm/plugin-schema-guard';

const db = await createDatabase({
  tables: { users, posts },
  plugins: [authPlugin(), schemaGuard({ runOnReady: false })],
  preview: { deterministicPluginOrdering: true },
});

To use multiple plugins, enable deterministic ordering preview:

const db = await createDatabase({
  tables: { users, posts },
  plugins: [authPlugin(), queryLogger(), softDelete(), timestamps()],
  preview: { deterministicPluginOrdering: true },
});

Ordering rules in preview mode:

  • lower plugin.order runs first (default 0)
  • ties are sorted by plugin.name (ascending)
  • final tie-breaker is declaration order in the plugins array
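These rules amount to a stable three-level sort. A minimal sketch (illustrative only, not library code):

```javascript
// Sort plugins by order (ascending), then name (ascending), then
// declaration index, matching the preview-mode rules described above.
function sortPlugins(plugins) {
  return plugins
    .map((p, i) => ({ p, i }))
    .sort((a, b) =>
      (a.p.order ?? 0) - (b.p.order ?? 0) ||
      a.p.name.localeCompare(b.p.name) ||
      a.i - b.i)
    .map(({ p }) => p);
}

const ordered = sortPlugins([
  { name: 'logger' },           // order defaults to 0
  { name: 'auth', order: -10 }, // lowest order runs first
  { name: 'audit' },            // ties with logger on order, wins on name
]);
console.log(ordered.map((p) => p.name)); // ['auth', 'audit', 'logger']
```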

| Package | Function | Description |
|---------|----------|-------------|
| @nanodb-orm/plugin-auth | authPlugin() | Auth helpers for user/account/session/verification tables |
| @nanodb-orm/plugin-timestamps | timestamps() | Auto-add createdAt / updatedAt |
| @nanodb-orm/plugin-logger | queryLogger() | Log SQL operations with timing |
| @nanodb-orm/plugin-soft-delete | softDelete() | Mark as deleted instead of remove |
| @nanodb-orm/plugin-schema-guard | schemaGuard() | Enforce required schema structures + deep drift assertions |

Preview: Embedded Replicas

This feature is built into nanodb-orm and guarded by a config switch.

import { createReplicatedDatabase } from 'nanodb-orm';
import { schema } from 'nanodb-orm';

const users = schema.table('users', {
  id: schema.text('id').primaryKey(),
  name: schema.text('name').notNull(),
});

const db = await createReplicatedDatabase({
  tables: { users },
  primary: { id: 'primary', connectionUrl: 'libsql://primary.turso.io', authToken: '...' },
  replicas: [
    { id: 'replica-1', connectionUrl: 'libsql://replica1.turso.io', authToken: '...' },
    { id: 'replica-2', connectionUrl: 'libsql://replica2.turso.io', authToken: '...' },
  ],
  preview: { enabled: true },
});

await db.write().insert(users).values({ id: 'user-123', name: 'Alice' });
const rows = await db.read().select().from(users);

Embedded replicas are the recommended first step before sharding.

Plugin Interface

import { NanoPlugin } from 'nanodb-orm';

const myPlugin: NanoPlugin = {
  name: 'my-plugin',
  order: 0, // optional execution priority (lower runs earlier)
  
  // Lifecycle
  install: (db) => db,           // Modify db instance
  onReady: (db) => {},           // Called after createDatabase
  onError: (err, op, table) => {},  // Called on hook errors
  
  // Auto hooks (run automatically)
  beforeInsert: (table, data) => data,   // Transform data before insert
  afterInsert: (table, data, result) => {},
  beforeUpdate: (table, data) => data,   // Transform data before update
  afterUpdate: (table, data, result) => {},
  beforeDelete: (table, condition) => condition,
  afterDelete: (table, condition, result) => {},
  
  // Query hooks (also auto-triggered)
  beforeQuery: (table, fields) => fields,
  afterQuery: (table, fields, result) => {},
};

Example Plugins

These are example plugins you can write yourself: nanodb-orm provides the plugin system; you supply the plugins.

// Example: Audit logging with timing
const timers = new Map<string, number>();
const auditPlugin: NanoPlugin = {
  name: 'audit',
  beforeInsert: (table) => { timers.set('op', performance.now()); console.log(`INSERT ${table}`); },
  afterInsert: () => { console.log(`  ↳ ${(performance.now() - timers.get('op')!).toFixed(1)}ms`); },
  beforeQuery: (table) => { timers.set('op', performance.now()); console.log(`SELECT ${table}`); },
  afterQuery: (t, _, rows) => { console.log(`  ↳ ${rows.length} rows in ${(performance.now() - timers.get('op')!).toFixed(1)}ms`); },
};

// Auto-generate slugs
const slugPlugin: NanoPlugin = {
  name: 'slug',
  beforeInsert: (table, data) => {
    if (table === 'posts' && data.title && !data.slug) {
      return { ...data, slug: data.title.toLowerCase().replace(/\s+/g, '-') };
    }
    return data;
  },
};

// Validation
const validationPlugin: NanoPlugin = {
  name: 'validation',
  beforeInsert: (table, data) => {
    if (table === 'users' && !data.email?.includes('@')) {
      throw new Error('Invalid email format');
    }
    return data;
  },
};

// Use a single plugin (no preview flag required for one plugin)
const db = await createDatabase({
  tables,
  plugins: [auditPlugin],
});

// Hooks run automatically
await db.insert(posts).values({ title: 'My Post' });

// Check loaded plugins
db.plugins.list(); // ['audit']

Best Practices

Recommended Project Structure

db/
├── schema.ts      # Table definitions
├── index.ts       # Database instance export
├── types.ts       # Type aliases (SelectModel, InsertModel)
├── plugins.ts     # Custom plugins
└── seeds.ts       # Seed data

1. Schema Order Matters

Define parent tables before children for correct seeding order:

// db/schema.ts
import nanodb from 'nanodb-orm';

// Parent tables first (no foreign keys)
export const users = nanodb.schema.table('users', {
  id: nanodb.schema.integer('id').primaryKey({ autoIncrement: true }),
  name: nanodb.schema.text('name').notNull(),
  email: nanodb.schema.text('email').notNull().unique(),
});

export const categories = nanodb.schema.table('categories', {
  id: nanodb.schema.integer('id').primaryKey({ autoIncrement: true }),
  name: nanodb.schema.text('name').notNull(),
});

// Child tables after (have foreign keys)
export const posts = nanodb.schema.table('posts', {
  id: nanodb.schema.integer('id').primaryKey({ autoIncrement: true }),
  title: nanodb.schema.text('title').notNull(),
  userId: nanodb.schema.integer('userId').notNull(),      // FK to users
  categoryId: nanodb.schema.integer('categoryId'),        // FK to categories
});

// Order: parents → children
export const schema = { users, categories, posts };

2. Single Database Instance

// db/index.ts
import nanodb from 'nanodb-orm';
import { schema } from './schema';
import { seedData } from './seeds';

export const db = await nanodb.createDatabase({ tables: schema, seedData });

// anywhere.ts
import { db } from './db';
import { users } from './db/schema';

const allUsers = await db.select().from(users);

3. Use Type Inference

// db/types.ts
import { type SelectModel, type InsertModel } from 'nanodb-orm';
import { users, posts } from './schema';

export type User = SelectModel<typeof users>;
export type NewUser = InsertModel<typeof users>;
export type Post = SelectModel<typeof posts>;

// Usage
async function createUser(data: NewUser): Promise<void> {
  await db.insert(users).values(data);  // TypeScript enforces shape
}

4. Prefer Grouped Imports

// ✅ Clean - default import
import nanodb from 'nanodb-orm';

const users = nanodb.schema.table('users', { ... });
await db.select().from(users).where(nanodb.query.eq(users.id, 1));

// ✅ Also good - grouped imports
import { schema, query, errors } from 'nanodb-orm';

// ❌ Avoid - many individual imports
import { table, integer, text, eq, gte, and, sql, count, ... } from 'nanodb-orm';

5. Handle Errors Gracefully

import { parseDbError, DatabaseError } from 'nanodb-orm';

try {
  await db.insert(users).values({ email: '[email protected]' });
} catch (error) {
  if (error instanceof DatabaseError) {
    // Already formatted with context
    console.log(error.message);  // "UNIQUE constraint failed: users.email"
    console.log(error.table);    // "users"
  } else {
    const parsed = parseDbError(error, { table: 'users' });
    console.log(parsed.format());
  }
}

6. Use Transactions for Atomic Operations

import nanodb from 'nanodb-orm';

const result = await nanodb.transaction(async (tx) => {
  await tx.run(nanodb.query.sql`INSERT INTO users (name) VALUES ('Alice')`);
  await tx.run(nanodb.query.sql`INSERT INTO posts (title, userId) VALUES ('Hello', 1)`);
  return { inserted: 2 };
});

if (!result.success) {
  console.log('Rolled back:', result.error?.message);
}

7. Validate on Startup (Production)

import nanodb from 'nanodb-orm';

const db = await nanodb.createDatabase({ tables: schema });

if (process.env.NODE_ENV === 'production') {
  const validation = await db.schema.validate();
  if (!validation.isValid) {
    throw new Error(`Schema mismatch: ${validation.missingTables.join(', ')}`);
  }

  const health = await db.healthCheck();
  if (!health.healthy) {
    console.warn('Database issues:', health.errors);
  }
}

8. Keep Plugins Simple

// Good: focused, single responsibility
const timestampPlugin: NanoPlugin = {
  name: 'timestamps',
  beforeInsert: (_table, data) => ({
    ...data,
    createdAt: new Date().toISOString(),
  }),
  beforeUpdate: (_table, data) => ({
    ...data,
    updatedAt: new Date().toISOString(),
  }),
};

// Avoid: complex business logic in hooks

9. Environment Configuration

# .env
TURSO_CONNECTION_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your-token
FORCE_LOCAL_DB=true           # Use local SQLite
DATABASE_PATH=./data/app.db   # Custom DB path

nanodb-orm auto-detects the right database:

  • Turso: when TURSO_* vars are set
  • Local: when FORCE_LOCAL_DB=true or no Turso config
  • Test: isolated test.db when NODE_ENV=test

Configuration

Environment Variables

# Remote Turso database
TURSO_CONNECTION_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your-token

# Force local SQLite
FORCE_LOCAL_DB=true

# Custom database path (works with FORCE_LOCAL_DB or as fallback)
DATABASE_PATH=./data/myapp.db

Database Selection

  • Turso — Used when TURSO_CONNECTION_URL and TURSO_AUTH_TOKEN are set
  • Local SQLite — Used when FORCE_LOCAL_DB=true or Turso credentials missing
  • Custom Path — Set DATABASE_PATH=./path/to/db.sqlite for custom location
  • Test Mode — Isolated test.db when NODE_ENV=test
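That selection order can be sketched as a simple function of the environment (illustrative only; the library's actual detection may differ in detail):

```javascript
// Pick a database target from environment-style config
// (hypothetical helper mirroring the rules listed above).
function selectDatabase(env) {
  if (env.NODE_ENV === 'test') return 'test';        // isolated test.db
  if (env.FORCE_LOCAL_DB === 'true') return 'local'; // forced local SQLite
  if (env.TURSO_CONNECTION_URL && env.TURSO_AUTH_TOKEN) return 'turso';
  return 'local';                                    // fallback: local SQLite
}

console.log(selectDatabase({ TURSO_CONNECTION_URL: 'libsql://db', TURSO_AUTH_TOKEN: 't' })); // 'turso'
console.log(selectDatabase({ FORCE_LOCAL_DB: 'true' })); // 'local'
console.log(selectDatabase({ NODE_ENV: 'test' }));       // 'test'
```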

Error Handling

Errors are automatically parsed into user-friendly messages:

import { DatabaseError, SchemaError, SeedError, parseDbError } from 'nanodb-orm';

try {
  await db.seed();
} catch (error) {
  if (error instanceof DatabaseError) {
    console.log(error.message);    // User-friendly message
    console.log(error.operation);  // 'seed', 'migration', etc.
    console.log(error.table);      // Table name if applicable
    console.log(error.detail);     // Additional context
  }
}

Error output is clean and actionable:

┌─ nanodb-orm error ─────────────────────────────
│ Column "email" does not exist
│ Table: users
│ Operation: seed
│ Detail: Failed seed columns: name, email
└────────────────────────────────────────────────

Exports

// Default export (recommended)
import nanodb from 'nanodb-orm';

nanodb.createDatabase  // Main entry point
nanodb.transaction     // Atomic operations
nanodb.schema          // .table, .integer, .text, .real, .blob
nanodb.query           // .eq, .gte, .and, .or, .sql, .count, ...
nanodb.errors          // .DatabaseError, .parse
nanodb.cli             // .studio, .setup, .reset, .status, .validate

// Official plugins (separate packages)
import { timestamps } from '@nanodb-orm/plugin-timestamps';
import { softDelete } from '@nanodb-orm/plugin-soft-delete';
import { queryLogger } from '@nanodb-orm/plugin-logger';
import { authPlugin } from '@nanodb-orm/plugin-auth';
import { schemaGuard } from '@nanodb-orm/plugin-schema-guard';

// Types (named imports)
import { type SelectModel, type InsertModel, type NanoPlugin } from 'nanodb-orm';

nanodb-orm vs Drizzle + Turso (Direct)

| Feature | nanodb-orm | Drizzle + Turso |
|---------|------------|-----------------|
| Setup | One-liner: createDatabase({ tables }) | Manual: create client, drizzle, manage connection |
| Migrations | Automatic on startup | Manual: drizzle-kit push/migrate |
| Seeding | Built-in with seedData | Write seed scripts |
| Type Safety | ✅ Full (same as Drizzle) | ✅ Full |
| Query API | ✅ Same as Drizzle | ✅ Native Drizzle |
| Plugins/Hooks | ✅ beforeInsert, afterQuery, etc. | ❌ None |
| Schema Introspection | ✅ db.schema.tables() | ❌ Manual |
| Health Checks | ✅ db.healthCheck() | ❌ Manual |
| CLI | npx nanodb studio/status/validate | npx drizzle-kit studio only |
| Error Parsing | User-friendly messages | Raw SQLite errors |
| Connection | Auto-detects Turso vs local | Manual configuration |

When to Use What

| Use Case | Recommendation |
|----------|----------------|
| Quick prototyping | nanodb-orm |
| Need plugins/hooks | nanodb-orm |
| Want auto-migrations | nanodb-orm |
| New SQLite/Turso project | nanodb-orm |
| Maximum control | Drizzle directly |
| Complex migration strategies | Drizzle + drizzle-kit |
| Existing Drizzle project | Keep Drizzle |

Comparison

// nanodb-orm: a few lines
import nanodb from 'nanodb-orm';

const users = nanodb.schema.table('users', { id: nanodb.schema.integer('id').primaryKey() });
const db = await nanodb.createDatabase({ tables: { users }, seedData: { users: [{ id: 1 }] } });
// Ready - tables created, seeded

// Drizzle + Turso: more setup
import { drizzle } from 'drizzle-orm/libsql';
import { createClient } from '@libsql/client';
import { sqliteTable, integer } from 'drizzle-orm/sqlite-core';
import { migrate } from 'drizzle-orm/libsql/migrator';

const users = sqliteTable('users', { id: integer('id').primaryKey() });
const client = createClient({ url: process.env.TURSO_CONNECTION_URL!, authToken: process.env.TURSO_AUTH_TOKEN! });
const db = drizzle(client);
await migrate(db, { migrationsFolder: './drizzle' });
await db.insert(users).values([{ id: 1 }]);

nanodb-orm is a convenience layer — it uses Drizzle under the hood and passes through all queries unchanged. You get Drizzle's full type safety plus automatic setup, plugins, and utilities.

License

MIT © Easy-Deploy-Dev