turbine-orm
v0.9.2
Postgres-native TypeScript ORM — runs on Neon, Vercel Postgres, Cloudflare, Supabase. Streaming cursors, typed errors, single-query nested relations. 1 dependency, ~110KB
turbine-orm
Postgres ORM built for the edge. One runtime dependency. Built-in read-only Studio. Code-first and DB-first schema workflows in the same CLI. Runs on Neon, Vercel Postgres, Cloudflare Hyperdrive, Supabase, and any pg-compatible driver.
```sh
npm install turbine-orm
```
Why Turbine?
The elevator pitch: a Prisma-like DX without Prisma's engine, a Drizzle-class runtime footprint, and the only built-in read-only Studio in the TS ORM ecosystem. Four things no other ORM bundles together:

- One runtime dependency (`pg`). No engine binary, no WASM adapter, no code-generation DSL. ~110 KB on npm; 5 KB for the edge entry.
- First-class edge support. `turbineHttp(pool, schema)` — one import swap — and the full API runs on Neon, Vercel Postgres, Cloudflare Hyperdrive, and Supabase. No extra adapter package, no WASM bundle.
- Built-in read-only Studio. `npx turbine studio` spins up a loopback-bound, token-authed web UI with `BEGIN READ ONLY` plus a statement-stacking guard. DBA-approvable. No separate install.
- Code-first and DB-first in the same CLI. `defineSchema()` in TypeScript or `npx turbine pull` against a live database — same generated client, same migrations runner. Prisma forces the DSL; Drizzle is code-only.
Plus the architectural bits you'd expect: pipeline batching (N queries in one round-trip via the pg extended-query protocol), `json_agg`-based nested relation loading (the same technique Drizzle and Prisma 7 use — no N+1), deep `with` type inference, streaming cursors, a typed error hierarchy with `isRetryable` discriminants, and middleware.
Benchmarks
Tested against Prisma 7.6 (adapter-pg, relationJoins preview on) and Drizzle 0.45 (relational queries) on a Neon PostgreSQL database (pooled endpoint, US-East, PostgreSQL 17.8). 100 iterations, 20 warmup, Node v22. Same schema, same data (1K users, 10K posts, 50K comments), same connection pool config.
| Scenario | Turbine | Prisma 7 | Drizzle v2 |
|---|---|---|---|
| findMany — 100 users (flat) | 51.97 ms | 52.90 ms | 53.51 ms |
| findMany — 50 users + posts (L2) | 55.84 ms | 56.10 ms | 88.80 ms |
| findMany — 10 users → posts → comments (L3) | 52.77 ms | 59.35 ms | 52.38 ms |
| findUnique — single user by PK | 47.66 ms | 52.15 ms | 47.78 ms |
| findUnique — user + posts + comments (L3) | 51.71 ms | 54.42 ms | 52.47 ms |
| count — all users | 44.57 ms | 47.54 ms | 46.75 ms |
| stream — iterate 50K rows (batch 1000) | 3,207 ms | 3,099 ms | 4,620 ms |
| atomic increment — view_count + 1 | 49.76 ms | 49.09 ms | 46.25 ms |
| pipeline — 5-query batch | 318 ms | 327 ms | 316 ms |
Against a real pooled database, most single-query scenarios are within noise — network round-trip to Neon is ~33–40 ms, which swamps per-query CPU overhead. But a few results stand out:
- L2 nested reads. Turbine and Prisma are neck-and-neck (~56 ms), while Drizzle is 1.59× slower (89 ms) on the 50-user + posts scenario. Turbine's `json_agg` approach and SQL template caching pay off here.
- Streaming 50K rows. Turbine's optimized streaming (speculative first fetch + batch size 1000) matches Prisma at ~3.1–3.2 s. Drizzle's keyset pagination is 1.49× slower at 4.6 s. Turbine's cursor still gives you correctness on any `orderBy` and clean early-break semantics.
- Pipeline batching puts 5 independent queries through a single round-trip using the Postgres extended-query pipeline protocol — all three ORMs are tied here since each runs 5 queries sequentially in a transaction.
Beyond the numbers, Turbine's real strengths are: one runtime dependency (`pg`, ~110 KB), a single import swap for edge runtimes (`turbine-orm/serverless`), typed Postgres errors with a readonly `isRetryable` const for retry loops, and the read-only Studio web UI that ships in the CLI — the only one in the TS ORM ecosystem that physically cannot mutate your database. And deep type inference through `with` clauses works end-to-end: write `db.users.findMany({ with: { posts: { with: { comments: true } } } })` and `users[0].posts[0].comments[0].body` autocompletes — no manual assertion, no `*With*` helper interfaces.
Full analysis with p50/p95/p99 and methodology notes: `benchmarks/RESULTS.md`. Reproduce:
```sh
cd benchmarks && npm install && npx prisma generate && DATABASE_URL=... npx tsx bench.ts
```
Quick Start
```sh
# 1. Install
npm install turbine-orm

# 2. Initialize project
npx turbine init --url postgres://user:pass@localhost:5432/mydb

# 3. Generate typed client from your database
npx turbine generate
```
Works with both ESM and CommonJS:
```ts
// ESM
import { turbine } from './generated/turbine';

// CommonJS
const { turbine } = require('./generated/turbine');
```
This introspects your database and generates a fully-typed client at `./generated/turbine/`.
```ts
import { turbine } from './generated/turbine';

const db = turbine({ connectionString: process.env.DATABASE_URL });

// Type-safe queries with autocompletion
const users = await db.users.findMany({
  where: { role: 'admin' },
  orderBy: { createdAt: 'desc' },
  limit: 10,
});

await db.disconnect();
```
Usage Examples
findMany with nested relations
```ts
// Single query -- returns users with their posts and each post's comments
const users = await db.users.findMany({
  where: { orgId: 1 },
  with: {
    posts: {
      with: { comments: true },
      orderBy: { createdAt: 'desc' },
      limit: 5,
    },
  },
});
// users[0].posts[0].comments -- fully typed, single round-trip
```
findUnique
```ts
const user = await db.users.findUnique({
  where: { id: 42 },
  with: { posts: true },
});
// user.posts is Post[] -- resolved in the same query
```
create
```ts
const newUser = await db.users.create({
  data: {
    email: '[email protected]',
    name: 'Alice',
    orgId: 1,
  },
});
// Returns the full row with generated id, createdAt, etc.
```
createMany (batch insert with UNNEST)
```ts
const users = await db.users.createMany({
  data: [
    { email: '[email protected]', name: 'A', orgId: 1 },
    { email: '[email protected]', name: 'B', orgId: 1 },
    { email: '[email protected]', name: 'C', orgId: 1 },
  ],
});
// Single INSERT with UNNEST -- not 3 separate inserts
```
update / delete
```ts
const updated = await db.users.update({
  where: { id: 42 },
  data: { name: 'Alice Updated' },
});

const deleted = await db.users.delete({
  where: { id: 42 },
});
```
Atomic update operators
For race-free counter updates, pass an operator object instead of a literal. Turbine generates `col = col + $n` style SQL so concurrent updates are safe.
```ts
// Atomic increment — no read-modify-write race
await db.posts.update({
  where: { id: 1 },
  data: { viewCount: { increment: 1 } },
});

// Other supported operators
await db.posts.update({
  where: { id: 1 },
  data: {
    viewCount: { increment: 5 },
    likesCount: { decrement: 1 },
    score: { multiply: 2 },
    rank: { divide: 2 },
    title: { set: 'New title' }, // explicit set, equivalent to a literal
  },
});
```
Transactions
```ts
await db.$transaction(async (tx) => {
  const user = await tx.users.create({
    data: { email: '[email protected]', name: 'New', orgId: 1 },
  });
  await tx.posts.create({
    data: { userId: user.id, orgId: 1, title: 'First Post', content: '...' },
  });
});
// Fully typed -- tx.users and tx.posts have the same API as db.users and db.posts
```
Pipeline (batch queries in one round-trip)
```ts
const [user, postCount, recentPosts] = await db.pipeline(
  db.users.buildFindUnique({ where: { id: 1 } }),
  db.posts.buildCount({ where: { orgId: 1 } }),
  db.posts.buildFindMany({ where: { userId: 1 }, limit: 5 }),
);
// 3 queries, 1 database round-trip
```
Raw SQL (tagged template)
```ts
const stats = await db.raw<{ day: Date; count: number }>`
  SELECT DATE_TRUNC('day', created_at) AS day, COUNT(*)::int AS count
  FROM posts WHERE org_id = ${orgId}
  GROUP BY day ORDER BY day
`;
```
Case-insensitive search
```ts
const users = await db.users.findMany({
  where: {
    email: { contains: 'alice', mode: 'insensitive' },
  },
});
// Generates: WHERE email ILIKE '%alice%'
```
Streaming large result sets
```ts
// Stream rows using PostgreSQL cursors — constant memory, no matter how many rows
for await (const user of db.users.findManyStream({
  where: { orgId: 1 },
  batchSize: 500, // internal FETCH batch size (default: 1000)
  orderBy: { id: 'asc' },
  with: { posts: true }, // nested relations work too
})) {
  process.stdout.write(`${user.email}\n`);
}
```
Uses `DECLARE CURSOR` under the hood — rows are fetched in batches on a dedicated connection, parsed individually, and yielded via an `AsyncGenerator`. Safe to break early; the cursor and connection are cleaned up automatically.
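The batch-fetch-and-yield shape behind this can be sketched with a plain async generator. This is a standalone stand-in, not Turbine's implementation: `fetchBatch` and `closeCursor` are hypothetical callbacks that would wrap `FETCH <n> FROM cursor` and `CLOSE cursor` in real code.

```typescript
// Generic cursor loop as an AsyncGenerator: fetch a batch, yield rows one
// by one, and run cleanup in `finally` so an early `break` (which triggers
// generator.return()) still closes the cursor and releases the connection.
async function* streamRows<T>(
  fetchBatch: (size: number) => Promise<T[]>, // stand-in for FETCH <n> FROM cursor
  closeCursor: () => Promise<void>,           // stand-in for CLOSE cursor + release
  batchSize = 1000,
): AsyncGenerator<T> {
  try {
    while (true) {
      const batch = await fetchBatch(batchSize);
      for (const row of batch) yield row;
      if (batch.length < batchSize) return; // short batch: cursor exhausted
    }
  } finally {
    await closeCursor(); // runs on normal exit, early break, or throw
  }
}

// Usage with an in-memory stand-in for the database cursor:
async function demo(): Promise<string[]> {
  const data = ['a', 'b', 'c', 'd', 'e'];
  let offset = 0;
  let closed = false;
  const seen: string[] = [];
  for await (const row of streamRows(
    async (n) => data.slice(offset, (offset += n)),
    async () => { closed = true; },
    2,
  )) {
    seen.push(row);
    if (seen.length === 3) break; // early break still triggers cleanup
  }
  return closed ? seen : [];
}
```

The `finally` block is the important part: it is what makes "safe to break early" hold for any consumer of the generator.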
Query timeout
```ts
const users = await db.users.findMany({
  where: { orgId: 1 },
  timeout: 5000, // 5 second timeout
});
```
Default limit
```ts
// Set a default limit for all queries on a model
const db = turbine({
  connectionString: process.env.DATABASE_URL,
  defaultLimit: 100,
});
```
Middleware
```ts
// Query timing
db.$use(async (params, next) => {
  const start = Date.now();
  const result = await next(params);
  console.log(`${params.model}.${params.action} took ${Date.now() - start}ms`);
  return result;
});

// Soft-delete filter
db.$use(async (params, next) => {
  if (params.action === 'findMany' || params.action === 'findUnique') {
    params.args.where = { ...params.args.where, deletedAt: null };
  }
  return next(params);
});
```
Error handling
Turbine throws typed errors you can catch programmatically:
```ts
import { NotFoundError, ValidationError, TimeoutError } from 'turbine-orm';

try {
  const user = await db.users.findUniqueOrThrow({ where: { id: 999 } });
} catch (err) {
  if (err instanceof NotFoundError) {
    // err.code === 'TURBINE_E001'
    console.log('User not found');
  } else if (err instanceof TimeoutError) {
    // err.code === 'TURBINE_E002'
    console.log('Query timed out');
  } else if (err instanceof ValidationError) {
    // err.code === 'TURBINE_E003'
    console.log('Invalid query:', err.message);
  }
}
```
Error codes: TURBINE_E001 (NotFound), TURBINE_E002 (Timeout), TURBINE_E003 (Validation), TURBINE_E004 (Connection), TURBINE_E005 (Relation), TURBINE_E006 (Migration), TURBINE_E007 (CircularRelation), TURBINE_E008 (UniqueConstraint), TURBINE_E009 (ForeignKey), TURBINE_E010 (NotNullViolation), TURBINE_E011 (CheckConstraint), TURBINE_E012 (Deadlock), TURBINE_E013 (SerializationFailure), TURBINE_E014 (Pipeline).
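Errors such as deadlocks and serialization failures carry the `isRetryable` discriminant mentioned above, which makes generic retry loops easy to write. A minimal sketch of that pattern follows; `withRetry` and `FakeDeadlockError` are hypothetical helpers invented for this example, and only the `isRetryable` field mirrors the library's errors.

```typescript
// Hypothetical retryable error standing in for a deadlock/serialization error.
class FakeDeadlockError extends Error {
  readonly isRetryable = true as const;
}

// Retry loop driven purely by the isRetryable discriminant:
// non-retryable errors are rethrown immediately.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (!(err instanceof Error && (err as any).isRetryable === true)) throw err;
    }
  }
  throw lastErr; // retryable but out of attempts
}

// Fails twice with a retryable error, then succeeds on the third attempt.
let calls = 0;
const result = await withRetry(async () => {
  calls++;
  if (calls < 3) throw new FakeDeadlockError('deadlock detected');
  return 'committed';
});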
WHERE Operator Reference
Every operator supported by the `where` clause. Operators compose freely with `AND`, `OR`, `NOT`, and the relation filters `some` / `every` / `none`.
Equality
| Operator | Description | Example |
|---|---|---|
| literal | Implicit equality | where: { email: '[email protected]' } |
| equals | Explicit equality | where: { email: { equals: '[email protected]' } } |
| not | Inequality (or not: null for IS NOT NULL) | where: { role: { not: 'admin' } } |
Sets
| Operator | Description | Example |
|---|---|---|
| in | Match any value in the array | where: { id: { in: [1, 2, 3] } } |
| notIn | Match none of the values in the array | where: { role: { notIn: ['banned', 'spam'] } } |
Comparison
| Operator | Description | Example |
|---|---|---|
| gt | Greater than | where: { score: { gt: 100 } } |
| gte | Greater than or equal | where: { score: { gte: 100 } } |
| lt | Less than | where: { score: { lt: 100 } } |
| lte | Less than or equal | where: { score: { lte: 100 } } |
String
| Operator | Description | Example |
|---|---|---|
| contains | Substring match (LIKE %v%) | where: { title: { contains: 'sql' } } |
| startsWith | Prefix match (LIKE v%) | where: { email: { startsWith: 'admin@' } } |
| endsWith | Suffix match (LIKE %v) | where: { email: { endsWith: '@acme.com' } } |
| mode: 'insensitive' | Switch any string operator to ILIKE | where: { title: { contains: 'SQL', mode: 'insensitive' } } |
LIKE wildcards in user input are escaped automatically — `%`, `_`, and `\` are treated as literals.
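Escaping of this kind fits in a few lines; here is an illustrative sketch (not Turbine's internal code). The backslash must be escaped first so it doesn't double-escape the other replacements.

```typescript
// Escape LIKE/ILIKE wildcards so user input matches literally:
// \ first, then % and _.
function escapeLike(input: string): string {
  return input
    .replace(/\\/g, '\\\\')
    .replace(/%/g, '\\%')
    .replace(/_/g, '\\_');
}

// '50%_off' becomes '50\%\_off', so `contains: '50%_off'` matches the
// literal string instead of treating % and _ as wildcards.
const pattern = `%${escapeLike('50%_off')}%`;
```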
Relation filters
Filter parent rows by predicates against their related child rows. Available on hasMany and hasOne relations.
| Operator | Description | Example |
|---|---|---|
| some | At least one related row matches | where: { posts: { some: { published: true } } } |
| every | Every related row matches | where: { posts: { every: { published: true } } } |
| none | No related row matches | where: { posts: { none: { published: false } } } |
Array columns
Operators for Postgres array columns (text[], int[], etc.).
| Operator | Description | Example |
|---|---|---|
| has | Array contains the given element | where: { tags: { has: 'sql' } } |
| hasEvery | Array contains every element in the list | where: { tags: { hasEvery: ['sql', 'postgres'] } } |
| hasSome | Array contains at least one element from the list | where: { tags: { hasSome: ['sql', 'mysql'] } } |
Combinators
| Operator | Description | Example |
|---|---|---|
| AND | All sub-clauses must match | where: { AND: [{ orgId: 1 }, { role: 'admin' }] } |
| OR | Any sub-clause matches | where: { OR: [{ role: 'admin' }, { role: 'owner' }] } |
| NOT | Negate a sub-clause | where: { NOT: { role: 'banned' } } |
CLI
```sh
npx turbine <command> [options]

Commands:
  init                          Initialize a Turbine project (creates config, dirs, templates)
  generate | pull               Introspect database and generate TypeScript types + client
  push                          Apply schema-builder definitions to database
  migrate create <name>         Create a new SQL migration file
  migrate create <name> --auto  Auto-generate from schema diff
  migrate up                    Apply pending migrations
  migrate down                  Rollback last migration
  migrate status                Show applied/pending migrations
  seed                          Run seed file
  status                        Show database schema summary
  studio                        Launch local read-only Studio web UI

Options:
  --url, -u <url>      Postgres connection string
  --out, -o <dir>      Output directory (default: ./generated/turbine)
  --schema, -s <name>  Postgres schema (default: public)
  --auto               Auto-generate migration from schema diff
  --dry-run            Show SQL without executing
  --verbose, -v        Detailed output
```
Schema-first workflow
Define your schema in TypeScript and push it to the database:
```ts
// turbine/schema.ts
import { defineSchema } from 'turbine-orm';

export default defineSchema({
  users: {
    id: { type: 'serial', primaryKey: true },
    email: { type: 'text', unique: true, notNull: true },
    name: { type: 'text', notNull: true },
    orgId: { type: 'bigint', notNull: true, references: 'organizations.id' },
    createdAt: { type: 'timestamp', default: 'now()' },
  },
});
```
```sh
npx turbine push --dry-run   # Preview SQL
npx turbine push             # Apply to database
npx turbine generate         # Regenerate typed client
```
Migration workflow
```sh
# Create a blank migration (write SQL manually)
npx turbine migrate create add_users_table

# Auto-generate migration from schema diff (compares defineSchema() vs live DB)
npx turbine migrate create add_email_index --auto
# -> Generates UP (ALTER/CREATE) and DOWN (reverse) SQL automatically

# Apply all pending migrations
npx turbine migrate up

# Rollback the last applied migration
npx turbine migrate down

# Check migration status (applied vs pending)
npx turbine migrate status
```
Studio
The only Postgres ORM with a Studio your DBA will approve. turbine studio launches a local, read-only web UI for exploring your database — no mutations, no writes, no way around the transaction guard.
```sh
DATABASE_URL=postgres://user:pass@localhost:5432/mydb npx turbine studio

# With flags
npx turbine studio --port 5173 --host 127.0.0.1 --no-open
```
Features

- Data / Schema / SQL / Builder tabs. Browse rows, inspect tables and relations, run ad-hoc `SELECT`s, or compose queries visually with a live TypeScript preview.
- Saved queries. Named SQL snippets persisted to `.turbine/studio-queries.json` — share them across runs without committing them.
- Cmd+K command palette. Jump to any table, tab, or saved query in one keystroke.
- Full-text search across rows. The Data tab supports substring search across every text column of the current table.
- Visual query composer. The Builder tab lets you click together `where`/`orderBy`/`with`/`limit` clauses and renders the matching `db.table.findMany(...)` TypeScript in real time — copy it into your codebase.
Security posture (read-only by design)
- Loopback by default (`127.0.0.1`) with a loud warning if you bind to a non-loopback address.
- Per-process auth token — 24 random bytes of hex, stored in a `SameSite=Strict` `HttpOnly` cookie.
- Every query runs inside `BEGIN READ ONLY` + `SET LOCAL statement_timeout = '30s'`. Writes are physically impossible at the transaction level.
- SELECT/WITH-only SQL parser strips comments and rejects non-trailing semicolons, blocking statement-stacking attacks.
- Security headers on every response — `X-Content-Type-Options`, `X-Frame-Options: DENY`, `Referrer-Policy: no-referrer`.
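The statement-guard idea can be sketched in a few lines. This is an illustrative approximation, not Turbine's actual parser; a production version must also handle string literals and dollar-quoted blocks so a semicolon inside a string isn't misread as stacking.

```typescript
// Illustrative SELECT/WITH-only guard: strip comments, allow one trailing
// semicolon, reject any other semicolon (statement stacking), and require
// the statement to start with SELECT or WITH.
function isReadOnlySql(sql: string): boolean {
  const stripped = sql
    .replace(/--[^\n]*/g, ' ')         // line comments
    .replace(/\/\*[\s\S]*?\*\//g, ' ') // block comments
    .trim()
    .replace(/;\s*$/, '');             // a single trailing semicolon is fine
  if (stripped.includes(';')) return false; // non-trailing ';' means stacking
  return /^(select|with)\b/i.test(stripped);
}

console.log(isReadOnlySql('SELECT * FROM users;'));                 // true
console.log(isReadOnlySql('WITH t AS (SELECT 1) SELECT * FROM t')); // true
console.log(isReadOnlySql('SELECT 1; DROP TABLE users'));           // false
console.log(isReadOnlySql('DELETE FROM users'));                    // false
```

Note the guard is defense in depth: even if a statement slipped through, `BEGIN READ ONLY` still blocks the write at the transaction level.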
Serverless / Edge
Turbine's core is driver-agnostic: pass any pg-compatible pool to `TurbineConfig.pool` (or use the `turbineHttp()` factory) and Turbine runs on Vercel Edge, Cloudflare Workers, Deno Deploy, Netlify Edge, or any other environment where a direct TCP connection is unavailable. No new dependencies — install whichever driver you already use.
Neon Serverless (HTTP / WebSocket)
```ts
// app/api/users/route.ts
import { Pool } from '@neondatabase/serverless';
import { turbineHttp } from 'turbine-orm/serverless';
import { schema } from '@/generated/turbine/metadata';

export const runtime = 'edge';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = turbineHttp(pool, schema);

export async function GET() {
  const users = await db.table('users').findMany({
    with: { posts: { with: { comments: true } } },
    limit: 10,
  });
  return Response.json(users);
}
```
Vercel Postgres
```ts
import { createPool } from '@vercel/postgres';
import { turbineHttp } from 'turbine-orm/serverless';
import { schema } from './generated/turbine/metadata.js';

const pool = createPool({ connectionString: process.env.POSTGRES_URL });
const db = turbineHttp(pool, schema);
```
Supabase (direct Postgres — no HTTP proxy needed)
```ts
import { TurbineClient } from 'turbine-orm';
import { schema } from './generated/turbine/metadata.js';

const db = new TurbineClient({
  connectionString: process.env.SUPABASE_DB_URL,
  ssl: { rejectUnauthorized: false },
}, schema);
```
Cloudflare Workers
```ts
import { Pool } from '@neondatabase/serverless';
import { turbineHttp } from 'turbine-orm/serverless';
import { schema } from './generated/turbine/metadata';

export default {
  async fetch(req: Request, env: Env) {
    const pool = new Pool({ connectionString: env.DATABASE_URL });
    const db = turbineHttp(pool, schema);
    const users = await db.table('users').findMany({ limit: 10 });
    return Response.json(users);
  },
};
```
Limitations on HTTP drivers
- Streaming cursors (`findManyStream`) require `DECLARE CURSOR`, which most HTTP drivers don't support. Use `findMany` with `limit` + pagination instead.
- LISTEN/NOTIFY is not available over HTTP.
- Transactions work but hold an HTTP connection for their duration — keep them short.

When Turbine receives an external pool, `db.disconnect()` is a no-op: the caller owns the pool's lifecycle.
Configuration
Create `turbine.config.ts` in your project root (or run `npx turbine init`):
```ts
import type { TurbineCliConfig } from 'turbine-orm/cli';

const config: TurbineCliConfig = {
  url: process.env.DATABASE_URL,
  out: './generated/turbine',
  schema: 'public',
  migrationsDir: './turbine/migrations',
  seedFile: './turbine/seed.ts',
  schemaFile: './turbine/schema.ts',
};

export default config;
```
Priority order: CLI flags > environment variables (`DATABASE_URL`) > config file > defaults.
How It Works
Turbine resolves the entire object graph in a single database round-trip, regardless of nesting depth. The runtime nests relations for you via `json_agg` — one round-trip, no N+1, no client-side stitching. And the `with` clause is fully type-inferred: the generator emits branded `*Relations` interfaces with `RelationDescriptor` phantom fields, and a recursive `WithResult<T, R, W>` conditional type walks an arbitrarily-deep `with` literal to produce the exact nested return shape. Write `db.users.findMany({ with: { posts: { with: { comments: { with: { author: true } } } } } })` and `users[0].posts[0].comments[0].author.name` autocompletes — no manual assertion, no `*With*` helper annotation.
Prisma 7+ and Drizzle v2 also do single-query nested loads. Turbine's advantage isn't query latency (see Benchmarks — all three are within noise over a real pooled database); it's architectural simplicity plus the read-only Studio. One runtime dependency (pg), no DSL compiler, no driver adapter shim for edge, and the only TS ORM Studio your DBA will approve.
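The recursive-conditional-type idea can be shown in miniature. This is a simplified sketch under assumed names (`RelMap`, the row types, and the descriptor shape are all invented for illustration), not Turbine's generated code:

```typescript
// Sample row types and a per-model relation map, standing in for what the
// generator would emit from the database schema.
type CommentRow = { id: number; body: string };
type PostRow = { id: number; title: string };
type UserRow = { id: number; name: string };

type RelMap = {
  User: { posts: { row: PostRow; model: 'Post' } };
  Post: { comments: { row: CommentRow; model: 'Comment' } };
  Comment: {};
};

// Walk the `with` literal recursively: `true` loads bare child rows,
// `{ with: ... }` recurses into the child model's own relations.
type WithResult<M extends keyof RelMap, Row, W> = Row & {
  [K in keyof W & keyof RelMap[M]]: RelMap[M][K] extends {
    row: infer R;
    model: infer CM extends keyof RelMap;
  }
    ? W[K] extends { with: infer CW }
      ? WithResult<CM, R, CW>[]
      : R[]
    : never;
};

// Deep = UserRow & { posts: (PostRow & { comments: CommentRow[] })[] }
type Deep = WithResult<'User', UserRow, { posts: { with: { comments: true } } }>;

const u: Deep = {
  id: 1,
  name: 'Alice',
  posts: [{ id: 10, title: 'Hello', comments: [{ id: 100, body: 'nice' }] }],
};

// u.posts[0].comments[0].body is inferred as string end-to-end.
console.log(u.posts[0].comments[0].body);
```

The point is that the nested return shape is computed entirely at the type level from the `with` literal, so no generated `*With*` interfaces per combination are needed.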
Type Mapping
Turbine maps Postgres types to TypeScript:
| Postgres | TypeScript | Notes |
|---|---|---|
| int2, int4, float4, float8 | number | Standard numeric types |
| int8 / bigint | number | Values > Number.MAX_SAFE_INTEGER (2^53 - 1) are returned as string at runtime to avoid precision loss. This affects < 0.01% of use cases (auto-increment IDs, counts, etc. are all safe). |
| numeric, money | string | Arbitrary precision — kept as string to avoid JS float issues |
| text, varchar, uuid, citext | string | |
| timestamptz, timestamp, date | Date | |
| boolean | boolean | |
| json, jsonb | unknown | |
| bytea | Buffer | |
| Array types | T[] | e.g. _text → string[] |
Comparison
| | Turbine | Prisma | Drizzle | Kysely |
|---|---|---|---|---|
| Nested relations | 1 query, deep type inference | 1 query (since v5.8), shallow inference | 1 query, requires relations() re-declaration | Manual (jsonArrayFrom) |
| API style | findMany, with | findMany, include | SQL-like + relational | SQL builder |
| Schema | TypeScript | Custom DSL (.prisma) | TypeScript | Manual interfaces |
| Runtime deps | 1 (pg) | @prisma/client + adapter | 0 | 0 |
| Multi-DB | PostgreSQL only | PG, MySQL, SQLite, MSSQL | PG, MySQL, SQLite | PG, MySQL, SQLite |
| Code generation | turbine generate | prisma generate | Not needed | Not needed |
All three ORMs now do single-query nested loads. Over a real pooled database (Neon, US-East) most single-query scenarios land within noise, but Turbine's SQL template caching and prepared statements give it a consistent edge where per-query work dominates: L2 nested reads (1.59× faster than Drizzle) and streaming (at parity with Prisma, 1.49× faster than Drizzle). Turbine's differentiators span both architecture and performance: one runtime dependency, one import swap for edge, typed errors with `isRetryable`, deep `with` type inference, and real Postgres pipeline protocol support. See Benchmarks and `benchmarks/RESULTS.md` for the full breakdown.
Limitations
Turbine is focused and opinionated. Here's what it doesn't do:
- PostgreSQL only. No MySQL, SQLite, or MSSQL. By design — going deep on one database enables the performance advantage and the edge-runtime story.
- No full-text search operators. TSVECTOR/TSQUERY are not exposed in the query builder. Use `db.raw` for full-text queries.
- Large nested result sets. Nested results are materialized server-side in PostgreSQL memory. For relations with 10K+ rows, always use `limit` in your `with` clause — or stream the parents with `findManyStream` and resolve children per-row.
Examples
Feature demos
- Thread Machine — HN clone rendered from a single `findMany`. 4-level object graph (stories → comments → replies → author); every property autocompletes through the chain.
- Streaming CSV — Export 100K orders + line items to CSV with constant memory. PostgreSQL cursors, live heap meter, nested `with` inside `findManyStream`.
- Clickstorm — Side-by-side atomic-increment vs read-modify-write load test. 10K concurrent clicks. The atomic path wins every time.
Runtime targets
- Next.js — Server-rendered app with nested relations, streaming, and live code demos.
- Neon Edge — Vercel Edge route handler talking to Neon over HTTP via `@neondatabase/serverless`.
- Vercel Postgres — Next.js app-router route handler on `@vercel/postgres`.
- Cloudflare Worker — Worker `fetch` handler with `pg` over Cloudflare Hyperdrive.
- Supabase — Standalone script over the standard `pg` driver against Supabase.
Guides
- Migrating from Prisma — API mapping table, side-by-side `findMany`, and notes on the differences.
Requirements
- Node.js >= 18.0.0
- PostgreSQL >= 14
- Works with both ESM (`import`) and CommonJS (`require`)
License
MIT
