prisma-sql
Convert Prisma queries to optimized SQL with type safety. 2-7x faster than Prisma Client.
Prerender Prisma queries to SQL and execute them directly via postgres.js or better-sqlite3.
Same Prisma API. Same Prisma types. Lower read overhead.
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
const users = await prisma.user.findMany({
where: { status: 'ACTIVE' },
include: { posts: true },
})
const dashboard = await prisma.$batch((batch) => ({
activeUsers: batch.user.count({ where: { status: 'ACTIVE' } }),
recentPosts: batch.post.findMany({
take: 10,
orderBy: { createdAt: 'desc' },
}),
taskStats: batch.task.aggregate({
_count: true,
_avg: { estimatedHours: true },
}),
}))
What it does
prisma-sql accelerates Prisma read queries by skipping Prisma's read execution path and running generated SQL directly through a database-native client.
It keeps the Prisma client for:
- schema and migrations
- generated types
- writes
- fallback for unsupported cases
It accelerates:
- findMany
- findFirst
- findUnique
- count
- aggregate
- groupBy
- $batch (PostgreSQL)
Why use it
Prisma's DX is excellent, but read queries still pay runtime overhead for query-engine planning, validation, transformation, and result mapping.
prisma-sql moves that work out of the hot path:
- builds SQL from Prisma-style query args
- can prebake hot queries at generate time
- executes via postgres.js or better-sqlite3
- maps results back to Prisma-like shapes
The goal is simple:
- keep Prisma's developer experience
- cut read-path overhead
- stay compatible with existing Prisma code
Installation
PostgreSQL
npm install prisma-sql postgres
SQLite
npm install prisma-sql better-sqlite3
Quick start
1) Add the generator
generator client {
provider = "prisma-client"
}
generator sql {
provider = "prisma-sql-generator"
}
model User {
id Int @id @default(autoincrement())
email String @unique
status String
posts Post[]
}
model Post {
id Int @id @default(autoincrement())
title String
authorId Int
author User @relation(fields: [authorId], references: [id])
}
2) Generate
npx prisma generate
This generates ./generated/sql/index.ts.
3) Extend Prisma
PostgreSQL
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
SQLite
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import Database from 'better-sqlite3'
const db = new Database('./data.db')
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ sqlite: db }),
) as SpeedClient<typeof basePrisma>
With existing Prisma extensions
Apply speedExtension last so it sees the final query surface.
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
const extendedPrisma = basePrisma
.$extends(myCustomExtension)
.$extends(anotherExtension)
export const prisma = extendedPrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof extendedPrisma>
Supported queries
Accelerated
- findMany
- findFirst
- findUnique
- count
- aggregate
- groupBy
- $batch (PostgreSQL)
Not accelerated
These continue to run through Prisma:
- create
- update
- delete
- upsert
- createMany
- updateMany
- deleteMany
Fallback behavior
If a query shape is unsupported or cannot be accelerated safely, the extension falls back to Prisma instead of returning incorrect results.
Enable debug: true to see generated SQL and fallback behavior.
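For example, a minimal sketch (the client name and exact log output here are assumptions): with debug enabled, supported reads are logged with their generated SQL, while writes and unsupported shapes continue through Prisma Client.

const debugPrisma = basePrisma.$extends(
  speedExtension({ postgres: sql, debug: true }),
) as SpeedClient<typeof basePrisma>

// Supported read shape: executed via generated SQL and logged when debug is on
await debugPrisma.user.findMany({ where: { status: 'ACTIVE' } })

// Writes and unsupported shapes continue through Prisma Client
await debugPrisma.user.create({ data: { email: 'a@example.com', status: 'ACTIVE' } })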
Features
1) Runtime SQL generation
Any supported read query can be converted from Prisma args into SQL at runtime.
const users = await prisma.user.findMany({
where: {
status: 'ACTIVE',
email: { contains: '@example.com' },
},
orderBy: { createdAt: 'desc' },
take: 20,
})
2) Prebaked hot queries with @optimize
For the hottest query shapes, you can prebake SQL at generate time.
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "ACTIVE" },
/// "orderBy": { "createdAt": "desc" },
/// "skip": "$skip",
/// "take": "$take"
/// }
/// }
model User {
id Int @id @default(autoincrement())
email String @unique
status String
createdAt DateTime @default(now())
}
At runtime:
- matching query shape → prebaked SQL
- non-matching query shape → runtime SQL generation
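As a rough sketch against the @optimize annotation above (assuming $skip and $take match any runtime values):

// Same where/orderBy shape with dynamic skip/take: served from the prebaked SQL
await prisma.user.findMany({
  where: { status: 'ACTIVE' },
  orderBy: { createdAt: 'desc' },
  skip: 0,
  take: 20,
})

// Different shape: handled by runtime SQL generation instead
await prisma.user.findMany({ where: { email: { endsWith: '.org' } } })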
3) PostgreSQL batch queries
$batch combines multiple independent read queries into one round trip.
const results = await prisma.$batch((batch) => ({
users: batch.user.findMany({ where: { status: 'ACTIVE' } }),
posts: batch.post.count(),
stats: batch.task.aggregate({ _count: true }),
}))
4) Include and relation reduction
For supported include trees, prisma-sql can execute flat SQL and reduce rows back into Prisma-like nested results.
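For instance, a sketch (assuming a User -> posts relation with a createdAt field): the nested result shape is the same one Prisma Client returns, even when the query executes as a single flat statement.

const usersWithPosts = await prisma.user.findMany({
  include: {
    posts: { orderBy: { createdAt: 'desc' }, take: 3 },
  },
})
// usersWithPosts[0].posts is a nested array of Post records, not flat join rows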
5) Aggregate result type handling
Aggregates are mapped back to Prisma-style value types instead of flattening everything into strings or plain numbers.
That includes preserving types like:
- Decimal
- BigInt
- DateTime
- _count
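A small sketch using the task fields from the aggregate examples later in this README:

const taskStats = await prisma.task.aggregate({
  _count: true,
  _avg: { estimatedHours: true }, // numeric/Decimal-style value, as Prisma Client would return
  _max: { completedAt: true },    // a DateTime value, not a string
})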
Query examples
Filters
{ age: { gt: 18, lte: 65 } }
{ status: { in: ['ACTIVE', 'PENDING'] } }
{ status: { notIn: ['DELETED'] } }
{ email: { contains: '@example.com' } }
{ email: { startsWith: 'user' } }
{ email: { endsWith: '.com' } }
{ email: { contains: 'EXAMPLE', mode: 'insensitive' } }
{ AND: [{ status: 'ACTIVE' }, { verified: true }] }
{ OR: [{ role: 'ADMIN' }, { role: 'MODERATOR' }] }
{ NOT: { status: 'DELETED' } }
{ deletedAt: null }
{ deletedAt: { not: null } }
Relations
{
include: {
posts: true,
profile: true,
}
}
{
include: {
posts: {
where: { published: true },
orderBy: { createdAt: 'desc' },
take: 5,
include: {
comments: true,
},
},
},
}
{
where: {
posts: { some: { published: true } },
},
}
{
where: {
posts: { every: { published: true } },
},
}
{
where: {
posts: { none: { published: false } },
},
}
Configuration
prisma-sql exposes every internal limit and strategy constant so you can tune the query builder for your schema without forking the library.
Extension Config
Pass limits and/or strategy directly to speedExtension:
import { speedExtension } from './generated/sql'
const db = prisma.$extends(
speedExtension({
postgres: sql,
limits: {
MAX_INCLUDES_PER_LEVEL: 20,
MAX_INCLUDE_DEPTH: 8,
},
strategy: {
defaultFanOut: 5,
roundtripRowEquivalent: 40,
},
}),
)
Both fields accept partial objects; only the keys you provide are overwritten, and everything else stays at its default.
Programmatic API
If you use prisma-sql as a library (without the generated extension), the same knobs are available as standalone functions:
import {
setLimits,
getLimits,
resetLimits,
setStrategyConfig,
getStrategyConfig,
rebuildQueryCache,
} from 'prisma-sql'
setLimits({ MAX_INCLUDES_PER_LEVEL: 25 })
rebuildQueryCache() // required after changing QUERY_CACHE_SIZE
setStrategyConfig({ correlatedBoundedFactor: 0.3 })
console.log(getLimits())
console.log(getStrategyConfig())
resetLimits() // restore all defaults
Note: rebuildQueryCache() must be called after changing QUERY_CACHE_SIZE. All other limit changes take effect immediately on the next query.
Query Builder Limits
These control safety boundaries and complexity caps for generated SQL.
| Key | Default | Description |
| ------------------------------ | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------- |
| MAX_INCLUDE_DEPTH | 5 | Max depth of nested include/select relations. Increase for deeply nested schemas. |
| MAX_INCLUDES_PER_LEVEL | 10 | Max number of relations included at a single nesting level. Increase for wide schemas with many relations. |
| MAX_TOTAL_SUBQUERIES | 100 | Total correlated subqueries allowed across the entire query tree. Guards against exponential blowup from deep × wide includes. |
| MAX_SELF_REFERENTIAL_DEPTH | 2 | Max times a model can appear in its own include chain. Controls tree/graph self-referencing models like Category → children → children. |
| MAX_QUERY_DEPTH | 50 | Max nesting depth for WHERE clauses (AND/OR/NOT). |
| MAX_NOT_DEPTH | 50 | Max nesting depth for NOT operator composition in both WHERE and HAVING. |
| MAX_HAVING_DEPTH | 50 | Max nesting depth for HAVING clause. |
| MAX_NESTED_JOIN_DEPTH | 10 | Max depth for flat-join and lateral-join relation traversal. |
| MAX_RELATION_ORDER_BY_DEPTH | 10 | Max depth for relation-based orderBy resolution (e.g. orderBy: { author: { name: 'asc' } }). |
| JOIN_INCLUDE_MAX_DEPTH | 0 | Max depth at which join-based include strategy is allowed. 0 means top-level only. |
| MAX_WHERE_IN_RECURSIVE_DEPTH | 10 | Max recursion depth for where-in segment resolution. |
| MAX_ARRAY_SIZE | 10000 | Max elements in array params (in, hasSome, etc). |
| MAX_STRING_LENGTH | 10000 | Max string length for LIKE and JSON string operators. |
| MAX_LIMIT_OFFSET | 2147483647 | PostgreSQL engine limit for LIMIT/OFFSET. |
| MIN_NEGATIVE_TAKE | -10000 | Minimum allowed negative take value. |
| MAX_ALIAS_COUNTER_THRESHOLD | 1000 | Safety threshold before alias counter overflow. |
| QUERY_CACHE_SIZE | 1000 | Max entries in the SQL query cache. Call rebuildQueryCache() after changing. |
| STMT_CACHE_SIZE | 1000 | Max entries in the SQLite prepared statement cache per client. |
Example — wide schema with 15+ relations per model:
speedExtension({
postgres: sql,
limits: {
MAX_INCLUDES_PER_LEVEL: 20,
MAX_TOTAL_SUBQUERIES: 200,
},
})
Example — deeply nested tree structure (5+ levels):
speedExtension({
postgres: sql,
limits: {
MAX_INCLUDE_DEPTH: 8,
MAX_SELF_REFERENTIAL_DEPTH: 5,
},
})
Strategy Cost-Model Parameters
These control how prisma-sql chooses between query strategies (flat-join, where-in, correlated subqueries). Tuning these is optional — the defaults are calibrated from benchmarks — but can help if your database has unusual latency characteristics or data distributions.
| Key | Default | Description |
| ------------------------------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| roundtripRowEquivalent | 73 | Cost (in row-equivalents) of one additional database roundtrip. Lower values favor multi-roundtrip strategies like where-in. Higher values favor single-query strategies. |
| jsonRowFactor | 1.5 | Multiplier for JSON aggregation overhead per row. Affects flat-join cost estimation. |
| correlatedBoundedFactor | 0.5 | Cost factor for correlated subqueries when the child has a LIMIT. Lower = cheaper = more likely to pick correlated. |
| correlatedUnboundedFactor | 3.0 | Cost factor for correlated subqueries when the child is unbounded. Higher = more expensive = favors where-in. |
| correlatedWherePenalty | 3.0 | Extra cost multiplier applied when a child relation has a WHERE clause inside correlated subqueries. |
| defaultFanOut | 10 | Assumed average children per parent when no relation statistics are available. |
| defaultParentCount | 50 | Assumed parent row count when take is not specified on the root query. |
| singleParentMaxFlatJoinDepth | 2 | Max include depth that allows the flat-join strategy for findFirst/findUnique. |
| minStatsCoverage | 0.1 | Minimum stats coverage (0–1) to trust collected relation cardinality data. Below this, defaultFanOut is used. |
| dynamicTakeEstimate | 10 | Assumed take value when the actual value is a runtime dynamic parameter. |
Example — low-latency local database (roundtrips are cheap):
speedExtension({
postgres: sql,
strategy: {
roundtripRowEquivalent: 20,
},
})
Example — remote database with high latency (minimize roundtrips):
speedExtension({
postgres: sql,
strategy: {
roundtripRowEquivalent: 200,
},
})
Example — schema with sparse relations (most parents have 0–2 children):
speedExtension({
postgres: sql,
strategy: {
defaultFanOut: 2,
},
})
Pagination and ordering
{
take: 10,
skip: 20,
orderBy: { createdAt: 'desc' },
}
{
cursor: { id: 100 },
skip: 1,
take: 10,
orderBy: { id: 'asc' },
}
{
orderBy: [
{ status: 'asc' },
{ priority: 'desc' },
{ createdAt: 'desc' },
],
}
Composite cursor pagination
For composite cursors, use an orderBy that starts with the cursor fields in the same order.
{
cursor: { tenantId: 10, id: 500 },
skip: 1,
take: 20,
orderBy: [
{ tenantId: 'asc' },
{ id: 'asc' },
],
}
This matches keyset pagination expectations and avoids unstable page boundaries.
Aggregates
await prisma.user.count({
where: { status: 'ACTIVE' },
})
await prisma.task.aggregate({
where: { status: 'DONE' },
_count: { _all: true },
_sum: { estimatedHours: true },
_avg: { estimatedHours: true },
_min: { startedAt: true },
_max: { completedAt: true },
})
await prisma.task.groupBy({
by: ['status', 'priority'],
_count: { _all: true },
_avg: { estimatedHours: true },
having: {
status: {
_count: { gte: 5 },
},
},
})
Cardinality planner
The cardinality planner is the piece that decides how relation-heavy reads should be executed for best performance.
In practice, it helps choose between strategies such as:
- direct joins
- lateral/subquery-style fetches
- flat row expansion + reducer
- segmented follow-up loading for high fan-out relations
This matters because the fastest strategy depends on cardinality, not just query shape.
A profile include behaves very differently from a posts.comments.likes include.
Why it matters
A naive join strategy can explode row counts:
- User -> Profile is usually low fan-out
- User -> Posts -> Comments can multiply rows aggressively
- Organization -> Users -> Sessions -> Events can become huge very quickly
The planner tries to keep read amplification under control.
Best setup for the planner
To get the best results, prepare your schema and indexes so the planner can make good choices.
1) Model real cardinality accurately
Use correct relation fields and uniqueness constraints.
Good examples:
model User {
id Int @id @default(autoincrement())
profile Profile?
posts Post[]
}
model Profile {
id Int @id @default(autoincrement())
userId Int @unique
user User @relation(fields: [userId], references: [id])
}
model Post {
id Int @id @default(autoincrement())
authorId Int
author User @relation(fields: [authorId], references: [id])
@@index([authorId])
}
Why this helps:
- @unique on one-to-one foreign keys tells the planner the relation is bounded
- indexes on one-to-many foreign keys make follow-up or segmented loading cheap
2) Index every foreign key used in includes and relation filters
At minimum, index:
- all @relation(fields: [...]) foreign keys on the child side
- fields used in nested where
- fields used in nested orderBy
- fields used in cursor pagination
Example:
model Comment {
id Int @id @default(autoincrement())
postId Int
createdAt DateTime @default(now())
published Boolean @default(false)
post Post @relation(fields: [postId], references: [id])
@@index([postId])
@@index([postId, createdAt])
@@index([postId, published])
}
3) Prefer deterministic nested ordering
When including collections, always provide a stable order when practical.
const users = await prisma.user.findMany({
include: {
posts: {
orderBy: { createdAt: 'desc' },
take: 5,
},
},
})
That helps both the planner and the reducer keep result shapes predictable.
What to configure
Use the cardinality planner wherever your generator/runtime exposes it.
Because config names can differ between versions, the safe rule is:
- enable the planner in generator/runtime config if your build exposes that switch
- keep it on for relation-heavy workloads
- tune any thresholds only after measuring with real production-shaped queries
If your project has planner thresholds, start conservatively:
- prefer bounded strategies for one-to-one and unique includes
- prefer segmented or reduced strategies for one-to-many and many-to-many
- lower thresholds for deep includes with large child tables
- raise thresholds only after verifying lower fan-out in production data
How to verify the planner is helping
Use debug and onQuery.
Look for:
- large latency spikes on include-heavy queries
- unusually large result sets for a small parent page
- repeated slow nested includes on high-fanout relations
const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
debug: true,
onQuery: (info) => {
console.log(`${info.model}.${info.method} ${info.duration}ms`)
console.log(info.sql)
},
}),
) as SpeedClient<typeof basePrisma>
What good results look like:
- small parent page stays small in latency
- bounded child includes remain predictable
- high-fanout includes stop exploding row counts
- moving a heavy include into $batch or splitting it improves latency materially
Deployment without database access at build time
The cardinality planner collects relation statistics and roundtrip cost measurements directly from the database during prisma generate. In CI/CD pipelines or containerized builds, the database is often unreachable.
Skip planner during generation
Set PRISMA_SQL_SKIP_PLANNER=true to skip stats collection at generate time. The generator will emit default planner values instead.
PRISMA_SQL_SKIP_PLANNER=true npx prisma generate
Collect stats at runtime
Run prisma-sql-collect-stats as a pre-start step or background job, after deployment, when the database is reachable.
prisma-sql-collect-stats \
--output dist/prisma/generated/sql/planner.generated.js \
--prisma-client dist/prisma/generated/client/index.js
| Flag | Default | Description |
| ----------------- | -------------------------------------------------- | -------------------------------------------------------------- |
| --output | ./dist/prisma/generated/sql/planner.generated.js | Path to the generated planner module |
| --prisma-client | @prisma/client | Path to the compiled Prisma client (must expose Prisma.dmmf) |
The script reads DATABASE_URL from the environment (supports .env via dotenv). If the connection fails or times out, it exits silently without blocking startup.
Load stats at runtime
Use loadExternalPlannerStats to load planner stats from an external file at runtime. This is useful when the stats file is stored outside the generated SQL directory, for example on a persistent volume that survives redeployments.
import { loadExternalPlannerStats } from 'prisma-sql'
const loaded = loadExternalPlannerStats(
'/data/planner-stats/planner.generated.js',
)
if (loaded) {
console.log('Planner stats loaded from volume')
}
This applies RELATION_STATS, ROUNDTRIP_ROW_EQUIVALENT, and JSON_ROW_FACTOR to the global strategy estimator, overriding whatever was baked into the generated code at build time. It returns true on success, false if the file doesn't exist or can't be parsed.
Incremental collection
The collector supports incremental mode. When an output file already exists, it reads previous results and can skip work that doesn't need repeating.
Freshness check: If the existing file was written less than PRISMA_SQL_STATS_MAX_AGE_MS ago (default 24 hours), the collector exits immediately.
Fast mode: By default, the collector uses PostgreSQL catalog statistics (pg_stats) instead of running full GROUP BY + PERCENTILE_CONT queries per relation. This completes in seconds instead of minutes. If catalog stats appear stale (all relations show trivial cardinality), it falls back to precise mode automatically.
Slow edge skip: Each relation edge's collection time is recorded. On subsequent runs, edges that previously exceeded PRISMA_SQL_SLOW_EDGE_MS (default 10 seconds) are skipped and their previous stats are reused. This prevents a single large table from blocking the entire collection. Skipped edges are re-measured after PRISMA_SQL_STALE_EDGE_HOURS (default 168 hours / 7 days).
Per-edge timeout: Individual edge queries are bounded by PRISMA_SQL_EDGE_TIMEOUT_MS (default 30 seconds). On timeout, the collector falls back to previous stats or conservative defaults.
Environment variables
| Variable | Default | Description |
| ------------------------------- | ---------------- | ----------------------------------------------------------- |
| PRISMA_SQL_SKIP_PLANNER | false | Skip stats collection entirely during prisma generate |
| PRISMA_SQL_STATS_MAX_AGE_MS | 86400000 (24h) | Skip collection if existing stats are younger than this |
| PRISMA_SQL_STATS_MODE | fast | fast uses pg_stats catalog, precise uses full queries |
| PRISMA_SQL_SLOW_EDGE_MS | 10000 (10s) | Reuse cached stats for edges slower than this |
| PRISMA_SQL_EDGE_TIMEOUT_MS | 30000 (30s) | Abort individual edge query after this |
| PRISMA_SQL_STALE_EDGE_HOURS | 168 (7 days) | Force re-measure slow edges after this age |
| PRISMA_SQL_PLANNER_TIMEOUT_MS | 15000 | Total timeout for stats collection during prisma generate |
Example: background collection with persistent storage
For containerized deployments where the generated SQL directory should remain immutable from the image, write planner stats to a separate persistent path and load them at startup.
import { PrismaClient } from '@prisma/client'
import {
speedExtension,
loadExternalPlannerStats,
type SpeedClient,
} from './generated/sql'
import postgres from 'postgres'
import { execFile } from 'child_process'
const PLANNER_PATH = '/data/planner-stats/planner.generated.js'
loadExternalPlannerStats(PLANNER_PATH)
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
execFile(
'node',
[
'node_modules/.bin/prisma-sql-collect-stats',
'--output',
PLANNER_PATH,
'--prisma-client',
'dist/prisma/generated/client/index.js',
],
(err) => {
if (err) return console.error('Stats collection failed:', err.message)
loadExternalPlannerStats(PLANNER_PATH)
console.log('Planner stats refreshed')
},
)
With Docker Compose, mount only the stats directory:
services:
app:
image: myapp:latest
volumes:
- planner-stats:/data/planner-stats
volumes:
planner-stats:
Do not mount the generated SQL output directory (./generated/sql). That directory contains the generated query client (index.js), which should stay immutable from the image. Mounting it as a volume causes new deploys to run stale generated code from the first deployment.
Example scripts
{
"prisma:generate": "PRISMA_SQL_SKIP_PLANNER=true prisma generate",
"collect-planner-stats": "prisma-sql-collect-stats --output dist/prisma/generated/sql/planner.generated.js --prisma-client dist/prisma/generated/client/index.js",
"start:production": "yarn collect-planner-stats; node dist/src/index.js"
}
The semicolon (;) after collect-planner-stats ensures the server starts even if stats collection fails. Use && instead if you want startup to abort on failure.
What happens with default planner values
When stats are not collected, the planner uses conservative defaults:
- roundtripRowEquivalent: 73
- jsonRowFactor: 1.5
- relationStats: empty (all relations treated as unknown cardinality)
This means the planner cannot make informed decisions about join strategies. Queries still work correctly — the planner falls back to safe general-purpose strategies — but relation-heavy reads may not use the optimal execution plan.
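If collecting stats is not an option, you can still nudge the cost model by hand through the strategy config documented above; a sketch with illustrative (not measured) values:

speedExtension({
  postgres: sql,
  strategy: {
    roundtripRowEquivalent: 150, // illustrative: remote database, roundtrips are expensive
    defaultFanOut: 3,            // illustrative: relations in this schema are mostly sparse
  },
})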
Practical recommendations
For best results with the planner:
- index all relation keys
- encode one-to-one relations with @unique
- use stable orderBy
- cap nested collections with take
- page parents before including deep trees
- split unrelated heavy branches into $batch
- benchmark with real data distributions, not toy fixtures
Batch queries
$batch runs multiple independent read queries in one PostgreSQL round trip.
const dashboard = await prisma.$batch((batch) => ({
totalUsers: batch.user.count(),
activeUsers: batch.user.count({
where: { status: 'ACTIVE' },
}),
recentProjects: batch.project.findMany({
take: 5,
orderBy: { createdAt: 'desc' },
include: { organization: true },
}),
taskStats: batch.task.aggregate({
_count: true,
_avg: { estimatedHours: true },
where: { status: 'IN_PROGRESS' },
}),
}))
Rules
Do not await inside the callback.
Incorrect:
await prisma.$batch(async (batch) => ({
users: await batch.user.findMany(),
}))
Correct:
await prisma.$batch((batch) => ({
users: batch.user.findMany(),
}))
Best use cases
- dashboards
- analytics summaries
- counts + page data
- multiple independent aggregates
- splitting unrelated heavy reads instead of building one massive include tree
Limitations
- PostgreSQL only
- queries are independent
- not transactional
- use $transaction when you need transactional guarantees
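For reference, $transaction is Prisma Client's standard API and is separate from $batch; a minimal sketch (if in doubt about interaction with the extension, run it on the unextended basePrisma):

// Reads that need transactional guarantees use Prisma's $transaction instead of $batch
const [user, postCount] = await prisma.$transaction([
  prisma.user.findUnique({ where: { id: 1 } }),
  prisma.post.count(),
])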
Configuration
Debug logging
const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
debug: true,
}),
) as SpeedClient<typeof basePrisma>
Performance hook
const prisma = basePrisma.$extends(
speedExtension({
postgres: sql,
onQuery: (info) => {
console.log(`${info.model}.${info.method}: ${info.duration}ms`)
console.log(`prebaked=${info.prebaked}`)
},
}),
) as SpeedClient<typeof basePrisma>
The callback receives:
interface QueryInfo {
model: string
method: string
sql: string
params: unknown[]
duration: number
prebaked: boolean
}
Generator configuration
generator sql {
provider = "prisma-sql-generator"
// optional
// dialect = "postgres"
// optional
// output = "./generated/sql"
// optional
// skipInvalid = "true"
}
@optimize examples
Basic prebaked query
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "ACTIVE" }
/// }
/// }
model User {
id Int @id
status String
}
Dynamic parameters
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "where": { "status": "$status" },
/// "skip": "$skip",
/// "take": "$take"
/// }
/// }
model User {
id Int @id
status String
}
Nested include
/// @optimize {
/// "method": "findMany",
/// "query": {
/// "include": {
/// "posts": {
/// "where": { "published": true },
/// "orderBy": { "createdAt": "desc" },
/// "take": 5
/// }
/// }
/// }
/// }
model User {
id Int @id
posts Post[]
}
Edge usage
Vercel Edge
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
export const config = { runtime: 'edge' }
export default async function handler() {
const users = await prisma.user.findMany()
return Response.json(users)
}
Cloudflare Workers
Use the standalone SQL generation API.
import { createToSQL } from 'prisma-sql'
import { MODELS } from './generated/sql'
const toSQL = createToSQL(MODELS, 'sqlite')
export default {
async fetch(request: Request, env: Env) {
const { sql, params } = toSQL('User', 'findMany', {
where: { status: 'ACTIVE' },
})
const result = await env.DB.prepare(sql)
.bind(...params)
.all()
return Response.json(result.results)
},
}
Performance
Performance depends on:
- database type
- query shape
- indexing
- relation fan-out
- whether the query is prebaked
- whether the cardinality planner can choose a bounded strategy
Typical gains are strongest when:
- Prisma overhead dominates total time
- includes are moderate but structured well
- query shapes repeat
- indexes exist on relation and filter columns
Run your own benchmarks on production-shaped data.
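One low-effort way to gather those numbers from real traffic is the onQuery hook described in the Configuration section; a sketch (the aggregation logic is only an illustration):

const durations: Record<string, number[]> = {}

const prisma = basePrisma.$extends(
  speedExtension({
    postgres: sql,
    onQuery: (info) => {
      const key = `${info.model}.${info.method}`
      const list = durations[key] ?? (durations[key] = [])
      list.push(info.duration) // collect per-query timings to compare against plain Prisma
    },
  }),
) as SpeedClient<typeof basePrisma>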
Troubleshooting
speedExtension requires postgres or sqlite client
Pass a database-native client to the generated extension.
const prisma = new PrismaClient().$extends(speedExtension({ postgres: sql }))
Generated dialect mismatch
If generated code targets PostgreSQL, do not pass SQLite, and vice versa.
Override dialect in the generator if needed.
generator sql {
provider = "prisma-sql-generator"
dialect = "postgres"
}
Results differ from Prisma
Turn on debug logging and compare generated SQL with Prisma query logs.
const prisma = new PrismaClient().$extends(
speedExtension({
postgres: sql,
debug: true,
}),
)
If behavior differs, open an issue with:
- schema excerpt
- Prisma query
- generated SQL
- expected result
- actual result
Performance is worse on a relation-heavy query
Check these first:
- missing foreign-key indexes
- deep unbounded includes
- no nested take
- unstable or missing orderBy
- high-fanout relation trees that should be split into $batch
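A typical remediation sketch (field names assumed):

// Bound and order the nested include so the planner can pick a bounded strategy
const page = await prisma.user.findMany({
  take: 20,
  include: {
    posts: { take: 5, orderBy: { createdAt: 'desc' } },
  },
})

// Or split independent heavy branches into one $batch roundtrip (PostgreSQL)
const heavy = await prisma.$batch((batch) => ({
  users: batch.user.findMany({ take: 20 }),
  postCount: batch.post.count(),
}))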
Connection pool exhaustion
Increase postgres.js pool size if needed.
const sql = postgres(process.env.DATABASE_URL!, {
max: 50,
})
Limitations
Partially supported
- basic array operators
- basic JSON path filtering
Not yet supported
These should fall back to Prisma:
- full-text search
- composite/document-style embedded types
- vendor-specific extensions not yet modeled by the SQL builder
- some advanced groupBy edge cases
FAQ
Do I still need Prisma?
Yes. Prisma remains the source of truth for schema, migrations, types, writes, and fallback behavior.
Does this replace Prisma Client?
No. It extends Prisma Client.
What gets accelerated?
Supported read queries only.
What about writes?
Writes continue through Prisma.
Do I need @optimize?
No. It is optional. It only reduces the overhead of repeated hot query shapes.
Does $batch work with SQLite?
Not currently.
Is it safe to use in production?
Use it the same way you would adopt any query-path optimization layer: benchmark it on real data, compare against Prisma for parity, and keep Prisma fallback enabled for unsupported cases.
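A simple parity check (a sketch; keep the unextended basePrisma around for comparison):

const args = { where: { status: 'ACTIVE' } }
const accelerated = await prisma.user.findMany(args)
const plain = await basePrisma.user.findMany(args)
// Serialized comparison is a rough check; field-by-field assertions are stricter
console.assert(JSON.stringify(accelerated) === JSON.stringify(plain))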
Migration
Before
const prisma = new PrismaClient()
const users = await prisma.user.findMany()
After
import { PrismaClient } from '@prisma/client'
import { speedExtension, type SpeedClient } from './generated/sql'
import postgres from 'postgres'
const sql = postgres(process.env.DATABASE_URL!)
const basePrisma = new PrismaClient()
export const prisma = basePrisma.$extends(
speedExtension({ postgres: sql }),
) as SpeedClient<typeof basePrisma>
const users = await prisma.user.findMany()
Examples
- examples/generator-mode
- tests/e2e/postgres.test.ts
- tests/e2e/sqlite.e2e.test.ts
- tests/sql-injection/batch-transaction.test.ts
Development
git clone https://github.com/multipliedtwice/prisma-to-sql
cd prisma-sql
npm install
npm run build
npm test
License
MIT
