@ma.vu/sqlove
v0.1.1
💚 sqlove
Type-safe SQL in TypeScript, powered by Effect.
Write plain .sql files. Run sqlove. Get fully typed Effect functions with Schema-validated row types. No ORM, no query builder, no magic strings. Just SQL — embraced.
Inspired by Squirrel for Gleam. Same philosophy, same conventions, built for TypeScript and the Effect ecosystem.
Why
Every ORM sells you the same pitch: "you don't need to write SQL." But here's what actually happens:
- You learn the ORM
- You learn SQL anyway (because you have to)
- You fight the ORM
- You drop into raw queries for the hard stuff
- You maintain two mental models forever
The ORM didn't remove SQL from your life. It added a layer on top of it — a layer you now need to learn, debug, and work around. And when something breaks at 3am, you're not reading your query. You're reading the ORM's source code, searching "TypeORM QueryBuilder n+1 problem" on Stack Overflow, debugging a .leftJoinAndSelect().withRelations().eager() chain that generates SQL you can't predict.
sqlove takes the opposite approach. You write SQL. You keep full control. The only thing sqlove does is delete the boilerplate — the type definitions, the parameter wiring, the row decoding. The code you would have written by hand, minus the tedium.
That's it. There's nothing else to learn. There's nothing else to break.
How it works
You write a query:
-- src/todos/sql/find_by_tag.sql
-- Find todos that have a given tag.
SELECT id, title, priority, tags, done
FROM todo
WHERE $1 = ANY(tags)
ORDER BY created_at DESC

Run sqlove:
DATABASE_URL=postgres://... npx sqlove

Get a typed Effect function:
// src/todos/sql.ts (auto-generated)
export class FindByTagRow extends Schema.Class<FindByTagRow>("FindByTagRow")({
id: Schema.Number,
title: Schema.String,
priority: TodoPriority,
tags: Schema.Array(Schema.String),
done: Schema.Boolean,
}) {}
export const findByTag = (
params: { readonly arg1: string }
): Effect.Effect<ReadonlyArray<FindByTagRow>, SqlError, SqlClient> =>
SqlClient.pipe(Effect.flatMap((sql) =>
sql<FindByTagRow>`SELECT id, title, priority, tags, done
FROM todo
WHERE ${params.arg1} = ANY(tags)
ORDER BY created_at DESC`
));

Use it:
const todos = yield* findByTag({ arg1: "work" })

That's the whole tool.
The generated code
sqlove generates code that targets Effect + @effect/sql + Schema. Here's what you get:
- Schema.Class for row types — runtime decodable, statically typed
- Schema.Literal for Postgres enums — TypeScript string unions
- Schema.NullOr for nullable columns
- Effect.Effect<ReadonlyArray<Row>, SqlError, SqlClient> for queries
- Effect.Effect<void, SqlError, SqlClient> for mutations without RETURNING
- No-param queries are const values, param queries are functions
- SQL $1 params get names inferred from context (WHERE email = $1 → { email: string })
The connection is a service in the Effect context. Errors are typed in the error channel. You never pass db around — you provide the layer once at the edge and compose freely.
Quick start
npm install sqlove --save-dev

Your project needs the Effect SQL packages (they're peer dependencies):
npm install effect @effect/sql @effect/sql-pg @effect/platform @effect/experimental

1. Have a database
sqlove reads types from a running Postgres. You manage the schema however you want — raw SQL migrations, dbmate, Flyway, doesn't matter. sqlove only reads, never writes to your database.
2. Write .sql files
Create a sql/ directory anywhere under src/. Each .sql file contains exactly one query:
src/
todos/
sql/
list_todos.sql
create_todo.sql
complete_todo.sql
sql.ts ← generated by sqlove

3. Generate
# One-shot (CI / build)
DATABASE_URL=postgres://... npx sqlove
# Verify generated code is current (CI check)
DATABASE_URL=postgres://... npx sqlove check

4. Use
import { Effect, ManagedRuntime, Redacted } from "effect"
import { PgClient } from "@effect/sql-pg"
import { listTodos, completeTodo, stats } from "./todos/sql.js"
const runtime = ManagedRuntime.make(
PgClient.layer({ url: Redacted.make(process.env.DATABASE_URL!) })
)
const program = Effect.gen(function* () {
const todos = yield* listTodos
const [todo] = todos
yield* completeTodo({ id: todo.id })
const [counts] = yield* stats
yield* Effect.log(`${counts.done}/${counts.total} done`)
})
runtime.runPromise(program)

Conventions
| Convention | Detail |
|---|---|
| Directory | Any directory named sql/ under your source root |
| One query per file | find_user.sql → findUser function |
| Output location | sql.ts placed next to each sql/ directory |
| Function names | File name snake_case → camelCase |
| Row types | PascalCase + Row suffix (FindUserRow) |
| Enum types | Postgres enum user_status → UserStatus string union |
| Comments | Leading -- comments become JSDoc |
| Parameters | $1, $2 — names inferred from SQL context |
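The naming conventions above (snake_case file names to camelCase functions and PascalCase row types) can be sketched in a few lines. This is illustrative only; `toCamel` and `toPascal` are hypothetical helpers, not sqlove's source.

```typescript
// Illustrative sketch of the naming conventions (not sqlove's actual code).
// find_user.sql → findUser (function) and FindUserRow (row type).
const toCamel = (s: string): string =>
  s.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase())

const toPascal = (s: string): string => {
  const c = toCamel(s)
  return c.charAt(0).toUpperCase() + c.slice(1)
}

const base = "find_user.sql".replace(/\.sql$/, "")
console.log(toCamel(base))          // function name
console.log(toPascal(base) + "Row") // row type name
```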
Parameter name inference
sqlove reads your SQL and figures out what $1 means:
WHERE email = $1 → { email: string }
WHERE age >= $2 → { age: number }
INSERT INTO t (a, b) VALUES ($1, $2) → { a: ..., b: ... }
SET name = $2 WHERE id = $1 → { id: ..., name: ... }

Falls back to arg1, arg2 when it can't infer.
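The inference idea can be sketched with a toy function. sqlove parses the real SQL AST; this regex version (with a hypothetical `inferParamNames` helper) only handles the simple `column <op> $n` shape, but it shows the principle.

```typescript
// Toy sketch of parameter-name inference (not sqlove's implementation:
// sqlove uses a real SQL parser; this regex only catches `col <op> $n`).
const inferParamNames = (sql: string): Record<number, string> => {
  const names: Record<number, string> = {}
  for (const m of sql.matchAll(/([a-z_][a-z0-9_]*)\s*(?:=|>=|<=|<>|>|<)\s*\$(\d+)/gi)) {
    names[Number(m[2])] = m[1]
  }
  return names
}

console.log(inferParamNames("UPDATE t SET name = $2 WHERE id = $1"))
```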
Nullability detection
sqlove detects nullable columns automatically across five layers, plus a manual override:
Layer 1: Table constraints
Columns marked NOT NULL (pg_attribute.attnotnull) are non-nullable. Columns without it are nullable. This covers most cases.
Layer 2: Outer joins
EXPLAIN (GENERIC_PLAN) gives us the query plan tree. We walk it to find which table aliases are on the nullable side of LEFT/RIGHT/FULL joins. Same approach as Squirrel. Handles schema-qualified names, subqueries, CTEs, LATERAL, nested parens — Postgres resolves everything, we just read the plan.
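Walking the plan for nullable-side aliases can be sketched against Postgres's JSON plan output. This is a conceptual illustration, not sqlove's actual walker: the field names mirror `EXPLAIN (FORMAT JSON)`, and the assumption that the second child of a hash join is the inner (nullable) relation is a simplification.

```typescript
// Conceptual sketch: collect table aliases on the nullable side of outer
// joins in an EXPLAIN (FORMAT JSON)-shaped plan tree. Not sqlove's code.
type PlanNode = {
  "Node Type"?: string
  "Join Type"?: string
  Alias?: string
  Plans?: PlanNode[]
}

const collectAliases = (n: PlanNode, out: Set<string>): void => {
  if (n.Alias) out.add(n.Alias)
  for (const c of n.Plans ?? []) collectAliases(c, out)
}

const nullableAliases = (n: PlanNode, out = new Set<string>()): Set<string> => {
  const kids = n.Plans ?? []
  // LEFT join: everything under the inner (second) child is nullable,
  // RIGHT join: the outer (first) child, FULL join: both sides.
  if (n["Join Type"] === "Left" && kids[1]) collectAliases(kids[1], out)
  if (n["Join Type"] === "Right" && kids[0]) collectAliases(kids[0], out)
  if (n["Join Type"] === "Full") kids.forEach((k) => collectAliases(k, out))
  for (const c of kids) nullableAliases(c, out)
  return out
}

// users u LEFT JOIN posts p: columns from p become nullable.
const plan: PlanNode = {
  "Node Type": "Hash Join", "Join Type": "Left",
  Plans: [{ Alias: "u" }, { "Node Type": "Hash", Plans: [{ Alias: "p" }] }],
}
console.log([...nullableAliases(plan)])
```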
Layer 3: Expression nullability (AST + pg_proc.proisstrict)
libpg-query — Postgres's own C parser compiled to WASM — parses the SQL into a typed AST. pg_proc.proisstrict tells us which functions return NULL on NULL input (strict functions). Together they handle function null propagation without hardcoded lists:
| SQL expression | How detected | Nullable? |
|---|---|---|
| upper(bio), abs(age), trim(x) | pg_proc.proisstrict + nullable arg | ✅ strict fn, null propagates |
| upper(name) (NOT NULL col) | pg_proc.proisstrict + non-null arg | ❌ strict fn, but arg is non-null |
| age + 1 (nullable col) | operator → pg_proc.proisstrict | ✅ arithmetic propagates null |
| max(x), min(x), sum(x), avg(x) | FuncCall (SQL spec) | ✅ null on zero rows |
| string_agg(x, ','), array_agg(x) | FuncCall (SQL spec) | ✅ null on zero rows |
| count(*), count(x) | FuncCall (SQL spec) | ❌ always returns a number |
| coalesce(x, default) | CoalesceExpr + arg analysis | ❌ at least one non-null arg |
| coalesce(nullable, nullable) | CoalesceExpr + arg analysis | ✅ ALL args nullable |
| metadata->>'key', data->'key' | A_Expr(->>/->) | ✅ key might not exist |
| NULLIF(a, b) | A_Expr(NULLIF) | ✅ null when equal |
| CASE WHEN x THEN y END | CaseExpr | ✅ no ELSE = implicit null |
| lag(x) OVER (...), lead(x) | FuncCall + OVER | ✅ null at boundary |
| sum(x) OVER (...) | FuncCall + OVER | ❌ window partition always has rows |
| (SELECT x FROM t LIMIT 1) | SubLink | ✅ null on no match |
| x::type | TypeCast | inherits inner |
Layer 4: CTE / subquery passthrough
Columns from CTEs and subqueries lose their tableOID. For these, we check pg_attribute by column name — if any table in the database has that column as nullable, we mark it nullable.
Layer 5: WHERE clause narrowing
The NULL-substitution technique (from StarRocks / PostgreSQL's own planner): substitute NULL for a column in the WHERE predicate. If the predicate becomes FALSE or NULL → row would be filtered → column is non-null in results.
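SQL's three-valued logic makes the substitution test mechanical. A conceptual sketch (not sqlove's code; `and`, `or`, and `isNotNull` are hypothetical helpers modelling SQL semantics):

```typescript
// SQL three-valued logic: a predicate evaluates to true, false, or null,
// and only true keeps the row. Substitute NULL for the column and evaluate.
type Tri = true | false | null

const and = (a: Tri, b: Tri): Tri =>
  a === false || b === false ? false : a === null || b === null ? null : true

const or = (a: Tri, b: Tri): Tri =>
  a === true || b === true ? true : a === null || b === null ? null : false

const isNotNull = (v: unknown): Tri => v !== null

// WHERE bio IS NOT NULL AND age > 18, with bio := NULL
const p1 = and(isNotNull(null), true) // false → row filtered → bio non-null in results

// WHERE bio IS NOT NULL OR age > 18, with bio := NULL
const p2 = or(isNotNull(null), true) // true → null-bio row can survive → still nullable

console.log(p1, p2)
```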
-- All of these make bio non-null in results:
WHERE bio IS NOT NULL -- direct null test
WHERE bio = $1 -- = is strict, NULL = x → NULL → filtered
WHERE length(bio) > 0 -- length is strict, length(NULL) → NULL
WHERE bio IS NOT NULL AND age > 18 -- AND: any arm rejects nulls

-- This does NOT make bio non-null:
WHERE bio IS NOT NULL OR age > 18 -- OR: other branch can match null-bio rows

Layer 6: Manual override — ? and !
For the rare edge cases the tool can't detect:
-- Force nullable: custom function that might return null
SELECT my_custom_func(name) AS "result?" FROM users
-- Force non-null: you know something the tool can't prove
SELECT bio AS "bio!" FROM users WHERE some_complex_condition

The suffix is stripped from the generated field name. Postgres never sees it.
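Parsing the override suffix is a one-liner. A sketch (illustrative; `parseAlias` is a hypothetical helper, not sqlove's API):

```typescript
// Sketch of ?/! override parsing: the trailing character decides nullability
// and is stripped from the generated field name. Not sqlove's actual code.
const parseAlias = (alias: string): { name: string; override: string } => {
  const m = alias.match(/^(.*?)([?!])?$/)!
  const name = m[1]
  const override =
    m[2] === "?" ? "nullable" : m[2] === "!" ? "non-null" : "auto"
  return { name, override }
}

console.log(parseAlias("result?")) // force nullable
console.log(parseAlias("bio!"))    // force non-null
console.log(parseAlias("title"))   // no override: automatic detection
```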
Known limitations
| Pattern | What the tool does | Why |
|---|---|---|
| Non-strict custom function returning null | Says not nullable | Can't know function behavior |
| Strict function returning null on non-null input | Says not nullable | Rare — proisstrict only guarantees null-in→null-out |
| WHERE in nested CTE/view | Not analyzed | Only top-level WHERE is checked |
Type mapping
| Postgres | Schema | TypeScript | Notes |
|---|---|---|---|
| bool | Schema.Boolean | boolean | |
| int2, int4 | Schema.Number | number | |
| int8 | Schema.String | string | JS loses precision on bigint |
| float4, float8 | Schema.Number | number | |
| numeric | Schema.String | string | Arbitrary precision |
| text, varchar, char | Schema.String | string | |
| uuid | Schema.String | string | |
| json, jsonb | Schema.Unknown | unknown | You narrow it |
| timestamp, timestamptz | Schema.DateFromString | Date | |
| date, time, interval | Schema.String | string | |
| bytea | Schema.instanceOf(Buffer) | Buffer | |
| <type>[] | Schema.Array(...) | ReadonlyArray<T> | Recursive |
| User enum | Schema.Literal(...) | String union | Auto-detected |
| Nullable column | Schema.NullOr(...) | T \| null | Via pg_attribute |
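"You narrow it" for json/jsonb means the generated row gives you `unknown` and you validate at the edge. A minimal sketch with a plain type guard (the generated code itself uses Effect Schema; the `Settings` shape and `isSettings` helper here are hypothetical):

```typescript
// A jsonb column arrives as `unknown`; narrow it yourself before use.
// Hypothetical example shape; in an Effect codebase you would typically
// decode with a Schema instead of a hand-written guard.
interface Settings {
  theme: "light" | "dark"
  notifications: boolean
}

const isSettings = (u: unknown): u is Settings => {
  if (typeof u !== "object" || u === null) return false
  const r = u as Record<string, unknown>
  return (
    (r.theme === "light" || r.theme === "dark") &&
    typeof r.notifications === "boolean"
  )
}

const raw: unknown = JSON.parse('{"theme":"dark","notifications":true}')
console.log(isSettings(raw) ? (raw as Settings).theme : "invalid")
```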
Composition
The generated functions are just Effects. You compose them with yield*:
// Generated — single database operations (atoms)
listTodos // one SELECT, no params, it's a const
getTodo({ id: 1 }) // one SELECT, has params, it's a function call
deleteTodo({ id: 1 }) // one DELETE
// Composed — your business logic (molecules)
export const toggleTodo = (id: number) =>
Effect.gen(function* () {
const [todo] = yield* getTodo({ id })
if (todo.done) {
const [reopened] = yield* reopenTodo({ id: todo.id })
yield* Effect.log(`Reopened: "${reopened.title}"`)
return reopened
}
const [completed] = yield* completeTodo({ id: todo.id })
yield* Effect.log(`Completed: "${completed.title}"`)
return completed
})

sqlove generates the atoms. You compose the molecules. Same type, same yield*, same error channel. The boundary between generated and hand-written disappears.
With Hono
import { Hono } from "hono"
import { ManagedRuntime, Redacted } from "effect"
import { PgClient } from "@effect/sql-pg"
import { listTodos, createTodo } from "./todos/sql.js"
import { toggleTodo, purgeCompleted } from "./workflows.js"
const runtime = ManagedRuntime.make(
PgClient.layer({ url: Redacted.make(process.env.DATABASE_URL!) })
)
const app = new Hono()
app.get("/todos", async (c) =>
c.json(await runtime.runPromise(listTodos))
)
app.post("/todos", async (c) => {
const body = await c.req.json()
const [row] = await runtime.runPromise(
createTodo({ title: body.title, description: body.description, priority: body.priority })
)
return c.json(row, 201)
})
app.post("/todos/:id/toggle", async (c) =>
c.json(await runtime.runPromise(toggleTodo(Number(c.req.param("id")))))
)
app.post("/todos/purge", async (c) =>
c.json(await runtime.runPromise(purgeCompleted))
)

Build the runtime once. Each handler is one line. Hono doesn't know about Effect. Effect doesn't know about Hono. sqlove doesn't know about either. Three layers, zero overlap.
The database is your source of truth
sqlove does not manage your schema. It doesn't create tables, run migrations, or generate DDL. It asks your running Postgres — via the wire protocol — "if I run this query, what types come back?" and generates TypeScript from the answer.
You manage migrations however you want:
src/
db/
migrations/
001_create_users.sql
002_add_status_enum.sql
003_add_tags_column.sql
users/
sql/
find_user.sql
list_users.sql
sql.ts ← generated by sqlove

The flow:
1. Write migration ← any tool: dbmate, flyway, raw psql
2. Run migration ← schema changes in the live DB
3. Write .sql query files ← your queries
4. Run sqlove ← types derived from the live DB
5. CI: sqlove check ← ensures generated code matches

No schema file to keep in sync. No ORM model to update. The database is the truth, and sqlove reads it directly.
Architecture
Three phases, under 600 lines total:
| Phase | What it does |
|---|---|
| Discover | Walk src/**/sql/*.sql, collect files |
| Introspect | Send each query to Postgres via extended query protocol (Parse + Describe), resolve types from pg_type / pg_attribute / pg_enum |
| Generate | Emit deterministic TypeScript (sorted alphabetically, only written when content changes) |
No query is ever executed. sqlove uses Postgres protocol-level introspection only.
Performance: 50 queries described, typed, and generated in ~45ms. The bottleneck is Postgres round-trips. Parser and codegen are sub-millisecond.
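The Discover phase above is the simplest of the three; a sketch of what it might look like (illustrative only; `discover` is a hypothetical function, not sqlove's source):

```typescript
// Sketch of the Discover phase: recursively collect *.sql files that live
// in a directory literally named sql/. Not sqlove's actual walker.
import { mkdtempSync, mkdirSync, writeFileSync, readdirSync, statSync } from "node:fs"
import { tmpdir } from "node:os"
import { join, basename } from "node:path"

const discover = (root: string): string[] => {
  const out: string[] = []
  const walk = (dir: string): void => {
    for (const entry of readdirSync(dir)) {
      const p = join(dir, entry)
      if (statSync(p).isDirectory()) walk(p)
      else if (basename(dir) === "sql" && entry.endsWith(".sql")) out.push(p)
    }
  }
  walk(root)
  return out.sort() // deterministic ordering, like the Generate phase
}

// Tiny demo fixture in a temp directory
const root = mkdtempSync(join(tmpdir(), "sqlove-demo-"))
mkdirSync(join(root, "todos", "sql"), { recursive: true })
writeFileSync(join(root, "todos", "sql", "list_todos.sql"), "SELECT 1")
writeFileSync(join(root, "todos", "notes.sql"), "-- ignored: not under sql/")
console.log(discover(root).map((p) => basename(p)))
```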
CLI
sqlove Generate typed code (one-shot)
sqlove check Verify generated files are current (CI)
sqlove --src <dir> Source directory (default: ./src)
sqlove --help Help
sqlove --version Version

Connection
Set DATABASE_URL or individual vars:
DATABASE_URL=postgres://user:pass@host:5432/dbname
# Or
PGHOST=localhost PGPORT=5432 PGUSER=postgres PGDATABASE=myapp

Influence
sqlove is a direct port of the ideas from Squirrel by Giacomo Cavalieri. Squirrel does the same thing for Gleam — plain SQL files in, typed functions out, convention over configuration, the database as the source of truth. If you're writing Gleam, use Squirrel. If you're writing TypeScript with Effect, use sqlove. The philosophy is identical.
Squirrel's tagline says it best: instead of trying to hide SQL, embrace it and stay in control.
Development
git clone https://github.com/ultrox/sqlove
cd sqlove
npm install
docker compose up -d --wait # Postgres on :5555, schema auto-applied
npm run typecheck # type check
npm test # 106 tests, all real, zero mocks
npm run test:coverage # coverage report

License
Apache-2.0
