pipework v0.7.21
# Pipework
A TypeScript framework for multi-tenant SaaS applications. PostgreSQL-only. Owns the wiring between Drizzle, Fastify, Vitest, and Zod so your application code never has to.
## What it does

Pipework eliminates the infrastructure decisions that every SaaS backend makes independently and gets wrong:

- Database wiring — named connections, pooling, context-aware `pipe()` accessor, test isolation
- Request context — AsyncLocalStorage propagation of auth, tenant, and transactions from HTTP entry to database query
- DI builder — type-safe handler composition with `.use()`, `.auth()`, `.input()`, `.output()`, `.route()`, `.fit()`
- Multi-tenant scoping — `SET LOCAL` propagation, tenant extraction from auth, UUID validation
- Auth chain — pluggable strategies, enrichers, session management with JWT refresh rotation, cookie-based token delivery
- Multi-org auth — org selection on login, org switching, membership management via `auth.createMultiOrg()`
- RBAC — hierarchical scope resolution, permission caching, DI integration via `.permission()`
- Background jobs — Postgres-backed queue with `SKIP LOCKED`, heartbeat, reaper, `LISTEN`/`NOTIFY`, synchronous wait (`jobs.enqueueAndWait`)
- Cron scheduling — recurring jobs with cron expressions, dedup, catch-up on missed ticks
- Temporal records — SCD Type 2 versioning with atomic `temporal.revise()`, point-in-time queries
- Resource CRUD — `fixture` builder with cursor-based pagination, batch operations (preview + execute), auth, tenant scoping, 404/405 handling
- Composable behaviors — `behavior.compose()` to stack versioned + audited + cached on any resource
- State machines — validated transitions with guards and audit integration
- Pipelines — ordered sync/async step execution with persistent state and resume
- HTTP security — CORS, Helmet, rate limiting via `http.createServer()`, production startup validation (refuses insecure defaults)
- Structured logging — context-aware `log` proxy with automatic correlation fields (requestId, tenantId, userId, sessionId, traceId, jobType), configurable redaction, `X-Request-Id` response header
- OpenAPI — automatic schema generation from handler metadata, served at `/openapi.json`
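The request-context item above builds on Node's `AsyncLocalStorage`: the HTTP entry point seeds a store once, and everything downstream on the same async call chain can read it without threading parameters through every signature. A minimal sketch of the underlying pattern — the names (`withContext`, `currentTenant`, the context shape) are hypothetical, not pipework's actual API:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Hypothetical context shape — pipework's real context carries more fields.
interface RequestContext {
  requestId: string
  tenantId: string
  userId: string
}

const storage = new AsyncLocalStorage<RequestContext>()

// The entry point (e.g. HTTP middleware) seeds the store once...
function withContext<T>(ctx: RequestContext, fn: () => T): T {
  return storage.run(ctx, fn)
}

// ...and any code on the same async call chain can read it back.
function currentTenant(): string {
  const ctx = storage.getStore()
  if (!ctx) throw new Error('called outside a request context')
  return ctx.tenantId
}

async function deepInCallStack(): Promise<string> {
  await Promise.resolve() // the context survives await boundaries
  return currentTenant()
}

withContext({ requestId: 'r1', tenantId: 't42', userId: 'u7' }, () => deepInCallStack())
  .then((tenant) => console.log(tenant)) // logs "t42"
```

The same mechanism is what lets a framework carry auth, tenant, and an open transaction from an HTTP handler down into a database call without explicit plumbing.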
## Quick example
```ts
// pipework.config.ts — the single entry point
import { createManifold } from 'pipework'

export default createManifold({
  databases: {
    app: {
      url: 'DATABASE_URL',
      testUrl: 'DATABASE_URL_TEST',
      schema: './src/db/schema.ts',
      migrations: './migrations/app',
    },
  },
})
```

```ts
// src/domains.ts — define once, project everywhere
import { pipe } from 'pipework'

export const User = pipe.define('users', {
  id: pipe.field.uuid().brand('UserId').primaryKey().defaultRandom(),
  name: pipe.field.text().min(1).max(255),
  email: pipe.field.text().email().unique(),
  role: pipe.field.enum(['admin', 'member'] as const),
  tenantId: pipe.field.uuid().tenant(),
})

// User.table()       → Drizzle pgTable
// User.insertShape() → zod validator (omits PK, optionalizes defaults)
// User.selectShape() → zod validator (all fields)
// User.factory()     → test data builder with FK resolution
// User.Select        → { id: Branded<string, 'UserId'>; name: string; ... }
```

```ts
// src/server.ts — import the manifold, build your app
import pipework from '../pipework.config.js'
import { http, fitting, schema } from 'pipework'
import { notes } from './db/schema.js'

const createNote = fitting
  .use()
  .auth<{ userId: string; tenantId: string }>()
  .input(schema.check.object({ title: schema.check.string(), body: schema.check.string() }))
  .route('POST', '/notes')
  .fit(async ({ db, auth, input }) => {
    const [note] = await db
      .insert(notes)
      .values({ ...input, tenantId: auth.tenantId })
      .returning()
    return note
  })

const server = http.createServer(pipework, {
  auth: { strategy: myAuthStrategy },
  tenant: { extract: (auth) => auth.tenantId },
})
server.registerHandlers([createNote])
await server.listen()
```

## Install
```sh
pnpm add pipework
pnpm add -D vitest
```

Then scaffold your project:

```sh
npx pipework init
```

This creates `pipework.config.ts` at your project root — the single entry point for config, runtime instance, CLI, and test setup.
## Testing

```ts
// vitest.config.ts — integration tests (auto DB setup + per-test isolation)
import { defineTestConfig } from 'pipework/vitest'

export default defineTestConfig()
```

```ts
// vitest.unit.config.ts — pure unit tests (no DB required)
import { defineTestConfig } from 'pipework/vitest'

export default defineTestConfig({ database: false })
```

```sh
# .env.test
DATABASE_URL_TEST=postgresql://user:pass@localhost:5432/myapp_test
```

Every integration test runs in a rollback transaction. No cleanup, no state leaks. Unit test configs with `database: false` skip all DB machinery — safe to use in CI steps without a database.
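The per-test rollback pattern is worth understanding even when the framework wires it for you: begin a transaction before each test, run the test inside it, and roll back afterward so writes never persist. A toy sketch against a hypothetical in-memory store — pipework applies the same idea at the Postgres level:

```typescript
// In-memory stand-in for a database table (illustrative only).
type Row = { id: number; name: string }

class FakeDb {
  rows: Row[] = []
  private snapshot: Row[] | null = null

  begin() {
    // Like BEGIN: remember the state we can roll back to.
    this.snapshot = this.rows.map((r) => ({ ...r }))
  }
  rollback() {
    // Like ROLLBACK: restore the pre-transaction state.
    if (this.snapshot) this.rows = this.snapshot
    this.snapshot = null
  }
  insert(row: Row) {
    this.rows.push(row)
  }
}

// The wrapper a test config applies around every integration test.
async function inRollbackTransaction(db: FakeDb, test: (db: FakeDb) => void | Promise<void>) {
  db.begin()
  try {
    await test(db)
  } finally {
    db.rollback() // writes from this test never leak into the next one
  }
}

async function demo() {
  const db = new FakeDb()
  await inRollbackTransaction(db, (tx) => {
    tx.insert({ id: 1, name: 'Ada' })
    console.log(tx.rows.length) // 1 — visible inside the test
  })
  console.log(db.rows.length) // 0 — rolled back, no cleanup needed
}
demo()
```

Rollback-based isolation is why the docs can promise "no cleanup, no state leaks": the commit simply never happens.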
## Documentation

API documentation lives in two places:

- `REFERENCE.md` — auto-generated from source JSDoc. Complete API surface, organized by namespace.
- JSDoc on every public export — enforced by `pnpm lint:jsdoc`. Read it inline in your editor or via the reference file.

For Claude Code users, the `/pipework` skill provides usage guidance, API reference, and project auditing.
## Design principles

- Pipework first. Orchestrates Drizzle, Fastify, Vitest, and Zod so your application code doesn't have to.
- Constraints over conventions. If a rule can be enforced in code, it is.
- Fail at startup, not at first use. Missing config throws immediately, stating what's wrong and how to fix it.
- PostgreSQL-only. No database abstraction. Build deep on Postgres features.
- TypeScript strict mode. `exactOptionalPropertyTypes`, `noUncheckedIndexedAccess` — all enabled.
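"Fail at startup, not at first use" typically means validating every required setting before the server binds a port, and throwing one error that names each problem and its fix. A small sketch of that principle — the keys (`DATABASE_URL`, `COOKIE_SECURE`) and messages are hypothetical, not pipework's real config schema:

```typescript
// Collect all config problems up front instead of failing lazily
// on the first query or the first request.
function validateStartupConfig(env: Record<string, string | undefined>): string[] {
  const problems: string[] = []
  if (!env.DATABASE_URL) {
    problems.push('DATABASE_URL is not set — point it at your Postgres instance')
  }
  if (env.NODE_ENV === 'production' && env.COOKIE_SECURE === 'false') {
    problems.push('COOKIE_SECURE=false is refused in production — remove the override')
  }
  return problems
}

function assertStartupConfig(env: Record<string, string | undefined>): void {
  const problems = validateStartupConfig(env)
  if (problems.length > 0) {
    // One throw, every problem listed, each with a remediation hint.
    throw new Error(`Refusing to start:\n- ${problems.join('\n- ')}`)
  }
}

assertStartupConfig({ DATABASE_URL: 'postgresql://localhost:5432/app', NODE_ENV: 'production' })
console.log('config ok')
```

Reporting every problem in a single throw matters: an operator fixes the whole list in one pass rather than replaying a crash loop one missing variable at a time.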
## Tech stack

- PostgreSQL — the only supported database
- postgres.js — connection driver
- Query engine — forked from Drizzle ORM, stripped to Postgres-only, owned by pipework
- Migration engine — its own snapshot/diff/SQL pipeline, no external generation tool
- Zod — runtime validation with TypeScript inference
- Fastify — HTTP framework (wrapped, not replaced)
- Vitest — test runner
- Pino — structured logging
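The snapshot/diff approach named above is conceptually simple: persist a serialized snapshot of the schema alongside each migration, diff the previous snapshot against the current one, and emit SQL for the delta. A toy sketch with hypothetical snapshot shapes — a real pipeline also handles type changes, indexes, constraints, and renames:

```typescript
// A schema snapshot reduced to table → column → SQL type (illustrative).
type Snapshot = Record<string, Record<string, string>>

// Diff two snapshots and emit forward-migration SQL for the differences.
function diffSnapshots(prev: Snapshot, next: Snapshot): string[] {
  const sql: string[] = []
  for (const [table, cols] of Object.entries(next)) {
    const prevCols = prev[table]
    if (!prevCols) {
      // Table is new: create it with all of its columns.
      const defs = Object.entries(cols).map(([c, t]) => `${c} ${t}`).join(', ')
      sql.push(`CREATE TABLE ${table} (${defs});`)
      continue
    }
    for (const [col, type] of Object.entries(cols)) {
      if (!prevCols[col]) sql.push(`ALTER TABLE ${table} ADD COLUMN ${col} ${type};`)
    }
  }
  for (const table of Object.keys(prev)) {
    if (!next[table]) sql.push(`DROP TABLE ${table};`)
  }
  return sql
}

const previous: Snapshot = { users: { id: 'uuid', name: 'text' } }
const current: Snapshot = { users: { id: 'uuid', name: 'text', email: 'text' }, notes: { id: 'uuid' } }
console.log(diffSnapshots(previous, current))
// [ 'ALTER TABLE users ADD COLUMN email text;', 'CREATE TABLE notes (id uuid);' ]
```

Diffing snapshots rather than introspecting a live database means migrations can be generated offline and reviewed as plain SQL before anything touches Postgres.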
## License
MIT
