
pipework

v0.7.21

Pipework

A TypeScript framework for multi-tenant SaaS applications. PostgreSQL-only. Owns the wiring between Drizzle, Fastify, Vitest, and Zod so your application code never has to.

What it does

Pipework eliminates the infrastructure decisions that every SaaS backend makes independently and gets wrong:

  • Database wiring — named connections, pooling, context-aware pipe() accessor, test isolation
  • Request context — AsyncLocalStorage propagation of auth, tenant, transactions from HTTP entry to database query
  • DI builder — type-safe handler composition with .use(), .auth(), .input(), .output(), .route(), .fit()
  • Multi-tenant scoping — SET LOCAL propagation, tenant extraction from auth, UUID validation
  • Auth chain — pluggable strategies, enrichers, session management with JWT refresh rotation, cookie-based token delivery
  • Multi-org auth — org selection on login, org switching, membership management via auth.createMultiOrg()
  • RBAC — hierarchical scope resolution, permission caching, DI integration via .permission()
  • Background jobs — Postgres-backed queue with SKIP LOCKED, heartbeat, reaper, LISTEN/NOTIFY, synchronous wait (jobs.enqueueAndWait)
  • Cron scheduling — recurring jobs with cron expressions, dedup, catch-up on missed ticks
  • Temporal records — SCD Type 2 versioning with atomic temporal.revise(), point-in-time queries
  • Resource CRUD — fixture builder with cursor-based pagination, batch operations (preview + execute), auth, tenant scoping, 404/405
  • Composable behaviors — behavior.compose() to stack versioned + audited + cached on any resource
  • State machines — validated transitions with guards and audit integration
  • Pipelines — ordered sync/async step execution with persistent state and resume
  • HTTP security — CORS, Helmet, rate limiting via http.createServer(), production startup validation (refuses insecure defaults)
  • Structured logging — context-aware log proxy with automatic correlation fields (requestId, tenantId, userId, sessionId, traceId, jobType), configurable redaction, X-Request-Id response header
  • OpenAPI — automatic schema generation from handler metadata, served at /openapi.json
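The request-context bullet above relies on a standard Node.js technique: AsyncLocalStorage carries per-request values down the call stack without threading parameters through every function. A minimal sketch of the general pattern — not Pipework's internals; the context shape and helper names here are illustrative:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Hypothetical context shape for illustration; Pipework's actual store is richer.
type RequestContext = { requestId: string; tenantId: string }

const contextStore = new AsyncLocalStorage<RequestContext>()

// Run a unit of work (e.g. one HTTP request) inside a context.
function withRequest<T>(context: RequestContext, fn: () => T): T {
  return contextStore.run(context, fn)
}

// Deep in the stack — a query helper, a logger — read the context
// without any parameter threading.
function currentTenant(): string {
  const ctx = contextStore.getStore()
  if (!ctx) throw new Error('called outside a request context')
  return ctx.tenantId
}
```

Every call made inside `withRequest`, including across `await` boundaries, sees the same store — which is what makes propagation from HTTP entry to database query possible.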

Quick example

// pipework.config.ts — the single entry point
import { createManifold } from 'pipework'

export default createManifold({
  databases: {
    app: {
      url: 'DATABASE_URL',
      testUrl: 'DATABASE_URL_TEST',
      schema: './src/db/schema.ts',
      migrations: './migrations/app',
    },
  },
})

// src/domains.ts — define once, project everywhere
import { pipe } from 'pipework'

export const User = pipe.define('users', {
  id: pipe.field.uuid().brand('UserId').primaryKey().defaultRandom(),
  name: pipe.field.text().min(1).max(255),
  email: pipe.field.text().email().unique(),
  role: pipe.field.enum(['admin', 'member'] as const),
  tenantId: pipe.field.uuid().tenant(),
})

// User.table()        → Drizzle pgTable
// User.insertShape()  → zod validator (omits PK, optionalizes defaults)
// User.selectShape()  → zod validator (all fields)
// User.factory()      → test data builder with FK resolution
// User.Select         → { id: Branded<string, 'UserId'>; name: string; ... }

// src/server.ts — import the manifold, build your app
import pipework from '../pipework.config.js'
import { http, fitting, schema } from 'pipework'
import { notes } from './db/schema.js' // Drizzle table from src/db/schema.ts

const createNote = fitting
  .use()
  .auth<{ userId: string; tenantId: string }>()
  .input(schema.check.object({ title: schema.check.string(), body: schema.check.string() }))
  .route('POST', '/notes')
  .fit(async ({ db, auth, input }) => {
    const [note] = await db
      .insert(notes)
      .values({ ...input, tenantId: auth.tenantId })
      .returning()
    return note
  })

const server = http.createServer(pipework, {
  auth: { strategy: myAuthStrategy },
  tenant: { extract: (auth) => auth.tenantId },
})

server.registerHandlers([createNote])
await server.listen()
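The Resource CRUD layer advertises cursor-based pagination. As a generic illustration of that technique — this is not Pipework's implementation, and `paginate` is a made-up helper — keyset pagination treats the last id of a page as the cursor for the next one:

```typescript
type Page<T> = { items: T[]; nextCursor: string | null }

// Keyset pagination: order by id, take rows strictly after the cursor.
// Comparing the remaining count against the limit tells us whether
// another page exists.
function paginate<T extends { id: string }>(
  rows: T[],
  limit: number,
  cursor?: string,
): Page<T> {
  const ordered = [...rows].sort((a, b) => (a.id < b.id ? -1 : a.id > b.id ? 1 : 0))
  const after = cursor ? ordered.filter((r) => r.id > cursor) : ordered
  const items = after.slice(0, limit)
  const nextCursor = after.length > limit ? items[items.length - 1].id : null
  return { items, nextCursor }
}
```

Against Postgres the same idea is `WHERE id > $cursor ORDER BY id LIMIT $limit`, with one extra row fetched to decide whether a next cursor should be returned.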

Install

pnpm add pipework
pnpm add -D vitest

Then scaffold your project:

npx pipework init

This creates pipework.config.ts at your project root — the single entry point for config, runtime instance, CLI, and test setup.

Testing

// vitest.config.ts — integration tests (auto DB setup + per-test isolation)
import { defineTestConfig } from 'pipework/vitest'
export default defineTestConfig()

// vitest.unit.config.ts — pure unit tests (no DB required)
import { defineTestConfig } from 'pipework/vitest'
export default defineTestConfig({ database: false })

# .env.test
DATABASE_URL_TEST=postgresql://user:pass@localhost:5432/myapp_test

Every integration test runs in a rollback transaction. No cleanup, no state leaks. Unit test configs with database: false skip all DB machinery — safe to use in CI steps without a database.
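The rollback-transaction pattern is easy to picture with an in-memory stand-in — this sketch is illustrative, not Pipework's test harness: begin before each test, roll back after, and no test can observe another's writes.

```typescript
// In-memory stand-in for a database that supports transactions.
class FakeDb {
  private data = new Map<string, string>()
  private snapshot: Map<string, string> | null = null
  begin(): void { this.snapshot = new Map(this.data) }
  rollback(): void {
    if (this.snapshot) { this.data = this.snapshot; this.snapshot = null }
  }
  set(key: string, value: string): void { this.data.set(key, value) }
  get(key: string): string | undefined { return this.data.get(key) }
}

// Run a test body inside a transaction that is always rolled back,
// mirroring per-test isolation: writes vanish, seed data survives.
function inRollbackTransaction(db: FakeDb, testBody: () => void): void {
  db.begin()
  try { testBody() } finally { db.rollback() }
}
```

With a real Postgres connection the equivalent is `BEGIN` before the test and an unconditional `ROLLBACK` after it, which is why no explicit cleanup is ever needed.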

Documentation

API documentation lives in two places:

  • REFERENCE.md — auto-generated from source JSDoc. Complete API surface, organized by namespace.
  • JSDoc on every public export — enforced by pnpm lint:jsdoc. Read inline in your editor or via the reference file.

For Claude Code users, the /pipework skill provides usage guidance, API reference, and project auditing.

Design principles

  1. Pipework first. Orchestrates Drizzle, Fastify, Vitest, and Zod so your application code doesn't have to.
  2. Constraints over conventions. If a rule can be enforced in code, it is.
  3. Fail at startup, not at first use. Missing config throws immediately with what's wrong and how to fix it.
  4. PostgreSQL-only. No database abstraction. Build deep on Postgres features.
  5. TypeScript strict mode. exactOptionalPropertyTypes, noUncheckedIndexedAccess — all enabled.
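Principle 3 — fail at startup, not at first use — is a general technique worth a sketch. This is a hedged illustration, not Pipework's validator; `requireEnv` is a hypothetical helper. The point is to check every required key eagerly and throw one error naming all of them, rather than failing key-by-key at first use:

```typescript
// Validate required config keys eagerly; throw a single error listing
// everything that is missing, with a hint on how to fix it.
function requireEnv(
  env: Record<string, string | undefined>,
  keys: string[],
): Record<string, string> {
  const missing = keys.filter((k) => !env[k])
  if (missing.length > 0) {
    throw new Error(
      `Missing required config: ${missing.join(', ')}. ` +
      `Set them in your environment or .env file before starting.`,
    )
  }
  return Object.fromEntries(keys.map((k) => [k, env[k] as string]))
}
```

Collecting all missing keys before throwing means one restart fixes the whole list, instead of a fix-restart-fail loop revealing one missing key at a time.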

Tech stack

  • PostgreSQL — the only supported database
  • postgres.js — connection driver
  • Query engine — forked from Drizzle ORM, stripped to Postgres-only, owned by pipework
  • Migration engine — own snapshot/diff/SQL pipeline, no external generation tool
  • Zod — runtime validation with TypeScript inference
  • Fastify — HTTP framework (wrapped, not replaced)
  • Vitest — test runner
  • Pino — structured logging

License

MIT