@enricodeleo/crudio

v0.4.0

Turn an OpenAPI spec into a working CRUD backend with persistence and seeding


What is Crudio?

Crudio reads an OpenAPI 3.x specification and spins up a working mock API with persistence — stateful, schema-driven, derived entirely from your contract. CRUD endpoints behave like a small real backend; non-CRUD endpoints keep per-operation state so the whole spec is servable from one runtime.

Why

Mock servers return canned responses. That's fine for smoke tests, but not enough when you need to verify that your frontend actually handles pagination, validation errors, 404s, and partial updates correctly.

Crudio gives you a backend that behaves like a real one — because it derives everything from your API contract:

  • CRUD request bodies are validated against your schema
  • Data persists across calls (JSON files, no database needed)
  • IDs are generated based on your spec (integers, UUIDs, etc.)
  • CRUD routes use shared resource state
  • Non-CRUD routes use persisted operation state

Use it for: integration testing, frontend development, API prototyping, contract verification.

Don't use it for: production backends, load testing, or anything that needs domain-specific business logic without custom handlers.

Quick Start

# Run against any OpenAPI 3.x spec
npx crudio ./openapi.yaml

# With fake data
npx crudio ./openapi.yaml --seed 10

# Custom port and storage
npx crudio ./openapi.yaml --port 8080 --data-dir /tmp/data

Given a standard CRUD spec with paths like /pets and /pets/{petId}, you get:

# Create
curl -X POST http://localhost:3000/pets \
  -H 'Content-Type: application/json' \
  -d '{"name":"Rex","tag":"dog"}'
# → 201 {"id":1,"name":"Rex","tag":"dog"}

# List (with filtering and pagination)
curl 'http://localhost:3000/pets?tag=dog&limit=10&offset=0'
# → 200 {"items":[{"id":1,"name":"Rex","tag":"dog"}],"total":1}

# Get by ID
curl http://localhost:3000/pets/1
# → 200 {"id":1,"name":"Rex","tag":"dog"}

# Partial update
curl -X PATCH http://localhost:3000/pets/1 \
  -H 'Content-Type: application/json' \
  -d '{"tag":"cat"}'
# → 200 {"id":1,"name":"Rex","tag":"cat"}

# Delete
curl -X DELETE http://localhost:3000/pets/1
# → 204
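The list call above combines equality filters with limit/offset pagination. A minimal in-memory sketch of that behavior (illustrative only, not Crudio's actual implementation):

```javascript
// Rough sketch: treat every non-pagination query param as an equality
// filter, then apply offset/limit. Names and defaults are assumptions.
function list(records, query) {
  const { limit = 20, offset = 0, ...filters } = query;
  const filtered = records.filter((r) =>
    Object.entries(filters).every(([k, v]) => String(r[k]) === String(v))
  );
  return {
    items: filtered.slice(Number(offset), Number(offset) + Number(limit)),
    total: filtered.length,
  };
}

const pets = [
  { id: 1, name: 'Rex', tag: 'dog' },
  { id: 2, name: 'Tom', tag: 'cat' },
];
console.log(list(pets, { tag: 'dog', limit: 10, offset: 0 }));
// logs an envelope with one matching item and total: 1
```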

Invalid CRUD requests are rejected against your schema:

curl -X POST http://localhost:3000/pets \
  -H 'Content-Type: application/json' \
  -d '{"tag":"dog"}'
# → 400 {"error":"Validation failed","details":[...]}
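Under an assumed pet schema that requires `name`, the rejection above is the kind of check AJV compiles from your spec. A simplified hand-rolled sketch of the idea (Crudio uses AJV, not this code):

```javascript
// Illustrative stand-in for a compiled validator: checks required
// properties and does a shallow type check for primitive types.
function validateBody(schema, body) {
  const details = [];
  for (const field of schema.required ?? []) {
    if (!(field in body)) details.push(`missing required property '${field}'`);
  }
  for (const [key, def] of Object.entries(schema.properties ?? {})) {
    const primitive = ['string', 'number', 'boolean'].includes(def.type);
    if (key in body && primitive && typeof body[key] !== def.type) {
      details.push(`property '${key}' must be of type ${def.type}`);
    }
  }
  return details.length === 0
    ? { ok: true }
    : { ok: false, status: 400, error: 'Validation failed', details };
}

// Hypothetical pet schema mirroring the curl examples above.
const petSchema = {
  required: ['name'],
  properties: { name: { type: 'string' }, tag: { type: 'string' } },
};
console.log(validateBody(petSchema, { tag: 'dog' }));
// logs a 400-shaped result because 'name' is missing
```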

How It Works

  1. Load — reads your OpenAPI 3.x spec and dereferences all $ref pointers
  2. Compile — normalizes every OpenAPI operation into a single registry entry
  3. Infer — detects CRUD resource pairs from path patterns (/users + /users/{id})
  4. Validate — compiles AJV validators from your CRUD resource schemas for strict request checking
  5. Route — registers Express routes for every operation defined in the spec
  6. Adapt — optionally applies declarative rules or wraps an operation with a custom JavaScript handler
  7. Persist — stores CRUD-backed resources and operation-state payloads in JSON files

CRUD-shaped operations share resource state. Everything else is served as operation-state: the response body is persisted per operation scope and replayed on later reads, with optional projection into a parent resource when the response schema is a compatible subset.
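Step 3 (Infer) can be pictured with a small sketch: a collection path plus an item path that appends a single `/{param}` segment form a CRUD pair. The function below is hypothetical, not Crudio's internal code:

```javascript
// Pair each '/things/{id}'-shaped path with its '/things' collection
// path, when both exist in the spec. Purely illustrative.
function inferCrudPairs(paths) {
  const pairs = [];
  for (const p of paths) {
    const match = p.match(/^(.+)\/\{([^}]+)\}$/);
    if (match && paths.includes(match[1])) {
      pairs.push({ collection: match[1], item: p, idParam: match[2] });
    }
  }
  return pairs;
}

console.log(inferCrudPairs(['/pets', '/pets/{petId}', '/health']));
// one pair: collection '/pets', item '/pets/{petId}', idParam 'petId';
// '/health' stays a plain operation-state route
```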

Declarative Rules

For many non-trivial endpoints you can stay in config and avoid JavaScript entirely.

export default {
  operations: {
    login: {
      rules: [
        {
          name: 'admin-login',
          if: { eq: [{ ref: 'req.body.email' }, '[email protected]'] },
          then: {
            writeState: {
              token: 'mock-token',
              role: 'admin',
            },
            respond: {
              status: 200,
              body: { ref: 'state.current' },
            },
          },
        },
      ],
    },
  },
};

Stage 5 rules are:

  • first match wins
  • limited to eq, exists, and in
  • limited to writeState, mergeState, patchResource, and respond
  • operation-state writes stay local to the current operation
  • patchResource can shallow-patch only the inferred linked CRUD resource item

If a route has rules and no rule matches, Crudio falls back to the built-in runtime. If a route has both rules and a JS handler, no-match is an explicit 500 instead of a silent handler fallback.

When patchResource runs, resource.current becomes the post-patch snapshot for the rest of that rule, so respond can return the updated linked resource without JavaScript. If the linked item does not exist, the rule returns 404.

Custom Handlers

When declarative rules are not enough, you can override or wrap any operation with JavaScript in crudio.config.js.

export default {
  operations: {
    createPet: {
      handler: async (ctx) => {
        const created = await ctx.nextDefault();
        return ctx.json(created.status, { ...created.body, source: 'custom' });
      },
    },
    startRelease: {
      handler: './handlers/startRelease.js',
    },
  },
};

Available ctx helpers:

  • ctx.req — normalized params, query, body, headers
  • ctx.state — read/write operation-state for the current scope
  • ctx.resources — CRUD helpers over inferred resources
  • ctx.storage — raw storage access
  • ctx.json(status, body, headers?) — return a normalized response descriptor
  • ctx.nextDefault() — run the built-in runtime once, then wrap or replace it

Custom handlers work on both CRUD and non-CRUD routes. CRUD request validation still runs before the handler, and response validation follows validateResponses. When rules and handler coexist on the same operation, rules run first.
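To see what wrapping via `ctx.nextDefault()` buys you, here is a toy stand-in for the ctx object. `makeCtx` and the canned default response are invented scaffolding; only the handler shape matches the config example above:

```javascript
// Minimal fake ctx: nextDefault() stands in for the built-in CRUD
// runtime, json() for the response helper. Illustrative only.
function makeCtx(defaultResponse) {
  return {
    nextDefault: async () => defaultResponse,
    json: (status, body, headers = {}) => ({ status, body, headers }),
  };
}

// Same wrap pattern as the createPet handler above: run the default
// runtime, then augment its response body.
const handler = async (ctx) => {
  const created = await ctx.nextDefault();
  return ctx.json(created.status, { ...created.body, source: 'custom' });
};

const ctx = makeCtx({ status: 201, body: { id: 1, name: 'Rex' } });
handler(ctx).then((res) => console.log(res.status, res.body));
// prints 201 and the body augmented with source: 'custom'
```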

Supported / Unsupported

Supported

  • OpenAPI 3.0 ($ref, allOf, path parameters, request/response schemas)
  • CRUD operations: list, getById, create, update, patch, delete
  • Non-CRUD operations with persisted per-operation state
  • CRUD request validation against schema
  • Pagination (limit, offset) and equality filters
  • Schema-driven ID generation (incremental integer, UUID, string)
  • Fake data seeding for CRUD resources
  • Explicit default and per-scope seeding for non-CRUD operations
  • Programmatic usage as a Node.js library

Not supported (v1)

  • oneOf, anyOf, discriminators — Crudio fails fast with a clear error if these are present
  • Swagger 2.0
  • Domain-specific business logic inference
  • Sorting, nested filters, full-text search
  • File uploads / multipart

CLI

Usage: crudio <spec-file> [options]

  --port, -p <number>      Port (default: 3000)
  --data-dir, -d <path>    Storage directory (default: ./data)
  --seed, -s <number>      Seed N fake records per resource
  --config, -c <path>      Path to config file

Documentation

  • API Reference — endpoints, status codes, validation, query params, ID generation
  • Configuration — config file, resource/operation overrides, seeding options
  • Development — project structure, running tests, architecture

Ecosystem

Crudio pairs well with AquaSDK, a JavaScript SDK generator for OpenAPI specs.

  • Crudio: run the backend from the spec
  • AquaSDK: generate the client from the same spec

License

MIT