# k2db

`@frogfish/k2db` (v3.0.8): a data handling library for K2 applications.
Lightweight MongoDB data layer that stays schemaless by default while enforcing consistent metadata and soft-delete.
## Why this, not an ORM?

- Thin driver wrapper: Sits directly atop the official MongoDB driver. No models, decorators, or migrations to manage.
- Loose by default: Store arbitrary data while consistently enforcing `_uuid`, `_owner`, `_created`, `_updated`, `_deleted`.
- Opinionated guardrails: Soft-delete and metadata are enforced everywhere, including aggregates, without changing your payloads.
- Opt-in structure: Add Zod schemas per collection only if you want validation/coercion; skip entirely if you don't.
- Predictable behavior: No hidden population, no query magic, and explicit return types. What you query is what runs.
## What you get

- Concrete API for Mongo: Avoid re-implementing the same metadata, soft-delete, and versioning patterns across services. This wrapper centralizes those policies so teams don't write boilerplate.
- Guardrails without a heavy ORM: Prisma/Mongoose add ceremony (models, migrations, plugins) that can be overkill in microservices/serverless. k2db gives you just enough safety with minimal overhead.
- Soft deletes done properly: Automatically enforced everywhere (including aggregates and joins), so you don't accidentally leak or operate on deleted data.
- Versioning baked in: `updateVersioned`, `listVersions`, and `revertToVersion` provide low-friction "undo to N levels" that many DALs lack, increasing confidence in production changes.
## Where it fits in the stack

- Below your API/service layer and above the MongoDB driver.
- Use it as a shared data access layer (DAL) across services that need flexible shapes but strict lifecycle rules.
- Keep ownership/authorization in your API; this library only guarantees metadata and deletion semantics.
- Designed for microservices and edge computing: tiny footprint, fast cold starts, and no heavy runtime dependencies.
## Deployment tips (Nomad, Lambda, etc.)

Environments: Targets Node.js runtimes (Node 18/20). Not suitable for non-TCP "edge JS" (e.g., Cloudflare Workers) that cannot open Mongo sockets.

Connection reuse: Create and reuse `K2DB` instances.

- The underlying MongoDB connection pool is shared across `K2DB` instances created with the same cluster/auth settings (hosts/user/password/authSource/replicaset).
- This means you can safely keep one `K2DB` instance per logical database name (`name`) without creating a new TCP pool per database.
- `release()` is ref-counted: it only closes the shared pool when the last instance releases it.

Example (AWS Lambda):

```ts
import { K2DB } from "@frogfish/k2db";

const db = new K2DB(K2DB.fromEnv());
let ready: Promise<void> | null = null;

export const handler = async (event) => {
  ready = ready || db.init();
  await ready; // reused across warm invocations
  const res = await db.find("hello", {}, {}, 0, 10, event.userId);
  return { statusCode: 200, body: JSON.stringify(res) };
};
```

If you serve multiple logical databases (multi-project / multi-tenant), cache `K2DB` instances by database name. They will still share a single underlying connection pool:

```ts
import { K2DB } from "@frogfish/k2db";

const base = {
  hosts: [{ host: "cluster0.example.mongodb.net" }],
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
};

const byName = new Map<string, K2DB>();

function dbFor(name: string) {
  let db = byName.get(name);
  if (!db) {
    db = new K2DB({ ...base, name });
    byName.set(name, db);
  }
  return db;
}
```

Pooling and timeouts: The MongoDB driver manages a small pool by default, and k2db reuses that pool across `K2DB` instances that share cluster/auth config.

- Serverless: keep `minPoolSize=0` (default); consider `maxIdleTimeMS` to drop idle sockets faster.
- Long-lived services (Nomad): you can tune pool sizing if needed.
- You can adjust `connectTimeoutMS` / `serverSelectionTimeoutMS` in the code if your environment needs higher values.
Networking:

- Atlas from Lambda: prefer VPC + PrivateLink or NAT egress; ensure security groups allow outbound to Atlas.
- Nomad: ensure egress to Atlas/DB, or run Mongo in the same network; set DNS to resolve SRV if using SRV.

Secrets:

- Lambda: use AWS Secrets Manager/Parameter Store → env vars consumed by `K2DB.fromEnv()`.
- Nomad: pair with HashiCorp Vault templates/env injection; keep credentials out of images.

Health/readiness:

- Use `db.isHealthy()` in readiness checks (Nomad) and `db.release()` on shutdown. For Lambda there is no explicit shutdown.

Bundling:

- ESM friendly; keep dependencies minimal. If you bundle, exclude native modules you don't use.
## When to pick this
- You want Mongo’s flexibility with light, reliable guardrails (soft delete, timestamps, owner, UUID).
- You’d rather not pull in a heavier ORM (Mongoose/Prisma) and prefer direct control of queries and indexes.
- Your payloads vary by tenant/feature and rigid schemas get in the way, but you still want optional validation.
## When to consider something else
- Rich modeling, relations, and ecosystem plugins → Mongoose.
- Cross‑DB modeling, migrations, and schema‑first DX → Prisma.
- If you already standardized on an ORM and like its trade‑offs, this library aims to stay out of your way.
## Core invariants

- `_uuid`: unique identifier generated on create (unique among non-deleted by default).
- `_created` / `_updated`: timestamps managed by the library.
- `_owner`: required on create; preserved across updates.
- `_deleted`: soft-delete flag. All reads exclude deleted documents by default; writes do not touch deleted docs. Purge hard-deletes only when `_deleted: true`.
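As an illustration of these invariants, here is a hypothetical sketch of the create-time metadata policy. This is not k2db's internal code, and `stampCreate` is an invented name; it only shows which fields the library owns.

```typescript
// Hypothetical sketch of the create-time metadata policy (invented helper,
// not k2db internals): underscore-prefixed input is dropped because the
// library owns that namespace, then the metadata fields are stamped.
function stampCreate(
  data: Record<string, unknown>,
  owner: string,
  uuid: string,
  now: number = Date.now()
): Record<string, unknown> {
  const clean = Object.fromEntries(
    Object.entries(data).filter(([key]) => !key.startsWith("_"))
  );
  return {
    ...clean,
    _uuid: uuid,     // generated on create
    _owner: owner,   // required; preserved across updates
    _created: now,   // set once
    _updated: now,   // refreshed on every write
    _deleted: false, // soft-delete flag; reads exclude `true` by default
  };
}
```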
## Modernized behavior

- ESM build, TypeScript, NodeNext resolution.
- Soft-delete enforced across `find`, `findOne`, `count`, `update`, `updateAll`.
- Aggregate never returns deleted documents (also enforced across `$lookup`, `$unionWith`, `$graphLookup`, `$facet`).
- Reserved fields: user input cannot set keys starting with `_`; the library owns all underscore-prefixed metadata.
- Slow query logging: operations slower than `slowQueryMs` (default 200ms) are logged via `debug("k2:db")`.
- Hooks: optional `beforeQuery(op, details)` and `afterQuery(op, details, durationMs)` for observability.
- Index helper: `ensureIndexes(collection, { uuidPartialUnique: true, ownerIndex: true, deletedIndex: true })`.
## Ownership (`_owner`)

- Purpose: `_owner` is not a tenant ID nor a technical auth scope. It is a required, opinionated piece of metadata that records who a document belongs to (the data subject or system principal that created/owns it).
- Why it matters: Enables clear data lineage and supports privacy/jurisdiction workflows (GDPR/DSAR: "export all my data", "delete my data"), audits, and stewardship.
- Typical values: a user's UUID when a signed-in human creates the record; for automated/system operations, use a stable identifier like `"system"`, `"service:mailer"`, or `"migration:2024-09-01"`.
- Not authentication: k2db does not authenticate callers. Your API/service still decides who the caller is and whether they are allowed to act.
- Optional enforcement: k2db can enforce owner scoping when you provide a per-call scope (see "Scope"), and can require scope on all calls when `ownershipMode: "strict"` is enabled.
- Multi-tenant setups: If you have tenants, keep a distinct `tenantId` (or similar) alongside `_owner`. `_owner` continues to model "who owns this record" rather than "which tenant it belongs to".
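To make the tenant-vs-owner distinction concrete, a stored document in a multi-tenant app might look like the sketch below. This is an illustrative shape only: `tenantId` and `plan` are invented application-level fields, while `_owner` is the k2db metadata described above.

```typescript
// Illustrative stored-document shape: the tenant is an app-level field,
// while _owner (k2db metadata) records who the record belongs to.
const storedDoc = {
  tenantId: "acme-corp", // which tenant the record lives under (app-defined field)
  _owner: "user-uuid",   // who the record belongs to (k2db metadata, set on create)
  plan: "pro",
};

// A GDPR/DSAR "export all my data" query keys on _owner, not tenantId:
const dsarFilter = { _owner: storedDoc._owner };
```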
## Config

```ts
import { K2DB } from "@frogfish/k2db";

const db = new K2DB({
  name: "mydb", // logical database name; instances with the same hosts/auth share one connection pool
  hosts: [{ host: "cluster0.example.mongodb.net" }], // SRV if single host without port
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  authSource: process.env.DB_AUTH_SOURCE, // optional (defaults to "admin" when user+password provided)
  slowQueryMs: 300,
  hooks: {
    beforeQuery: (op, d) => {},
    afterQuery: (op, d, ms) => {},
  },
});

await db.init();
await db.ensureIndexes("myCollection");
```

### Scope config

`ownershipMode?: "lax" | "strict"` (default: `"lax"`)

- `"lax"` (default): Passing a scope is optional. If you provide a scope, k2db enforces it as an owner filter; if omitted, behavior is unchanged from historical behavior (no owner enforcement, but soft-delete is still enforced).
- `"strict"`: Scope is required on all scopified methods (see "Scope" below). If you omit scope, k2db throws an error. This helps prevent accidentally missing owner filters.
Example config with strict mode:
```ts
const db = new K2DB({
  name: "mydb",
  hosts: [{ host: "cluster0.example.mongodb.net" }],
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  ownershipMode: "strict",
});
```

### Aggregation config

`aggregationMode?: "loose" | "guarded" | "strict"` (default: `"loose"`)

- `"loose"`: No aggregation pipeline validation (all MongoDB stages permitted).
- `"guarded"`: Denies `$out`, `$merge`, `$function`, `$accumulator`; requires a positive limit, caps the maximum limit, and adds `maxTimeMS`.
- `"strict"`: Allows only `$match`, `$project`, `$sort`, `$skip`, `$limit` stages; also requires and caps the limit.
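The `"strict"` allowlist described above amounts to a simple per-stage check. A sketch (illustrative only; `strictModeAllows` is an invented name, not k2db's validator):

```typescript
// Sketch of the "strict" aggregationMode allowlist (invented helper):
// every stage object must have exactly one key, and it must be allowlisted.
const STRICT_STAGES = new Set(["$match", "$project", "$sort", "$skip", "$limit"]);

function strictModeAllows(pipeline: Record<string, unknown>[]): boolean {
  return pipeline.every((stage) => {
    const keys = Object.keys(stage);
    return keys.length === 1 && STRICT_STAGES.has(keys[0]);
  });
}
```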
Example:
```ts
const db = new K2DB({
  name: "mydb",
  hosts: [{ host: "cluster0.example.mongodb.net" }],
  aggregationMode: "guarded", // disables dangerous stages, enforces limit
});
```

## Scope
Scope is an optional per-call “owner filter” that k2db can apply to most data access and mutation methods. It provides a safety guardrail to help prevent missing _owner filters, especially in multi-tenant or user-data contexts.
Scope is not authentication or authorization: It does not decide who may act. Your API/service is still responsible for authenticating callers and deciding what scopes/owners they are allowed to access.
### Type

```ts
type Scope = string | "*";
```

- If a `string` is provided, only documents with `_owner === scope` are included or affected.
- If `"*"` is provided, all owners are included (no owner filter). This is intended for admin/service-to-service operations; never pass an untrusted `"*"` from user input.
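The semantics above boil down to a small filter rule. A sketch (illustrative, not k2db internals; `ownerFilter` is an invented name):

```typescript
// Sketch of scope semantics (invented helper): a concrete owner string
// filters on _owner; "*" (or, in lax mode, an omitted scope) adds no filter.
type Scope = string | "*";

function ownerFilter(scope?: Scope): Record<string, unknown> {
  if (scope === undefined || scope === "*") return {}; // no owner restriction
  return { _owner: scope };
}
```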
### Modes

- In `ownershipMode: "lax"` (default):
  - If `scope` is omitted: no owner filtering is applied (historical behavior; soft-delete is still enforced).
  - If `scope` is provided: only docs with `_owner === scope` (or all docs if `"*"`).
- In `ownershipMode: "strict"`:
  - `scope` is required for all scopified methods (see below). If not provided, k2db throws an error.
  - Use `"*"` for admin/system/service calls (do not invent a magic owner like `_system`).
### Methods that support scope

The following methods accept an optional `scope?: Scope | string` as their final argument:

`get`, `findOne`, `find`, `count`, `update`, `updateAll`, `delete`, `deleteAll`, `restore`, `purge`, `purgeDeletedOlderThan`, `drop`, and `dropDatabase`.

For destructive/admin operations, use `"*"` as the scope to indicate you intend to operate without owner restriction.
### Examples

a) User reads own docs

```ts
// Only docs owned by this user
await db.find("posts", {}, {}, 0, 20, userId);
```

b) Admin/service reads or mutates all docs

```ts
// Read all posts (no owner restriction)
await db.find("posts", {}, {}, 0, 20, "*");

// Delete a user (admin only)
await db.delete("users", id, "*");
```

c) Strict mode with HTTP header mapping

Suppose your API receives:

```
X-Scope: <userId> | *
```

You should map this header to the scope argument:

```ts
const scope = req.headers["x-scope"];

// Only allow "*" for trusted admin/service credentials!
await db.find("posts", {}, {}, 0, 20, scope);
```

### Rules of thumb
- Never pass an untrusted `"*"` as a scope; restrict it to trusted admin/service credentials only.
- Prefer passing a scope everywhere; enabling strict mode helps you catch missing owner filters.
- Normalize `_owner` values consistently; prefer lowercase and a single convention (e.g., always a lowercase UUID as the userId).
## Aggregation

Aggregation in k2db lets you run MongoDB pipelines with guardrails for soft-delete, secure fields, and pipeline safety.

### aggregate(collection, pipeline, skip?, limit?)

Runs an aggregation pipeline on the given collection, returning an array of documents. k2db automatically injects filters and enforces restrictions for safety and consistency.
### What k2db enforces automatically

- Soft-delete enforcement: Automatically inserts a `$match: { _deleted: { $ne: true } }` stage near the start of your pipeline so only non-deleted documents are returned. For pipelines beginning with `$search`, `$geoNear`, or `$vectorSearch`, the filter is injected after the first stage to avoid breaking those operators.
- Nested enforcement: For `$lookup`, `$unionWith`, `$graphLookup`, and `$facet`, any sub-pipeline is rewritten to ensure the non-deleted filter applies to foreign collections as well. A simple `$lookup` with `localField`/`foreignField` is rewritten to pipeline form so the non-deleted filter can be injected.
- Pagination: If you pass `skip`/`limit`, those are appended to the pipeline. In `"guarded"`/`"strict"` aggregationMode, a positive limit is required and capped to a safe maximum, and `maxTimeMS` is set to prevent long-running queries.
- Secure fields: If `secureFieldPrefixes` is configured (e.g. `["#"]`), any pipeline referencing a secure-prefixed field (such as `"#passport_number"`) is rejected, even in expressions. Returned documents are also stripped of any keys beginning with a secure prefix.
### What you cannot do (by default)

- Return soft-deleted documents: The injected `$match: { _deleted: { $ne: true } }` filter means aggregate will never return soft-deleted docs, regardless of your pipeline.
- Use secure fields in aggregate: Any pipeline referencing a field like `"#passport_number"` (including in `$project`, `$addFields`, or expressions) is rejected with an error. Even if a document contains a secure-prefixed field, it is stripped from the output.
- Use dangerous or non-allowlisted stages: In `"guarded"` mode, you cannot use `$out`, `$merge`, `$function`, or `$accumulator`. In `"strict"` mode, only `$match`, `$project`, `$sort`, `$skip`, and `$limit` are allowed.
### Filtering level: root vs nested pipelines

- The root pipeline always gets the non-deleted filter injected (unless the first stage is `$search`, `$geoNear`, or `$vectorSearch`, in which case it is injected after that stage).
- For nested pipelines in `$lookup`, `$unionWith`, `$graphLookup`, and `$facet`, k2db rewrites or injects the non-deleted filter into each sub-pipeline, so deleted foreign documents are excluded.
- If a `$lookup` uses the simple `localField`/`foreignField` form, k2db rewrites it to a pipeline `$lookup` so filtering can be enforced.
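The root-pipeline rule above can be sketched as follows (illustrative; `injectNotDeleted` is an invented name, and the nested `$lookup`/`$facet` rewriting is omitted):

```typescript
// Sketch of root-level soft-delete injection (invented helper): the
// non-deleted $match goes first unless the pipeline starts with a stage
// MongoDB requires to be first, in which case it goes second.
const MUST_BE_FIRST = new Set(["$search", "$geoNear", "$vectorSearch"]);
const NOT_DELETED = { $match: { _deleted: { $ne: true } } };

function injectNotDeleted(pipeline: Record<string, unknown>[]): Record<string, unknown>[] {
  const first = pipeline[0] ? Object.keys(pipeline[0])[0] : undefined;
  if (first !== undefined && MUST_BE_FIRST.has(first)) {
    return [pipeline[0], NOT_DELETED, ...pipeline.slice(1)];
  }
  return [NOT_DELETED, ...pipeline];
}
```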
### Examples

1) Basic aggregation: injected soft-delete filter

Suppose you call:

```ts
const pipeline = [
  { $match: { status: "active" } },
  { $project: { name: 1, status: 1 } }
];

await db.aggregate("users", pipeline);
```

Effective pipeline after k2db injection:

```ts
[
  { $match: { _deleted: { $ne: true } } }, // injected
  { $match: { status: "active" } },
  { $project: { name: 1, status: 1 } }
]
```

2) $lookup rewritten for non-deleted foreign docs
Original pipeline:

```ts
[
  {
    $lookup: {
      from: "orders",
      localField: "_uuid",
      foreignField: "user_id",
      as: "orders"
    }
  }
]
```

k2db rewrites this to:

```ts
[
  {
    $lookup: {
      from: "orders",
      let: { local_id: "$_uuid" },
      pipeline: [
        { $match: { _deleted: { $ne: true } } }, // injected for foreign docs
        { $match: { $expr: { $eq: ["$user_id", "$$local_id"] } } }
      ],
      as: "orders"
    }
  }
]
```

3) Secure field reference is rejected
Pipeline:

```ts
[
  { $project: { name: 1, passport: "$#passport_number" } }
]
```

Result: Throws an error — referencing a secure-prefixed field (`"#passport_number"`) is not allowed in aggregate pipelines.
4) Attempting to aggregate deleted docs returns nothing
Pipeline:
```ts
[
  { $match: { _deleted: true } }
]
```

Result: Returns no documents — k2db injects `{ _deleted: { $ne: true } }` before your match, so the result set is always empty.
Note: k2db's aggregation guardrails ensure you cannot accidentally leak deleted or secure data, or run dangerous stages in stricter modes.
## Secure fields and encryption at rest

k2db supports secure fields: fields whose keys start with a configurable prefix (recommended: `#`). Secure fields are designed to be hard to accidentally leak in bulk queries and hard to casually access in code.
This feature has two layers:

- Guardrails (always): Secure fields are stripped from multi-record reads (`find`, `aggregate`) and cannot be explicitly projected. Aggregation pipelines are also rejected if they reference secure fields.
- Encryption at rest (optional): When enabled, secure-field values are encrypted before being written to MongoDB and decrypted on single-record reads.

### Why `#` (friction) instead of `__somethingNice`

The goal is to make secure fields "visually wrong" and ergonomically annoying to access on purpose:

- `doc["#passport_number"]` is explicit and reviewable.
- `doc.#passport_number` is a syntax error (forces bracket access).
- It discourages lazy DTO copying and casual destructuring that can accidentally leak secrets.

Using `__private` looks "normal", which increases the chance developers will treat it like any other field and accidentally return it in list endpoints.

Also, `_...` is reserved for k2db's metadata (`_uuid`, `_owner`, `_created`, …), so `#...` cleanly avoids that namespace.
### Configuration

Enable secure fields by setting `secureFieldPrefixes`. To also encrypt them at rest, provide a base64 AES-256 key and a key id.

```ts
const db = new K2DB({
  name: "mydb",
  hosts: [{ host: "cluster0.example.mongodb.net" }],

  // Treat "#..." fields as secure
  secureFieldPrefixes: ["#"],

  // Optional encryption-at-rest for secure fields:
  // - must decode to 32 bytes (AES-256)
  // - values are stored as "<keyid>:<payload>"
  secureFieldEncryptionKeyId: "k1",
  secureFieldEncryptionKey: process.env.K2DB_SECURE_KEY_B64,
});
```

Key requirements:

- `secureFieldEncryptionKey` must be base64 and decode to 32 bytes.
- Encryption is enabled only when both `secureFieldEncryptionKey` and `secureFieldEncryptionKeyId` are provided.

Note: Today k2db decrypts only ciphertexts whose `keyid` matches the configured `secureFieldEncryptionKeyId`. If the stored `keyid` differs, the encrypted string is returned as-is (useful during key-rotation rollout, but plan your rotation strategy accordingly).
### Storage format

When encryption is enabled, each secure-field value is stored as a single string:

```
<keyid>:<ivB64>.<tagB64>.<ctB64>
```

- Algorithm: AES-256-GCM
- Plaintext: `JSON.stringify(value)` (so secure fields may be strings, numbers, objects, arrays, etc.)
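A round-trip of this storage format with Node's built-in crypto might look like the sketch below. This is illustrative only: the function names are invented, and k2db's exact IV size and rotation handling may differ (the 96-bit IV is an assumption, chosen because it is the conventional GCM size).

```typescript
// Illustrative seal/open for the "<keyid>:<ivB64>.<tagB64>.<ctB64>" format
// using AES-256-GCM (invented helpers, not k2db internals).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function sealSecureValue(value: unknown, key: Buffer, keyId: string): string {
  const iv = randomBytes(12); // 96-bit IV (assumption; conventional for GCM)
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(JSON.stringify(value), "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return `${keyId}:${iv.toString("base64")}.${tag.toString("base64")}.${ct.toString("base64")}`;
}

function openSecureValue(stored: string, key: Buffer, keyId: string): unknown {
  const sep = stored.indexOf(":");
  if (stored.slice(0, sep) !== keyId) return stored; // mirrors the key-rotation note above
  const [ivB64, tagB64, ctB64] = stored.slice(sep + 1).split(".");
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(ivB64, "base64"));
  decipher.setAuthTag(Buffer.from(tagB64, "base64"));
  const pt = Buffer.concat([decipher.update(Buffer.from(ctB64, "base64")), decipher.final()]);
  return JSON.parse(pt.toString("utf8"));
}
```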
### Behavior by method

With `secureFieldPrefixes: ["#"]`:

- create / update / updateAll:
  - If encryption is enabled, `#...` values are encrypted before write.
  - If encryption is disabled, values are stored as provided.
- get / findOne (single-record reads):
  - If encryption is enabled, `#...` values are decrypted and returned in the object.
  - If encryption is disabled, values are returned as stored.
- find (multi-record reads):
  - Secure fields are stripped from every returned document (even if encryption is disabled).
- aggregate:
  - Secure fields are not allowed to be referenced in the pipeline (k2db throws).
  - Secure fields are also stripped from returned documents as a second safety net.
- projections:
  - `findOne(fields=[...])` and `find(params.filter=[...])` cannot include secure fields (throws).
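The multi-record guardrail can be pictured as a per-document strip step (illustrative; `stripSecureFields` is an invented name, not k2db's implementation):

```typescript
// Sketch of the find/aggregate output guardrail (invented helper):
// drop every key that starts with a configured secure prefix.
function stripSecureFields(
  doc: Record<string, unknown>,
  prefixes: string[] = ["#"]
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(doc).filter(([key]) => !prefixes.some((p) => key.startsWith(p)))
  );
}
```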
### Examples

1) Store a secure field

```ts
const owner = userId.toLowerCase();

const { id } = await db.create("profiles", owner, {
  name: "Ada",
  "#passport_number": "123456789",
  "#home_address": { line1: "1 Example St", city: "Perth" },
});
```

- If encryption is enabled, MongoDB stores `"#passport_number"` and `"#home_address"` as encrypted strings.
- If encryption is disabled, they are stored as plaintext (still guarded on reads).

2) Single-record read returns secure fields

```ts
const profile = await db.get("profiles", id, owner);

// Explicit access (friction by design)
console.log(profile["#passport_number"]);
```

3) Multi-record read strips secure fields

```ts
const list = await db.find("profiles", {}, {}, 0, 50, owner);

// "#..." fields are removed from each document in list
console.log(list[0]["#passport_number"]); // undefined
```

4) Aggregation cannot reference secure fields

```ts
await db.aggregate("profiles", [
  { $project: { name: 1, passport: "$#passport_number" } },
]);
// throws: secure-prefixed field referenced in pipeline
```

### What this is (and isn't)
- This is a data safety guardrail and optional encryption-at-rest mechanism.
- It is not authentication/authorization: you still must decide who the caller is and what they are allowed to do.
- For admin/service operations, combine this with the Scope rules: do not accept `"*"` from untrusted callers.
## Environment loader

```ts
const conf = K2DB.fromEnv(); // K2DB_NAME (logical db), K2DB_HOSTS, K2DB_USER, K2DB_PASSWORD, K2DB_AUTH_SOURCE, K2DB_REPLICASET, K2DB_SLOW_MS
```

## Testing
If you run many test suites in a single Node process and want to fully tear down shared MongoDB pools between suites, you can use the test helper:
```ts
import { resetSharedMongoClientsForTests } from "@frogfish/k2db";

afterAll(async () => {
  await resetSharedMongoClientsForTests();
});
```

## Tips
- Use `restore()` to clear `_deleted`.
- Use `purge()` to hard-delete; it only works on soft-deleted docs.
- For aggregates with joins, the library automatically injects non-deleted filters in root and nested pipelines.
## Versioning (optional)

- Per-document history is stored in a sibling collection named `<collection>__history`.
- Use `updateVersioned()` to snapshot the previous state before updating.
- Use `listVersions()` to see available versions and `revertToVersion()` to roll back (preserves metadata like `_uuid`, `_owner`, `_created`).
Example:
```ts
// Save previous version and keep up to 20 versions
await db.ensureHistoryIndexes("hello");
await db.updateVersioned("hello", id, { message: "Hello v2" }, false, 20);

// List latest 5 versions
const versions = await db.listVersions("hello", id, 0, 5);

// Revert to a specific version
await db.revertToVersion("hello", id, versions[0]._v);
```

Further examples:

```ts
// Versioned replace
await db.updateVersioned("hello", id, { message: "replace payload" }, true);

// Keep only the most recent prior state (maxVersions = 1)
await db.updateVersioned("hello", id, { message: "v3" }, false, 1);
```

## MongoDB Atlas
- Create a Database User in Atlas and allow your IP under Network Access.
- Find your cluster address (it looks like `cluster0.xxxxxx.mongodb.net`).
- Minimal config uses SRV (no port) when a single host is provided.
Example (direct config):
```ts
import { K2DB } from "@frogfish/k2db";

const db = new K2DB({
  name: "mydb", // your database name
  hosts: [{ host: "cluster0.xxxxxx.mongodb.net" }], // Atlas SRV host
  user: process.env.DB_USER, // Atlas DB user
  password: process.env.DB_PASS, // Atlas DB password
  slowQueryMs: 300,
});

await db.init();
```

Example (env-based):

```bash
export K2DB_NAME=mydb
export K2DB_HOSTS=cluster0.xxxxxx.mongodb.net
export K2DB_USER=your_user
export K2DB_PASSWORD=your_pass
export K2DB_AUTH_SOURCE=admin
node hello.mjs
```

```js
// hello.mjs (Node 18+, ESM)
import { K2DB } from "@frogfish/k2db";

const conf = K2DB.fromEnv();
const db = new K2DB(conf);
await db.init();
```

## Hello World
- Connect to Atlas, insert into the `hello` collection, then read it back.
```js
// hello-world.mjs
import { K2DB } from "@frogfish/k2db";

// Configure via env or inline config
const db = new K2DB({
  name: "mydb",
  hosts: [{ host: "cluster0.xxxxxx.mongodb.net" }],
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
});

await db.init(); // safe to call multiple times; concurrent calls are deduped
await db.ensureIndexes("hello"); // unique _uuid among non-deleted, plus helpful indexes

// Create a document (owner is required)
const { id } = await db.create("hello", "demo-owner", { message: "Hello, world!" });
console.log("Inserted id:", id);

// Read it back
const doc = await db.get("hello", id); // excludes soft-deleted by default
console.log("Retrieved:", doc);

// Soft delete it (optional)
await db.delete("hello", id);

// Restore it (optional)
await db.restore("hello", { _uuid: id });

await db.release();
```

## Notes
- Use Node 18+ (preferably Node 20+) for ESM + JSON imports.
- Atlas SRV requires only the cluster hostname (no port); the client handles TLS and topology.
## Updates

Patch vs replace:

- `update(collection, id, data)` patches by default using `$set` (fields you pass are updated; others remain).
- `update(collection, id, data, true)` replaces non-metadata fields (PUT-like). Metadata (`_uuid`, `_owner`, `_created`, `_updated`, `_deleted`) is preserved.
- Underscore-prefixed fields in your input are ignored; `_updated` is refreshed automatically.
Examples:
```ts
// Patch specific fields
await db.update("hello", id, { message: "patched value" });

// Replace (preserves metadata, overwrites non-underscore fields)
await db.update("hello", id, { message: "entire new doc", count: 1 }, true);
```

## Schemas (optional, Zod)
- You can register a Zod schema per collection at runtime; it validates and (optionally) strips unknown fields on writes. Nothing is stored in the DB.
- Modes: `strict` (reject unknown fields), `strip` (remove unknown; default), `passthrough` (allow unknown).
Example:
```ts
import { z } from "zod";

// Define
const Hello = z
  .object({
    message: z.string(),
    count: z.number().int().default(0),
  })
  .strip(); // default unknown-key behavior

// Register (in-memory for this instance)
db.setSchema("hello", Hello, { mode: "strip" });

// On create: full schema validation; on patch: partial validation
await db.create("hello", ownerId, { message: "hey", extra: "ignored" });
await db.update("hello", id, { count: 2 }); // partial OK

// To clear
db.clearSchema("hello");
```

## Type Reference (Cheat Sheet)
- `BaseDocument`: Core shape enforced by the library; apps may extend it.
- `CreateResult`: `{ id: string }`
- `UpdateResult`: `{ updated: number }`
- `DeleteResult`: `{ deleted: number }`
- `RestoreResult`: `{ status: string; modified: number }`
- `CountResult`: `{ count: number }`
- `DropResult`: `{ status: string }`
- `PurgeResult`: `{ id: string }`
- `VersionedUpdateResult`: `{ updated: number; versionSaved: number }`
- `VersionInfo`: `{ _uuid: string; _v: number; _at: number }`
### Returns by method

- `get(collection, id, scope?)`: `Promise<BaseDocument>`
- `find(collection, filter, params?, skip?, limit?, scope?)`: `Promise<BaseDocument[]>`
- `findOne(collection, criteria, fields?, scope?)`: `Promise<BaseDocument | null>`
- `aggregate(collection, pipeline, skip?, limit?)`: `Promise<BaseDocument[]>`
- `create(collection, owner, data)`: `Promise<CreateResult>`
- `update(collection, id, data, replace?, scope?)`: `Promise<UpdateResult>`
- `updateAll(collection, criteria, values, scope?)`: `Promise<UpdateResult>`
- `delete(collection, id, scope?)`: `Promise<DeleteResult>`
- `deleteAll(collection, criteria, scope?)`: `Promise<DeleteResult>`
- `purge(collection, id, scope?)`: `Promise<PurgeResult>`
- `restore(collection, criteria, scope?)`: `Promise<RestoreResult>`
- `count(collection, criteria, scope?)`: `Promise<CountResult>`
- `drop(collection, scope?)`: `Promise<DropResult>`
- `ensureIndexes(collection, opts?)`: `Promise<void>`
- `ensureHistoryIndexes(collection)`: `Promise<void>`
- `updateVersioned(collection, id, data, replace?, maxVersions?)`: `Promise<VersionedUpdateResult[]>`
- `listVersions(collection, id, skip?, limit?)`: `Promise<VersionInfo[]>`
- `revertToVersion(collection, id, version)`: `Promise<UpdateResult>`
- Zod registry: `setSchema(collection, zodSchema, { mode }?)`: `void`; `clearSchema(collection)`: `void`; `clearSchemas()`: `void`
## UUID

`_uuid` is a Crockford Base32-encoded UUID v7: uppercase, with hyphens.

Example: `0J4F2-H6M8Q-7RX4V-9D3TN-8K2WZ`

```js
// Canonical uppercase Crockford Base32 form with hyphens
const CROCKFORD_ID_REGEX =
  /^[0-9A-HJKMNP-TV-Z]{5}-[0-9A-HJKMNP-TV-Z]{5}-[0-9A-HJKMNP-TV-Z]{5}-[0-9A-HJKMNP-TV-Z]{5}-[0-9A-HJKMNP-TV-Z]{6}$/;

// Example usage:
const id = "0J4F2-H6M8Q-7RX4V-9D3TN-8K2WZ";
console.log(CROCKFORD_ID_REGEX.test(id));
```

Usage example:

```js
import { isK2ID, K2DB } from "@frogfish/k2db";

isK2ID("0J4F2-H6M8Q-7RX4V-9D3TN-8K2WZ");
```
