stealthql
v0.2.13
Repo-native backend capsule for Next.js: local database, auth, policies, shares, and one-command deployment.
StealthQL
StealthQL is a repo-native backend capsule for apps where the security boundary matters. Same database, different actor: support can be blind, admins can approve, members can only touch their rows, external users can work through scoped share portals, and the ledger records what happened. All of this works without starting from a hosted dashboard.
For agent/MCP usage, see docs/MCP.md.
The fastest path:
```
npm install stealthql
npx stealthql setup next
npm run dev:stealth
```

For TypeScript-first projects or empty Next apps, force TypeScript output:

```
npx stealthql setup next --typescript
```

Then open the generated demo page and start editing the capsule files in your repo:

- stealth.schema.js defines tables.
- stealth.auth.js defines local actors, policies, field masks, and data visibility contracts.
- stealth.functions.js defines intent functions.
- stealth.storage.js and stealth.sync.js define materialization rules.
- stealth.shares.js defines table and record shares.
- stealth.compliance.js maps tables, shares, storage, functions, and data classes to compliance evidence controls.
- stealth.database.js declares preview reviewed SQL migrations, triggers, and procedures as hash-tracked deploy artifacts.
- stealth.jobs.js declares preview scheduled jobs that run named functions as service actors.
- stealth.seeds.js gives the local runtime real starting state.
- stealth.policy.test.js declares authorization attack tests.
What You Get
- Database in seconds: npx stealthql setup next scaffolds a SaaS-shaped database/auth capsule and local PGlite materialization.
- LLM-friendly schema setup: Cursor, Codex, and Claude Code can edit stealth.schema.request.json, preview it, apply it, and run policy/security tests.
- Built-in auth model: local actors, sessions, magic links, Google/GitHub/Facebook OAuth, recovery codes, org membership, service accounts, and share-token actors use one normalized actor model.
- Policy-bound row handles: browser/mobile update and delete flows use short-lived action handles instead of raw IDs by default.
- Google-Sheets-style shares: table/record shares support field masks, proposal mode, CSV round trip, expiration, revocation, and audit logs.
- Preview automation surface: reviewed SQL artifacts are hash-tracked in the capsule, and scheduled jobs run declared functions as service actors while live DB application and external scheduler wiring stay deploy-adapter responsibilities.
- Local-first dev loop: schema/auth/policy/function changes hot-reload without resetting local data.
- Production options: deploy the website and capsule on one DigitalOcean Droplet, or run the Next.js frontend on Vercel with the capsule runtime on DigitalOcean.
Deploy
Friendly deploy menu:

```
npx stealthql deploy
```

Choose:

1. DigitalOcean only: website + capsule runtime on one Droplet
2. Vercel frontend + DigitalOcean capsule runtime

Direct commands:

```
npx stealthql deploy single --url https://app.example.com --repo https://github.com/you/app.git --write
npx stealthql deploy split --frontend-url https://app.example.com --backend-url https://api.example.com --write
```

Local Commands
```
npm run setup:next
npm run build:capsule
npm run export
npm run bench
npm run stress
npm run test:policies
node .\bin\stealthql.js test security --smoke
node .\bin\stealthql.js test security
npm run query:projects
npm run dev
```

Public npm Install
Package page:

https://www.npmjs.com/package/stealthql

Install into an existing app:

```
npm install stealthql
npx stealthql init
```

For an existing Next.js app, use the one-command setup:

```
npx stealthql setup next
```

Both init and setup next write the LLM handoff files into the project:

```
STEALTHQL_AGENT.md
STEALTHQL_SCHEMA_PROMPT.md
stealth.schema.request.json
.cursor/rules/stealthql.mdc
```

Give STEALTHQL_AGENT.md or .cursor/rules/stealthql.mdc to Cursor, Codex, or Claude Code so the agent knows this is a StealthQL capsule project, not GraphQL, Prisma, Firebase, or Supabase.
They also create stealth.project.json, a non-secret stable project identity. Keep it with the project so local runtime data does not move just because you rename the app or folder. Private database files, ledgers, auth state, and storage blobs still live outside the repo by default and .stealthql/ remains ignored.
The dev server prints the local dashboard URL when it starts:

```
npx stealthql dev
```

It tries 8787 when that port is free. In default or auto mode, if 8787 is busy, StealthQL scans upward and uses 8788, 8789, and so on before falling back to an OS-assigned port. If you explicitly pass --port 8787, that port is treated as required unless you also pass --auto-port. The chosen port is written to .stealthql/runtime.json. If a killed Windows process leaves a materializer lock behind, stealthql dev reclaims dead-PID locks automatically; use stealthql doctor unlock --force or stealthql dev --force-unlock only after confirming the old runtime is stopped.
When a port is busy, do not kill the occupying process unless you know it belongs to the same project. Prefer npx stealthql dev --auto-port or npx stealthql dev --port auto, then let the app proxy read .stealthql/runtime.json or the helper-provided STEALTHQL_URL.
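The upward scan can be sketched as a tiny helper. This is an illustration of the documented behavior, not the CLI's actual code; choosePort and isBusy are hypothetical names:

```javascript
// Sketch of the default port-selection behavior described above.
// `isBusy` stands in for a real bind attempt on the local interface.
function choosePort(preferred, isBusy, maxScan = 20) {
  for (let port = preferred; port < preferred + maxScan; port += 1) {
    if (!isBusy(port)) return port; // first free port at or above the preferred one
  }
  return 0; // port 0 asks the OS to assign an ephemeral port
}

// With 8787 and 8788 busy, the scan settles on 8789.
const busy = new Set([8787, 8788]);
console.log(choosePort(8787, (p) => busy.has(p)));
```

In explicit --port mode the scan is skipped entirely, which is why a busy 8787 then fails instead of drifting to a neighbor.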
During development, stealthql dev watches capsule source files and hot-reloads schema, auth, policies, functions, storage, sync, shares, compliance mappings, database/job declarations, and seeds without resetting local data. Use --no-watch to disable this.
CLI

```
node .\bin\stealthql.js init
node .\bin\stealthql.js setup next [--typescript]
node .\bin\stealthql.js schema spec
node .\bin\stealthql.js schema init
node .\bin\stealthql.js schema preview --file stealth.schema.request.json
node .\bin\stealthql.js schema apply --file stealth.schema.request.json --test
node .\bin\stealthql.js capsule build
node .\bin\stealthql.js export --out stealthql-export
node .\bin\stealthql.js bench --sizes 1000,10000
node .\bin\stealthql.js stress --operations 500 --concurrency 64 --http 250
node .\bin\stealthql.js dev --port 8787
node .\bin\stealthql.js doctor unlock --force
node .\bin\stealthql.js query projects --actor alice --format table
node .\bin\stealthql.js query invoices --actor bob --as-of 2026-05-01T14:00:00Z --format table
node .\bin\stealthql.js sql "select * from projects where orgId = 'org_acme';" --actor alice --format table
node .\bin\stealthql.js sql "insert into projects (id, orgId, ownerId, name, visibility) values ('proj_sql', 'org_acme', 'user_alice', 'SQL app', 'private');" --actor alice
node .\bin\stealthql.js mutate projects insert --actor alice --input '{ "id": "proj_new", "orgId": "org_acme", "ownerId": "user_alice", "name": "New app" }'
node .\bin\stealthql.js call createProject --actor alice --input '{ "name": "Capsule CRM" }'
node .\bin\stealthql.js database plan
node .\bin\stealthql.js jobs list
node .\bin\stealthql.js jobs run nightlyCleanup --dry-run
node .\bin\stealthql.js test policies
node .\bin\stealthql.js test security --smoke
node .\bin\stealthql.js test security
node .\bin\stealthql.js replay latest
node .\bin\stealthql.js sovereignty
node .\bin\stealthql.js audit-report --framework soc2-cc7 --out evidence/soc2-cc7
node .\bin\stealthql.js snapshot create --as-of 2026-05-01T14:00:00Z --out snapshots/bug.stealthql-snapshot.json.gz
node .\bin\stealthql.js snapshot restore snapshots/bug.stealthql-snapshot.json.gz --out tmp/repro
```

30-Second Next.js Setup
Inside an existing Next.js app:
```
npx stealthql setup next
npm install
npm run dev:stealth
```

That command creates a real SaaS-shaped backend capsule, compiles it, materializes the local database, wires a same-origin Next API proxy, and adds a demo page at:

http://localhost:3000/stealthql-demo

The default saas preset creates:

- organizations
- users
- memberships
- projects
- invoices

It also configures local auth actors, sessions, magic links, recovery codes, invites, test users, roles, organization membership, generated policy tests, sync shapes, and share demos. The setup report prints the tables and auth features so the developer can see what exists immediately.
Use it from React code with either SQL or mutation JSON. The generated lib/stealthql.* file exports typed local demo actors so alice / bob typos do not become silent string bugs in TypeScript projects:

```js
import { createStealthClient, stealthActors } from "./lib/stealthql";

const stealth = createStealthClient({ actor: stealthActors.alice });

await stealth.sql("select id, name from projects;");
await stealth.mutate("projects", "insert", {
  id: "proj_new",
  orgId: "org_acme",
  ownerId: "user_alice",
  name: "New app"
});
```

Local actor strings are demo-only. Production code should use real sessions, bearer tokens, service tokens, or share tokens instead of hardcoded actors.
The generated proxy keeps browser calls same-origin at /api/stealthql/*, so a Next app does not need to think about CORS, runtime ports, or backend credentials during local development. In production, the same proxy can run on Vercel while STEALTHQL_URL points server-side to a capsule runtime on DigitalOcean. TypeScript projects receive .ts / .tsx integration files, and the package ships declarations for stealthql, stealthql/next/client, stealthql/next/server, and the backwards-compatible stealthql/next entry, so do not add broad declare module "*.js" workarounds.
If a project has no Next config yet, setup next writes a minimal next.config.mjs with allowedDevOrigins: ["127.0.0.1", "localhost"]. Existing configs are not rewritten; add that setting manually if Next.js dev warns about cross-origin local resources from 127.0.0.1.
Public typed imports:

- stealthql: root exports
- stealthql/next/client: browser-safe Next.js client helper
- stealthql/next/server: Next.js server client and proxy helpers
- stealthql/next: compatibility entry that re-exports both sides
- stealthql/client: generic browser, mobile, Vite, Expo, SvelteKit, Astro, Remix client helper
- stealthql/react-native: React Native / Expo client helper for absolute runtime URLs
- stealthql/server: generic server and proxy helpers
- stealthql/vite: Vite dev-server proxy adapter
- stealthql/sveltekit: SvelteKit request-handler proxy adapter
- stealthql/astro: Astro endpoint proxy adapter
- stealthql/schema: typed capsule authoring helpers such as defineSchema and defineAuth
- stealthql/types: shared TypeScript types only

For Vite/SvelteKit/Astro details, including stealthVite() and .stealthql/runtime.json port discovery, see docs/VITE_INTEGRATION.md. Vite adapter runtimeFile paths are resolved relative to the Vite config file first, with process.cwd() kept as a fallback. In Astro and SvelteKit dev mode, stealthVite() is usually the right primitive; the Astro/SvelteKit handler exports are for explicit framework endpoint routes.
React Native / Expo
React Native does not have a same-origin browser proxy, so use an absolute runtime URL and authenticate with a bearer/session token instead of raw local demo actors:

```js
import { createStealthReactNativeClient } from "stealthql/react-native";

const stealth = createStealthReactNativeClient({
  baseUrl: "https://api.example.com",
  token: session.accessToken,
});

const { rows } = await stealth.query("projects");
```

Local emulator URLs usually differ by platform:

```js
// Android emulator -> host machine
createStealthReactNativeClient({ baseUrl: "http://10.0.2.2:8787" });

// iOS simulator -> host machine
createStealthReactNativeClient({ baseUrl: "http://127.0.0.1:8787" });
```

If a mobile app talks through a Next/Vercel proxy instead of the capsule runtime directly, pass that proxy path explicitly:

```js
createStealthReactNativeClient({
  baseUrl: "https://app.example.com",
  basePath: "/api/stealthql",
  token: session.accessToken,
});
```

When token or getToken is configured and no explicit actor is passed, the React Native client omits raw actor selectors from query strings and request bodies. actor: "alice" remains local-demo-only.
For typed capsule files or better IntelliSense in .mjs capsule files:

```js
import { defineSchema } from "stealthql/schema";

export default defineSchema({
  name: "my-app",
  tables: {
    projects: {
      columns: {
        id: { type: "text", primary: true },
        name: { type: "text", required: true }
      },
      indexes: [["name"]],
      searchIndexes: {
        default: ["name"]
      }
    }
  }
});
```

Data Integrity
StealthQL validates and canonicalizes mutation inputs from the compiled schema before policy evaluation and before data reaches the materializer. It does not require Zod and does not silently strip arbitrary user prose.
Defaults are conservative:
- text is Unicode-normalized to NFC and unsafe control/bidirectional characters are rejected
- normal text fields are trimmed; longText preserves surrounding whitespace unless trim: true
- status values are trimmed and lowercased
- email values are trimmed, lowercased, length-limited, and shape-checked
- url values must be HTTP(S), cannot include credentials, and are canonicalized with URL
- integer, number, and money accept safe integers or integer strings and store numbers
- boolean and checkbox accept booleans or common form strings like true, false, yes, and no
- enum, minLength, maxLength, trim, emptyAsNull, and case are honored per column
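As a rough illustration of those defaults, a few of the coercions could be written like this (a simplified sketch, not StealthQL's actual validator):

```javascript
// Simplified sketch of the canonicalization rules listed above.
function coerceBoolean(value) {
  if (typeof value === "boolean") return value;
  const text = String(value).trim().toLowerCase();
  if (text === "true" || text === "yes") return true;
  if (text === "false" || text === "no") return false;
  throw new Error(`not a boolean-like value: ${value}`);
}

function coerceStatus(value) {
  // status values are trimmed and lowercased
  return String(value).trim().toLowerCase();
}

function coerceEmail(value) {
  const email = String(value).trim().toLowerCase();
  // loose shape check for illustration; the real check is stricter
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) throw new Error("bad email shape");
  return email;
}

console.log(coerceBoolean("Yes"), coerceStatus("  OPEN "), coerceEmail(" Alice@Example.COM "));
```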
Example:

```js
columns: {
  email: { type: "email", required: true },
  status: { type: "status", default: "open", enum: ["open", "paid", "archived"] },
  notes: { type: "longText", trim: false, maxLength: 20000 },
  externalUrl: { type: "url", emptyAsNull: true }
}
```

Data Placement Contracts
dataVisibility can also say where a field or table is allowed to exist, not only who may read it. Existing cloudReadable, supportReadable, and aiReadable contracts still work; new projects should add mayExistOn for explicit placement rules.
```js
dataVisibility: {
  "users.email": {
    stored: "hosted-or-local",
    encryptedBy: "runtime",
    cloudReadable: true,
    supportReadable: false,
    aiReadable: false,
    mayExistOn: {
      server: true,
      mobileDevice: true,
      browserCache: false,
      aiPrompt: false,
      supportExport: false,
      evidencePacket: true,
      externalShare: false
    }
  }
}
```

Supported placement keys are server, mobileDevice, browserCache, aiPrompt, supportExport, evidencePacket, and externalShare. Explicit false values are enforced by placement-aware response filtering and redaction: browser/Next/Vite proxy reads are treated as browserCache, React Native reads are treated as mobileDevice, server/service clients are treated as server, AI ports use aiPrompt, shares use externalShare, and ledger evidence uses evidencePacket. Omitted keys preserve existing behavior for older apps.
AI Ports
Backend functions should call AI through ctx.ports.ai.*, not raw provider fetches:
```js
export async function summarizeInvoice(ctx, input) {
  const invoices = await ctx.query("invoices");
  const invoice = invoices.find((row) => row.id === input.invoiceId);
  return ctx.ports.ai.complete({
    providerClass: "hostedCloud",
    prompt: `Summarize invoice ${invoice.id}: ${invoice.internalNote}`,
    sources: [{ table: "invoices", row: invoice }]
  });
}
```

V1 is the governance surface: it validates, redacts/blocks, and ledger-records AI calls. Live provider adapters can be wired behind this port later without changing function code.
If invoices.internalNote is marked aiReadable: false, mayExistOn.aiPrompt: false, or aiReadable.hostedCloud: false, the hosted-cloud AI call is blocked and the attempted violation is written to the hash-chained ledger without logging the raw prompt.
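The blocking decision can be pictured as a predicate over the field's visibility contract. This is an illustrative sketch only; the real runtime also records the attempt in the hash-chained ledger, and aiReadAllowed is a hypothetical name:

```javascript
// Sketch: may a field enter an AI prompt for a given provider class?
// Explicit `false` blocks; omitted keys fall back to permissive legacy behavior.
function aiReadAllowed(visibility, providerClass) {
  if (!visibility) return true; // no contract declared for this field
  if (visibility.mayExistOn && visibility.mayExistOn.aiPrompt === false) return false;
  const ai = visibility.aiReadable;
  if (ai === false) return false; // blanket block
  if (ai && typeof ai === "object" && ai[providerClass] === false) return false;
  return true;
}

const internalNote = { aiReadable: { localDevice: true, hostedCloud: false } };
console.log(aiReadAllowed(internalNote, "hostedCloud")); // hosted-cloud call is blocked
console.log(aiReadAllowed(internalNote, "localDevice")); // on-device call is allowed
```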
Provider classes are localDevice, privateCloud, hostedCloud, vendorAgent, and support. Use an aiReadable matrix when local/on-device AI is allowed but cloud AI is not:
```js
dataVisibility: {
  "invoices.internalNote": {
    aiReadable: {
      localDevice: true,
      hostedCloud: false,
      vendorAgent: false,
      support: false,
      modelTraining: false,
      redactedExtract: true
    }
  }
}
```

When redactedExtract: true is allowed and the function opts in with redactedExtract: true, the AI port redacts restricted source fields and matching prompt text before recording/simulating the call.
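Conceptually, the redacted-extract path substitutes restricted source values in the prompt text before anything is recorded or sent. A minimal sketch with hypothetical names:

```javascript
// Sketch: strip restricted field values out of a prompt before the AI call proceeds.
function redactPrompt(prompt, restrictedValues, placeholder = "[REDACTED]") {
  let redacted = prompt;
  for (const value of restrictedValues) {
    if (!value) continue;
    // split/join performs a literal (non-regex) global replacement
    redacted = redacted.split(String(value)).join(placeholder);
  }
  return redacted;
}

const prompt = "Summarize invoice inv_1: memo says ACME-SECRET-42";
console.log(redactPrompt(prompt, ["ACME-SECRET-42"]));
```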
Agent Runtime Foundation
The MCP server is a separate product, but the runtime now has the complete substrate it needs:
- auth.agentAccounts declares durable agent identities distinct from humans, services, and share principals.
- POST /_stealthql/auth/agent-session mints short-lived agent_... bearer sessions for an authorized issuer.
- POST /_stealthql/auth/agent-session/revoke revokes one live session or all sessions for an agent.
- Agent actors carry agentName, agentSessionId, taskId, scopeShareId, trustClass, keyId, and public-key fingerprint metadata into policy decisions and ledger events.
- GET /_stealthql/scope/describe returns the policy-scoped surface an actor can see: tables, fields, shares, functions, AI-port placement rules, and rate limits.
- POST /_stealthql/memory/recall and /forget expose an agent-filtered ledger memory view with tombstones.
- stealth.mutate(..., { dryRun: true, mcpToolName }) records a dry-run decision without changing materialized state.
- Agent sessions are per-minute rate-limited by account config and rate-limit events are ledger-recorded.
Example:
```js
export default {
  agentAccounts: {
    claudeSupport: {
      id: "agent_claude_support",
      issuer: "user_alice",
      scopeShareId: "acmeInvoicesForBob",
      trustClass: "hostedCloud",
      keyId: "agent_claude_support.v1",
      publicKey: process.env.STEALTHQL_AGENT_CLAUDE_PUBLIC_KEY,
      sessionTtlMinutes: 60,
      maxConcurrent: 3,
      rateLimits: {
        queriesPerMinute: 120,
        mutationsPerMinute: 30,
        functionsPerMinute: 30,
        aiCallsPerMinute: 60
      },
      scopes: ["agent:read"]
    }
  }
};
```

Service actors with agent:issue, agent:write, or provisioning:write, platform admins, or the configured issuer can create agent sessions. Agent accounts should be scoped with shares and normal policies; do not give MCP agents broad service tokens.
Agent authority is session-backed. agent:<name> is not a usable raw actor selector, and plain JSON objects like { type: "agent" } are rejected unless they came from a verified live agent session inside the runtime. If an agent session is revoked or expires, old actor objects stop authorizing access too.
For user-facing search on large tables, use search indexes instead of broad LIKE scans:
```js
const results = await stealth.search("projects", "capsule", {
  limit: 25,
  columns: ["id", "name"]
});
```

Every capsule build also emits schema-specific client types:

```
.stealthql/client.d.ts
```

When that file is included by TypeScript, normal stealthql/next/client, stealthql/next, and stealthql/client imports become schema-aware. Wrong table names, unknown mutation fields, missing required insert fields, and wrong scalar types fail at type-check time. If your tsconfig.json has a custom include, add:
```json
{
  "include": ["**/*.ts", "**/*.tsx", ".stealthql/**/*.d.ts"]
}
```

IDs are opaque by default. If an insert omits a text id, StealthQL generates a prefix_<random> ID from crypto.randomUUID(), not a sequential number. That reduces casual enumeration risk, but it is not the authorization boundary: IDOR prevention still comes from actor resolution plus table, field, share, and storage policies.
StealthQL also issues policy-bound row handles on query results. Treat id as identity and handle as authority:
```js
const { rows } = await stealth.query("projects");

await stealth.mutate("projects", "update", {
  handle: rows[0].$handles.update,
  patch: {
    visibility: "public"
  }
});
```

Rows include $key for UI/cache identity, $handles.read, $handles.update, and $handles.delete for action-specific capabilities, plus legacy $handle as an update-handle alias for older clients. Handles are sealed with authenticated encryption, short-lived, and bound to the table, row id, actor id, session when available, active organization, capsule hash, policy hash, row authorization hash, action, and allowed update fields. Destructive handles are single-use, handle-carrying runtime responses are sent with Cache-Control: no-store, and handles/auth-like tokens are redacted from ledger events. In production mode, normal user update/delete mutations require a row handle and still re-check policy before applying the mutation.
Raw-id update/delete is reserved for local development, explicit service actors, or a deliberate authConfig.capabilities.allowRawIdMutations escape hatch. The client exposes that escape hatch under an intentionally noisy API:
```js
await stealth.unsafe.mutateById("projects", "delete", {
  id: "proj_123"
});
```

Use that only in trusted server code. Browser and mobile user flows should mutate by handle.
For webhooks, signup provisioning, billing callbacks, cron jobs, and background workers, do not use local demo actors like alice. Define a service account and use a service token:
```js
// stealth.auth.js
export default {
  serviceAccounts: {
    system: {
      id: "svc_system",
      tokenEnv: "STEALTHQL_SERVICE_TOKEN",
      scopes: ["provisioning:write", "billing:write", "share:issue"]
    }
  }
};
```

```js
import { createStealthServiceClient } from "stealthql/server";

const stealth = createStealthServiceClient({
  token: process.env.STEALTHQL_SERVICE_TOKEN,
  serviceAccount: "system"
});
```

serviceAccount is the service actor hint, for example "roofpo_service" becomes service:roofpo_service. The bearer token remains the authority, but the hint keeps server-side URLs and request bodies from falling back to actor=anonymous.
Use stealth.functions.* for actions that need to touch multiple rows or cross role boundaries. For example, a user-facing action can insert an audit/event row as the user, while the function performs a protected aggregate update through ctx.mutate under the function's policy. Do not split those workflows into unrelated browser mutations with different actors.
Service actors are for trusted system-originated work: Stripe/Twilio webhooks after signature verification, signup provisioning, cron jobs, migrations, and internal reconciliation. Do not accept a member-supplied PATCH body and then write it through createStealthServiceClient(). Human-originated writes should run as the human actor, usually with row handles, so table and field policies still apply. npx stealthql test security includes a static confused-deputy check for Next route files that parse request bodies and then perform service-client mutations; add stealthql-confused-deputy-ok only to routes that receive trusted system input, not browser/member data.
Database Artifacts And Jobs
Enterprise SQL belongs in stealth.database.* as reviewed .sql files, not as ad hoc strings in app routes. This is a preview deploy surface: the compiler validates that migration, procedure, and trigger files are relative project .sql paths, rejects traversal/private directories, blocks psql shell/include commands, requires explicit unsafe acknowledgement for privileged SQL features, and hashes every SQL artifact into the capsule. The JS runtime does not blindly execute stored procedures; deploy/runtime adapters can apply the reviewed artifacts using the capsule hashes.
Scheduled jobs live in stealth.jobs.* as a preview scheduler contract. A job must reference an exported stealth.functions.* handler and a declared service:<name> actor. Local actors like alice or arbitrary actor objects are rejected at compile time so a cron path cannot become user input with service authority. Run declared jobs manually or from a trusted scheduler with:
```
npx stealthql jobs list
npx stealthql jobs run nightlyCleanup --dry-run
```

Generated custom-table update policies are owner-or-admin by default. UI hiding is not authorization: if a role like crew, client, viewer, or subcontractor should not create or edit a domain table such as jobs, documents, contracts, payments, or customer records, tighten stealth.auth.* table/field policies and add negative policy tests for that member session.
Service actors are the only intended way for trusted server code to perform explicit raw-id system mutations in production.
Zod
StealthQL does not require Zod. If an app already uses it, keep StealthQL as the source of truth and bind Zod schemas to generated capsule types:
```ts
import { z } from "zod";
import type { InsertInput } from "./.stealthql/client.js";

const projectInsertSchema = z.object({
  orgId: z.string(),
  ownerId: z.string(),
  name: z.string(),
  visibility: z.string().optional()
}) satisfies z.ZodType<InsertInput<"projects">>;
```

That gives form/request validation without creating a second schema authority.
For Server Components, server actions, or API routes, use the server client and forward request headers so StealthQL resolves the actor from a real session cookie, bearer token, or share token. Do not hardcode actor: "alice" in deployed routes.
```ts
import { headers } from "next/headers";
import { createStealthServerClient } from "stealthql/next/server";

const stealth = createStealthServerClient({
  headers: await headers()
});

const projects = await stealth.query("projects");
```

Multi-Business Auth
Email is an identity credential, not the tenant boundary. A single email can belong to more than one business, so session actors include activeOrgId, activeMembership, membership-scoped roles, and separate platformRoles.
Normal tenant policies should use:

```js
{ eq: ["row.orgId", "actor.activeOrgId"] }
```

Read that as: allow the row when row.orgId equals the actor's currently selected organization. The policy DSL is JSON-shaped so agents and tests can compile it, but each clause maps to plain authorization language: eq means equals, includes means array membership, all means every child rule must pass, and any means at least one child rule must pass.
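A toy evaluator shows how those clauses compose. This sketch treats strings starting with row. or actor. as paths and everything else as literals, which is an assumption about the DSL, not its specification:

```javascript
// Toy evaluator for the eq / includes / all / any clause shapes described above.
function resolve(operand, scope) {
  if (typeof operand !== "string") return operand;
  if (!operand.startsWith("row.") && !operand.startsWith("actor.")) return operand; // literal
  return operand.split(".").reduce((obj, key) => (obj == null ? obj : obj[key]), scope);
}

function evaluate(rule, scope) {
  if (rule.eq) return resolve(rule.eq[0], scope) === resolve(rule.eq[1], scope);
  if (rule.includes) {
    const list = resolve(rule.includes[0], scope);
    return Array.isArray(list) && list.includes(resolve(rule.includes[1], scope));
  }
  if (rule.all) return rule.all.every((child) => evaluate(child, scope));
  if (rule.any) return rule.any.some((child) => evaluate(child, scope));
  return false; // unknown clauses fail closed
}

const scope = {
  row: { orgId: "org_acme" },
  actor: { activeOrgId: "org_acme", activeMembership: { roles: ["admin"] } }
};
console.log(evaluate({ eq: ["row.orgId", "actor.activeOrgId"] }, scope));
console.log(evaluate({ all: [
  { eq: ["row.orgId", "actor.activeOrgId"] },
  { includes: ["actor.activeMembership.roles", "admin"] }
] }, scope));
```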
Normal tenant-admin policies should check the active membership, not a global role:

```js
{ includes: ["actor.activeMembership.roles", "admin"] }
```

Membership Contract
StealthQL apps should treat membership as a first-class auth contract, not an app-specific pile of profile columns. The generated SaaS capsule declares this contract in authConfig.memberContract and maps it onto the generated users and memberships tables.
The baseline contract covers identity, lifecycle, roles, and recovery factors:
```js
authConfig: {
  memberContract: {
    userTable: "users",
    membershipTable: "memberships",
    fields: {
      id: "id",
      email: "email",
      phone: "phone",
      name: "name",
      status: "status",
      signedUpAt: "signedUpAt",
      emailVerifiedAt: "emailVerifiedAt",
      phoneVerifiedAt: "phoneVerifiedAt",
      twoFactorRequired: "twoFactorRequired",
      suspendedAt: "suspendedAt",
      disabledAt: "disabledAt"
    },
    membershipFields: {
      userId: "userId",
      orgId: "orgId",
      role: "role",
      status: "status",
      joinedAt: "joinedAt",
      invitedAt: "invitedAt",
      suspendedAt: "suspendedAt",
      disabledAt: "disabledAt",
      twoFactorRequired: "twoFactorRequired"
    },
    statuses: {
      invited: "invited",
      active: "active",
      suspended: "suspended",
      disabled: "disabled",
      deleted: "deleted"
    },
    factors: ["magic", "sms", "totp"],
    requiredFactors: 2
  }
}
```

Runtime actor snapshots use the same shape: email, phone, status, signedUpAt, emailVerifiedAt, phoneVerifiedAt, twoFactorRequired, suspendedAt, disabledAt, plus active-membership role, status, joinedAt, and twoFactorRequired. Suspended, disabled, deleted, banned, or locked accounts cannot mint new sessions. Suspended, disabled, removed, revoked, or inactive active memberships cannot mint a session for that organization.
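That session gate can be sketched as a pair of status checks (illustrative names; the runtime's real gate also covers factors and org selection):

```javascript
// Sketch of the session-minting gate described above.
const BLOCKED_USER_STATUSES = new Set(["suspended", "disabled", "deleted", "banned", "locked"]);
const BLOCKED_MEMBERSHIP_STATUSES = new Set(["suspended", "disabled", "removed", "revoked", "inactive"]);

function canMintSession(user, activeMembership) {
  if (BLOCKED_USER_STATUSES.has(user.status)) return false;
  if (activeMembership && BLOCKED_MEMBERSHIP_STATUSES.has(activeMembership.status)) return false;
  return true;
}

console.log(canMintSession({ status: "active" }, { status: "active" }));    // allowed
console.log(canMintSession({ status: "suspended" }, { status: "active" })); // blocked
```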
If twoFactorRequired is true on the user or active membership, a single magic link is not enough to create a session. Magic-link verification returns requiresSecondFactor: true; the UI should collect another factor and call stealth.auth.verifyRecoveryFactors(...).
Magic links are for existing registered actors. A public login route should look up the submitted email against its member/user contract first, return the same generic UI response for unknown emails, and only ask StealthQL to issue a link for a known actor. Do not turn arbitrary submitted emails into actor objects for magic-link issuance. Signup is a separate provisioning flow: create/provision the actor first, then send a link to that registered actor.
OAuth providers are also resolved into the same member contract. Enable providers in stealth.auth.js, configure exact callback/redirect origins, and set provider secrets in server-only env vars:
```js
authConfig: {
  hostedAuth: {
    magicLink: true,
    oauth: ["google", "github", "facebook"]
  },
  production: {
    allowedOrigins: ["https://app.example.com"],
    allowedRedirectOrigins: ["https://app.example.com"]
  },
  oauth: {
    allowedCallbackOrigins: ["https://api.example.com"]
  }
}
```

```
STEALTHQL_OAUTH_GOOGLE_CLIENT_ID=...
STEALTHQL_OAUTH_GOOGLE_CLIENT_SECRET=...
STEALTHQL_OAUTH_GITHUB_CLIENT_ID=...
STEALTHQL_OAUTH_GITHUB_CLIENT_SECRET=...
STEALTHQL_OAUTH_FACEBOOK_CLIENT_ID=...
STEALTHQL_OAUTH_FACEBOOK_CLIENT_SECRET=...
```

App code starts the provider redirect through the runtime:
```js
const started = await stealth.auth.startOAuth("google", {
  redirectTo: "/dashboard",
  orgId: "org_acme"
});

window.location.href = started.url;
```

The runtime owns /_stealthql/auth/oauth/:provider/start and /_stealthql/auth/oauth/:provider/callback, uses server-side authorization-code exchange, one-time state, PKCE where supported, and sets the normal stealth_session cookie after provider verification. OAuth does not silently create members: unknown emails are denied until the app explicitly provisions the actor. Provider identities are linked by provider subject hash; verified email is only used to find an existing member on first link. Facebook does not expose the same verified-email signal as Google/GitHub in this implementation, so email auto-linking stays off for Facebook unless you explicitly set authConfig.oauth.providers.facebook.allowUnverifiedEmailLink = true.
When a magic-link email maps to multiple businesses, verification returns requiresOrgSelection: true until the UI supplies an org:
```js
const pending = await stealth.auth.verifyMagicLink(token);

if (pending.requiresOrgSelection) {
  await stealth.auth.verifyMagicLink(token, { orgId: "org_acme" });
}
```

If a user cannot receive email, use a one-time recovery code instead of adding passwords or raw actor impersonation. Recovery codes are hashed at rest, short-lived, and consumed once. They can be issued by the signed-in user for themselves, an active org admin for a member in that org, local dev, or a trusted service account with provisioning:write:
```js
const code = await stealth.auth.createRecoveryCode({ actor: "bob" });
await stealth.auth.verifyRecoveryCode(code.token, { orgId: "org_beta" });
```

For stronger account recovery, use two independent factors. StealthQL supports magic-link tokens, SMS codes, and TOTP codes from Google Authenticator-compatible apps. Recovery only mints a session after the configured number of distinct factors verify, defaulting to two:
```js
const sms = await stealth.auth.requestSmsCode({ actor: "bob" });
const totp = await stealth.auth.createTotpEnrollment({ actor: "bob" });

await stealth.auth.verifyTotpEnrollment({
  enrollmentId: totp.enrollmentId,
  code: "123456"
});

await stealth.auth.verifyRecoveryFactors({
  actor: "bob",
  smsToken: sms.token,
  totpCode: "123456"
});
```

Production SMS delivery uses the same locked-down port model as email/webhooks. Configure STEALTHQL_SMS_WEBHOOK_URL and optionally STEALTHQL_SMS_WEBHOOK_ALLOWED_HOSTS; without a live URL, SMS sends are simulated or fail closed without exposing the runtime to arbitrary outbound URLs.
In production mode, local actor impersonation stays disabled. If a trusted server route needs to provision a user session, create a magic link, or create a recovery code, use a service account with provisioning:write and call the auth endpoint through createStealthServiceClient() or an Authorization: Bearer <service-token> header. Anonymous callers and normal session tokens cannot bypass the local auth gate.
To re-run the database/auth setup later:
```
npm run stealth:setup
npm run stealth:test
```

stealthql schema apply --test preserves local data by default and runs tests against an isolated temporary materializer. Use npm run stealth:reset, stealthql dev --reset, or schema apply --reset only when you explicitly want to discard local data and rebuild from seeds.
Database Wizard
Run the local runtime and open:
http://127.0.0.1:<printed-port>/wizard

The wizard supports templates and custom table setup. A non-technical user can enter table names, column names, and friendly field types like short text, long text, email, money, status, checkbox, date, and URL. StealthQL sanitizes those names into safe database identifiers, adds org-scoped auth by default, generates tenant policies, masks private fields, creates or preserves seed data, writes policy tests, backs up replaced capsule files, rebuilds the capsule, and hot-reloads the local materializer without discarding existing rows.
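Identifier sanitization along those lines might look like this (a sketch under assumed rules; the wizard's actual transformation may differ):

```javascript
// Sketch: turn a friendly table/column name into a safe database identifier.
function sanitizeIdentifier(name) {
  const cleaned = name
    .trim()
    .replace(/[^A-Za-z0-9]+/g, "_") // collapse spaces and punctuation to underscores
    .replace(/^_+|_+$/g, "")        // trim leading/trailing underscores
    .toLowerCase();
  // SQL identifiers should not start with a digit; prefix if they do.
  return /^[0-9]/.test(cleaned) ? `t_${cleaned}` : cleaned;
}

console.log(sanitizeIdentifier("Customer Records!"));
console.log(sanitizeIdentifier("2024 Invoices"));
```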
Built-in wizard templates:
SaaS
CRM
Marketplace
Content
AI Schema Import
Cursor, Codex, Claude Code, and other coding agents can design the schema through a machine-readable request file instead of hand-editing every capsule file.
npx stealthql schema spec
npx stealthql schema init
npx stealthql schema prompt --idea "A client portal for invoices, projects, documents, and approvals" --out STEALTHQL_SCHEMA_PROMPT.md
npx stealthql schema preview --file stealth.schema.request.json
npx stealthql schema apply --file stealth.schema.request.json --test
schema init writes:
STEALTHQL_AGENT.md
.cursor/rules/stealthql.mdc
stealth.schema.request.json
STEALTHQL_AGENT.md explains the StealthQL architecture to an LLM. The Cursor rule tells Cursor to stop looking for GraphQL, Apollo, Prisma, Supabase, or hosted DB endpoints and to use stealthql/next/client for browser code and stealthql/next/server for server code instead.
schema prompt writes a copy/paste-ready LLM task. Give STEALTHQL_SCHEMA_PROMPT.md to Cursor, Codex, Claude Code, or another coding agent; it should write stealth.schema.request.json, then run the preview/apply commands.
The package ships a sample at examples/client-portal.schema.request.json. Keep stealth.schema.request.json in an app root only when it is the active requested schema; stale sample requests confuse agents because the applied capsule source is stealth.schema.*.
An agent can change stealth.schema.request.json like this:
{
"format": "stealthql.schema-request",
"version": 1,
"mode": "custom",
"appName": "Client Portal",
"tables": [
{
"name": "Clients",
"columns": [
{ "name": "Name", "type": "text", "required": true, "indexed": true },
{ "name": "Email", "type": "email", "private": true },
{ "name": "Status", "type": "status" }
]
}
],
"serviceAccounts": [
{
"name": "billing_worker",
"scopes": ["provisioning:write", "clients:read"],
"memberships": ["all"]
}
]
}
Applying that request generates the schema, auth model, policies, private-field redaction, seeds, sync shapes, shares, and policy tests. The --test flag immediately runs policy and security checks so the agent can iterate without leaving the editor.
Local auth emulator endpoints are available immediately in dev:
POST /_stealthql/auth/sign-in-as
POST /_stealthql/auth/magic-link
POST /_stealthql/auth/verify-magic-link
POST /_stealthql/auth/recovery-code
POST /_stealthql/auth/verify-recovery-code
POST /_stealthql/auth/sms-code
POST /_stealthql/auth/totp/enroll
POST /_stealthql/auth/totp/confirm
POST /_stealthql/auth/recovery-factors
GET /_stealthql/auth/session
GET /_stealthql/auth/actor
POST /_stealthql/auth/sign-out
POST /_stealthql/auth/test-users
POST /_stealthql/auth/invites
POST /_stealthql/auth/invites/accept
POST /_stealthql/auth/share-token
Local share pages are available at:
http://127.0.0.1:<printed-port>/share/acmeInvoicesForBob?actor=bob
http://127.0.0.1:<printed-port>/share/acmeInvoicePageForBob?actor=bob
Shares are delegated owner grants. The owner must be allowed to create/read the shared resource, then the recipient receives only the share-scoped fields, filters, grants, expiration, revocation state, and proposal permissions. A share does not give the recipient the owner's whole org membership.
Shares can also become signable portals. Add a sign grant and an actions.sign contract-signature action to capture a signer name, explicit consent, and a drawn signature. The signature image is accepted as base64 input, decoded and validated as PNG/JPEG/WebP, stored in a policy-bound storage bucket, hashed, and referenced from a signed packet. StealthQL records the document hash, packet hash, capsule hash, share id, row id, signer identity, consent text, signature image hash, storage object ids, timestamp, and hashed request evidence in the ledger. Do not model signatures as base64 columns. Product copy should say this captures electronic signatures with an audit trail; do not claim every signed packet is automatically legally binding in every jurisdiction. When lockAfterSign is true, the row stays locked across dev sessions because signature state is runtime state, not page state.
If consentText or intentText is omitted, the generated local portal shows conservative default copy. Set either field to a string to override it, or set it to null to suppress that default paragraph in generated portal HTML. The requires array controls validation; suppressing display text does not remove the requirement for explicit consent.
actions: {
sign: {
type: "contract-signature",
signatureBucket: "signatures",
documentTemplate: "work-order-v1",
documentVersion: "1",
requires: ["fullName", "consent", "signatureImage"],
lockAfterSign: true
}
}
Shares can also opt into spreadsheet round trips:
csvRoundTrip: true
That enables CSV export, editing in Google Sheets or Excel, and reupload as proposal diffs. The runtime validates the CSV header, row ids, visible fields, editable fields, schema types, Unicode constraints, share expiration/revocation, and actor policy before it writes proposal events. It never performs direct spreadsheet bulk writes.
GET /_stealthql/shares/acmeInvoicesForBob/export.csv?actor=bob
POST /_stealthql/shares/acmeInvoicesForBob/preview-import.csv
POST /_stealthql/shares/acmeInvoicesForBob/import.csv
POST /_stealthql/shares/acmeInvoicePageForBob/sign
GET /_stealthql/shares/acmeInvoicePageForBob/signatures/:signatureId
GET /_stealthql/proposals?actor=alice
POST /_stealthql/shares/acmeInvoicesForBob/proposals/:proposalId/accept
POST /_stealthql/shares/acmeInvoicesForBob/proposals/:proposalId/reject
npx stealthql share export acmeInvoicesForBob --actor bob --out invoices.csv
npx stealthql share preview-import acmeInvoicesForBob --actor bob --file invoices.csv
npx stealthql share import acmeInvoicesForBob --actor bob --file invoices.csv
npx stealthql share issue-token acmeInvoicesForBob --actor alice --principal bob --email [email protected]
npx stealthql share read acmeInvoicesForBob --token share_...
npx stealthql share export acmeInvoicesForBob --token share_... --out invoices.csv
Proposal responses use proposalId as the canonical identifier and include id as a UI-friendly alias.
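The import-side validation that turns a CSV back into proposal diffs can be sketched minimally. The share shape (visibleFields, editableFields), the parsed-CSV shape, and the proposal shape here are assumptions for illustration; the real runtime also checks schema types, Unicode constraints, expiration, and actor policy.

```javascript
// Illustrative sketch: validate a reuploaded CSV against the share contract
// and emit proposal diffs instead of writing rows directly.
function csvImportToProposals(share, currentRows, parsedCsv) {
  const expectedHeader = ["id", ...share.visibleFields];
  if (JSON.stringify(parsedCsv.header) !== JSON.stringify(expectedHeader)) {
    throw new Error("CSV header does not match the share's visible fields");
  }
  const byId = new Map(currentRows.map((r) => [r.id, r]));
  const proposals = [];
  for (const row of parsedCsv.rows) {
    const current = byId.get(row.id);
    if (!current) throw new Error(`unknown row id: ${row.id}`);
    const changes = {};
    for (const field of share.visibleFields) {
      if (row[field] === current[field]) continue;
      if (!share.editableFields.includes(field)) {
        throw new Error(`field "${field}" is visible but not editable`);
      }
      changes[field] = { from: current[field], to: row[field] };
    }
    if (Object.keys(changes).length > 0) {
      proposals.push({ rowId: row.id, changes }); // a diff, never a direct write
    }
  }
  return proposals;
}
```

The key property is that the output is always a reviewable diff for the proposal inbox, never a bulk mutation.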
Compliance Evidence Packs
Compliance mappings live in stealth.compliance.js / stealth.compliance.mjs. Use them to map tables, storage buckets, shares, functions, and data classes to evidence controls such as SOC 2 CC7, HIPAA Security Rule sections, and GDPR Articles 30/32.
export default {
dataClasses: {
financial: {
controls: ["soc2.CC7.2", "gdpr.Art30", "gdpr.Art32"],
aiReadable: false,
retention: "7y"
}
},
tables: {
invoices: {
dataClasses: ["financial"],
controls: ["soc2.CC7.2", "soc2.CC8.1", "gdpr.Art32"],
purpose: "Customer billing records"
}
}
};
Generate an evidence bundle from the compiled capsule and hash-chained ledger:
npx stealthql audit-report --framework soc2-cc7 --out evidence/soc2-cc7
npx stealthql audit-report --framework hipaa-security --json
npx stealthql audit-report --framework gdpr --since 2026-01-01
The command writes report.json, report.md, compliance-manifest.json, ledger-summary.json, control-evidence.json, and gaps.json when --out points to a directory. These are compliance-readiness evidence packs, not automatic certification or legal advice; manual evidence like training, vendor agreements, risk assessments, and auditor judgment still belongs outside the capsule.
Backend Snapshots
A StealthQL backend can be captured as a portable time-capsule artifact: capsule sources, compiled capsule, generated client files, and the hash-chained ledger prefix up to a chosen timestamp.
npx stealthql snapshot create --as-of 2026-05-01T14:00:00Z --out snapshots/bug-123.stealthql-snapshot.json.gz
npx stealthql snapshot inspect snapshots/bug-123.stealthql-snapshot.json.gz
npx stealthql snapshot restore snapshots/bug-123.stealthql-snapshot.json.gz --out tmp/bug-123-repro
Restores place the ledger under .stealthql/events.ndjson for local runtime migration, plus a snapshot-restore.json receipt. V1 snapshots intentionally focus on the capsule plus ledger; storage blob inclusion is reserved for a later artifact format. The restored project can be materialized with npx stealthql dev.
Historical reads can query the ledger-backed state as of a timestamp:
npx stealthql query invoices --actor bob --as-of 2026-05-01T14:00:00Z --format table
The V1 policy mode is current-policy-over-historical-data: StealthQL reconstructs historical rows from seeds plus mutation events, then applies the current capsule policies and field masks. Historical query results do not include mutation handles.
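The reconstruction idea can be sketched under assumed event and policy shapes; this is not the actual ledger format, just the replay-then-filter pattern.

```javascript
// Illustrative sketch of current-policy-over-historical-data: replay ordered
// mutation events over seed rows up to a cutoff, then apply today's policy.
function stateAsOf(seedRows, events, asOf, policyFilter) {
  const rows = new Map(seedRows.map((r) => [r.id, { ...r }]));
  const cutoff = new Date(asOf);
  for (const ev of events) {
    if (new Date(ev.at) > cutoff) break; // ledger is append-only and time-ordered
    if (ev.type === "insert") rows.set(ev.row.id, { ...ev.row });
    if (ev.type === "update") {
      const row = rows.get(ev.rowId);
      if (row) Object.assign(row, ev.patch);
    }
    if (ev.type === "delete") rows.delete(ev.rowId);
  }
  // The CURRENT capsule policy filters the reconstructed historical rows.
  return [...rows.values()].filter(policyFilter);
}
```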
Local Event Inspection
GET /_stealthql/events is local-only and admin-only. In local mode it is an inspection surface over the raw ledger so admins can debug policy decisions, share access, SQL commands, mutations, storage events, side-effect ports, AI-port denials, and MCP activity. The ledger is not exposed as a normal SQL table such as _stealth_events; use the local events endpoint, npx stealthql audit, or typed audit/report commands instead. Ordinary stealth.query(...) reads are intentionally not logged as ledger events; use SQL/share/audit/mutation/function events for evidence. Production deployments should keep event inspection blocked or expose only tenant-filtered audit summaries.
Personal Beta Runbook
For personal self-hosted beta use, run the readiness gate:
npx stealthql deploy self
It checks the capsule hash, durable PGlite materialization, ledger hash-chain integrity, bundled function integrity, the production actor boundary, private runtime data location, and .gitignore protections. It also warns about declared-but-not-yet-productionized subsystems like storage buckets and live side-effect ports.
For a fresh DigitalOcean Droplet, use the hardened Ubuntu bootstrap runbook:
docs/DIGITALOCEAN_DEPLOYMENT.md
scripts/digitalocean-ubuntu-setup.sh
The Droplet script supports plan, deploy, and doctor modes. It keeps runtime state in /var/lib/stealthql/<app>, secrets in /etc/stealthql/<app>.env, source releases under /opt/stealthql/releases/<app>, and nginx as the only public entrypoint. Agents should fill in variables and run the script, not improvise root-level server setup.
For the common split where Vercel hosts the Next.js frontend and DigitalOcean hosts only the capsule runtime, use:
npx stealthql deploy split --frontend-url https://app.example.com --backend-url https://api.example.com --write
That plan validates the generated /api/stealthql/* proxy, checks production auth origins, prints the Vercel env vars plus DigitalOcean bootstrap variables, and with --write creates copy-paste deployment files in the app repo. The full runbook is docs/VERCEL_DIGITALOCEAN_SPLIT.md.
For the simplest one-machine deployment, put the Next.js website and capsule runtime on the same Droplet:
npx stealthql deploy
Choose the DigitalOcean-only option in the prompt, or run the direct command:
npx stealthql deploy single --url https://app.example.com --repo https://github.com/you/your-app.git --write
The generated server command uses:
APP_REPO="https://github.com/you/your-app.git" \
APP_NAME="my-app" \
DOMAIN="app.example.com" \
LETSENCRYPT_EMAIL="[email protected]" \
DEPLOY_WEB=true \
RUN_SECURITY_TESTS=true \
/root/stealthql-do-setup.sh
In that mode nginx proxies public traffic to Next.js on 127.0.0.1:3000, and the generated Next proxy calls StealthQL privately on 127.0.0.1:8787. See docs/DIGITALOCEAN_SINGLE_DROPLET.md.
Private runtime state does not live in the project by default. Local database state, auth emulator tokens, ledgers, and storage blobs are stored under the OS app-data directory, such as %LOCALAPPDATA%\StealthQL\projects\<project-key> on Windows. You can override this with STEALTHQL_DATA_DIR, but it must point outside the repo and outside deploy artifacts.
The repo should contain source capsule files like stealth.schema.mjs, stealth.auth.mjs, stealth.functions.mjs, stealth.storage.mjs, stealth.sync.mjs, stealth.shares.mjs, stealth.compliance.mjs, stealth.database.mjs, stealth.jobs.mjs, and stealth.policy.test.mjs. Generated/private artifact folders are ignored:
.stealthql/
stealthql-export/
backups/
Backup and restore are explicit:
npx stealthql backup --out backups/my-app
npx stealthql ledger verify
npx stealthql restore --from backups/my-app --force
npx stealthql migrate preview
npx stealthql migrate apply
Share and spreadsheet edits land in a proposal inbox. The dashboard shows pending proposals, and the API exposes accept/reject endpoints so CSV imports never mutate source data without approval. Public share portals should be opened with a share_token URL once; the runtime then redirects to a clean URL and uses a share-scoped HttpOnly cookie for proposal, export, and import requests. Hosted share pages should not post raw actor values.
Storage buckets are now policy-enforced locally:
npx stealthql storage put projectFiles --actor alice --key docs/a.txt --file a.txt --org-id org_acme
npx stealthql storage list projectFiles --actor alice
npx stealthql storage get projectFiles --actor alice --key docs/a.txt --out a.txt
storage get requires --out by default. Use --stdout only when intentionally piping raw bytes.
Production side-effect ports can run live by pointing them at explicit webhooks:
$env:STEALTHQL_EMAIL_WEBHOOK_URL="https://example.com/email"
$env:STEALTHQL_WEBHOOK_URL="https://example.com/webhook"
$env:STEALTHQL_PAYMENT_WEBHOOK_URL="https://example.com/payment"
$env:STEALTHQL_WEBHOOK_ALLOWED_HOSTS="example.com"
Production outbound URLs are SSRF-guarded. Webhook ports require public HTTPS targets on port 443 and block loopback, private ranges, link-local metadata, URL credentials, non-HTTP schemes, IPv4 encoding tricks, IPv6-mapped private IPs, IDN/punycode hostnames, wildcard DNS helpers, and DNS rebinding. Optional host allowlists are available through STEALTHQL_EMAIL_WEBHOOK_ALLOWED_HOSTS, STEALTHQL_WEBHOOK_ALLOWED_HOSTS, and STEALTHQL_PAYMENT_WEBHOOK_ALLOWED_HOSTS. Private webhook targets are not allowed in production port delivery; use an explicit internal service path instead of relaxing SSRF checks. The Next proxy also validates server-only STEALTHQL_URL; it allows loopback for local/single-Droplet mode and public HTTPS for split deployments, while blocking metadata/private targets unless STEALTHQL_ALLOW_PRIVATE_PROXY_URLS=true is deliberately set.
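For intuition, a drastically simplified sketch of the static URL-shape checks follows. Full protection also requires resolving and pinning DNS (to stop rebinding), which string checks cannot provide; every rule below is an illustrative assumption, not StealthQL's guard. Note that the WHATWG URL parser already normalizes decimal/hex IPv4 tricks into dotted-quad hostnames, which is why a literal-IP check after parsing catches them.

```javascript
// Illustrative sketch of static SSRF checks on a webhook target URL.
// Returns null when the URL passes, or a string describing why it fails.
function checkWebhookUrl(raw, allowedHosts = null) {
  let url;
  try { url = new URL(raw); } catch { return "unparseable URL"; }
  if (url.protocol !== "https:") return "only https is allowed";
  if (url.port && url.port !== "443") return "only port 443 is allowed";
  if (url.username || url.password) return "URL credentials are not allowed";
  const host = url.hostname;
  if (host === "localhost" || host.endsWith(".localhost")) return "loopback host";
  // The URL parser normalizes encodings like 0x7f000001 to 127.0.0.1 first,
  // so rejecting all literal IPv4 hosts also rejects the encoding tricks.
  if (/^\d+(\.\d+){0,3}$/.test(host)) return "literal IPv4 host";
  if (host.startsWith("[")) return "literal IPv6 host";
  if (host.startsWith("xn--") || host.includes(".xn--")) return "punycode/IDN hostname";
  if (allowedHosts && !allowedHosts.includes(host)) return "host not in allowlist";
  return null;
}
```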
Production Auth Profile
Auth assumptions are part of the capsule, not deployment folklore. Add or edit authConfig in stealth.auth.js:
authConfig: {
mode: "capsule-native",
sessions: {
cookieName: "stealth_session",
sameSite: "Lax",
secureCookies: true,
accessTokenMinutes: 60,
refreshTokenDays: 30,
refreshTokenRotation: true
},
csrf: {
requireOriginForUnsafeMethods: true
},
production: {
requireHttps: true,
allowedOrigins: ["https://your-app.example"]
},
oauth: {
allowedCallbackOrigins: ["https://your-app.example"]
}
}
The compiler rejects unsafe profile values. npx stealthql deploy self blocks production readiness until secure cookies, same-origin CSRF gating, HTTPS requirements, refresh-token rotation, and explicit production origins are configured.
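The kind of validation such a compiler gate performs over this profile can be sketched as below; the function and its rule set are hypothetical, mirroring the requirements just listed rather than StealthQL's actual validator.

```javascript
// Hypothetical sketch of compiler-style checks over a production auth profile.
// Returns a list of errors; an empty list means the profile passes these checks.
function validateAuthConfig(cfg) {
  const errors = [];
  if (!cfg.sessions?.secureCookies) errors.push("secureCookies must be true in production");
  if (!cfg.sessions?.refreshTokenRotation) errors.push("refresh tokens must rotate");
  if (!cfg.csrf?.requireOriginForUnsafeMethods) errors.push("CSRF origin gating is required");
  if (!cfg.production?.requireHttps) errors.push("production must require HTTPS");
  const origins = cfg.production?.allowedOrigins ?? [];
  if (origins.length === 0) errors.push("explicit production origins are required");
  for (const origin of origins) {
    if (!origin.startsWith("https://")) errors.push(`non-HTTPS origin: ${origin}`);
  }
  return errors;
}
```

Gating deployment on an empty error list is what lets auth assumptions live in the capsule instead of deployment folklore.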
What Exists Now
This is not a hosted backend service yet. It is the runtime-system skeleton:
- A deterministic capsule compiler.
- A signed capsule hash.
- Bundled function source with live-source hash verification in local dev.
- A generated local client in .stealthql/client.js.
- An ownership export command that copies source definitions, capsule, generated client, local state, and ledger into a portable bundle.
- A local materializer that runs reads and writes through PGlite, with local-state.json kept in the private runtime-data directory as an export/debug mirror.
- Durable local PGlite state under the private runtime-data directory.
- Backup, restore, and ledger hash-chain verification commands.
- Migration preview/apply commands for additive schema changes, with destructive changes blocked unless forced.
- Policy-enforced local storage materialization with file blobs and metadata included in backups.
- Configurable production side-effect ports for email, SMS, webhooks, payments, and policy-guarded AI calls.
- A self-deploy readiness command for personal beta deployments.
- A local Auth Kernel with fake sessions, magic links, recovery codes, invites, test users, share tokens, bearer-token resolution, and auth audit events.
- A production-safe Actor Resolver boundary: non-local modes reject raw actor request values unless a verified credential is provided.
- A stress harness that runs auth, SQL reads, concurrent mutations, shares, HTTP routes, and ledger hash-chain verification in a temporary capsule copy.
- A one-command Next.js setup path that adds a SaaS database/auth blueprint, .mjs capsule files, scripts, env defaults, a same-origin API proxy, a demo page, and local database materialization.
- A local database setup wizard at /wizard for template-based or custom table/column setup with secure generated auth, policies, seeds, and tests.
- A machine-readable AI schema import workflow through STEALTHQL_AGENT.md, stealth.schema.request.json, and stealthql schema apply --test.
- Schema-declared indexes emitted into the local materializer.
- Schema-declared full-text search indexes emitted as PGlite/Postgres GIN indexes, exposed through stealth.search(...) and /_stealthql/search.
- Read policies compiled into SQL predicates instead of post-query JS filters.
- Prepared read plan caching and result caching with table invalidation.
- Shape-first reads through /_stealthql/shape?shape=myProjects&actor=alice.
- Ledger-backed table invalidation for cached query results.
- A share layer compiled into the capsule through stealth.shares.js.
- Local table shares as grids and row shares as record pages.
- Read-only and proposal share modes.
- Share field projection, explicit masks, expiration, runtime revocation, access audit, and proposal diff events.
- csvRoundTrip: true shares with CSV export/import for Google Sheets and Excel workflows, validated into proposal events.
- Compliance evidence packs compiled from stealth.compliance.*, the capsule, policy metadata, and the hash-chained ledger.
- AI port enforcement through ctx.ports.ai.complete/embed/classify, with provider-class placement controls, redacted extracts, and ledger-recorded blocked attempts.
- A dashboard proposal inbox with approve/reject actions for shared and spreadsheet edits.
- Query profiles showing materializer, duration, rows scanned/returned, cache hit, plan hit, and index/search estimate.
- A constrained policy evaluator shared by reads, writes, functions, tests, sync declarations, and storage declarations.
- A SQL adapter for simple SELECT, INSERT, UPDATE, and DELETE statements over the same policy/mutation runtime.
- A security test runner covering cross-tenant access, field write policies, SQL policy bypass attempts, Unicode/control-character fuzz cases, duplicate-key/duplicate-column cases, prototype pollution, dashboard XSS rendering, local CORS, body limits, and event log access.
- An append-only ledger in the private runtime-data directory.
- Ledger redaction driven by the data visibility contract.
- Simulated side-effect ports for email and webhooks.
- A local dashboard.
The current local materializer uses persistent PGlite as the query engine and mirrors state to JSON for export/debug outside the repo.
