EventDBX
You’ll appreciate this database system. EventDBX is extremely fast, and it lets you spend less time designing schemas and more time writing the code that drives your application.
Overview
EventDBX is an event-sourced, NoSQL write-side database designed to provide immutable, append-only storage for events across various domains. It is ideal for applications requiring detailed audit trails for compliance, complex business processes involving states, and high data integrity levels. The core engine focuses on the write side of CQRS—capturing and validating events, persisting aggregate state, and ensuring integrity with Merkle trees.
The companion plugin framework turns those events into durable jobs and delivers them to other services, letting external systems specialise in the read side. Whether you need to hydrate a search index, stream to analytics, feed caches, or trigger workflows, plugins let you extend EventDBX without altering the write path. Each plugin chooses the payload shape it needs (event-only, state-only, schema-only, or combinations) while the queue guarantees delivery and backoff.
Getting Started
Follow the steps below to spin up EventDBX locally. You can run every command with npx (no global install required); if you're contributing to the project, clone the repository and work from the repo root instead.
The CLI installs as dbx. Older releases exposed an eventdbx alias, but the primary command is now dbx.
Start the server
```bash
npm install eventdbx -g
dbx start --foreground
```

- Omit `--foreground` to daemonise the process.
- Use `--data-dir <path>` to override the default `$HOME/.eventdbx` directory.
- Restriction (schema enforcement) defaults to `default`; switch to `--restrict=off` for permissive prototyping or `--restrict=strict` to require declared schemas on every write.
Switch domains (optional)
```bash
dbx checkout -d herds
```

- New installs default to the `default` domain, which stores data directly under the configured `data_dir`. `dbx checkout -d <domain>` (or `dbx checkout <domain>`) changes the active domain and creates isolated storage in `<data_dir>/domains/<domain>` so you can group aggregates and plugins per bounded context.
- This is useful when you're working on two different projects and need to switch between domain contexts.
- Domain names are case-insensitive, must begin with a letter or number, and may include dashes or underscores. Re-running the command for the current domain is a no-op.
Define a schema (recommended when running in restricted mode)
```bash
dbx schema create person \
  --events person_created,person_updated \
  --snapshot-threshold 100
```

Omit `--snapshot-threshold` to inherit the default configured in `config.toml` (if any).

EventDBX stores schemas in `~/.eventdbx/data/schemas.json` (or the `data_dir` you configured). Inspect the full definition at any time with `dbx schema <aggregate>`; the command prints the JSON the server enforces. For example, a `person` aggregate that validates email format and enforces name lengths can be declared as:

```bash
dbx schema create person --events person_created,person_updated
dbx schema person
```

```json
{
  "person": {
    "aggregate": "person",
    "snapshot_threshold": null,
    "locked": false,
    "field_locks": [],
    "hidden": false,
    "hidden_fields": [],
    "column_types": {
      "email": { "type": "text", "required": true, "format": "email" },
      "name": { "type": "text", "rules": { "length": { "min": "1", "max": "64" } } }
    },
    "events": {
      "person_created": { "fields": ["first_name", "last_name"] },
      "person_updated": { "fields": [] }
    },
    "created_at": "2025-10-26T22:25:24.028129Z",
    "updated_at": "2025-10-26T22:27:25.967615Z"
  }
}
```

- `column_types` declare data types, validation formats, and rule blocks per field. Rules can enforce length, numeric ranges, regexes, or cross-field matches.
- `events.<event>.fields` restrict the properties an event may set; leaving the list empty keeps the event permissive.
- `field_locks` and `hidden_fields` control which fields can be updated or returned in aggregate detail calls.
- `locked: true` freezes the schema to prevent further event writes until it is unlocked.
When you need hot reloads or historical context, snapshot schema changes per tenant:
- `dbx tenant schema publish <tenant> [--activate] [--reason <text>]` captures the current `schemas.json`, writes it to `schemas/versions/<id>.json`, and records metadata in `schemas/schema_manifest.json` under the tenant’s data directory. (Omit `<tenant>` to target whichever domain is currently active.) Prefer `dbx schema publish …` if you’re already in the schema workflow—the commands are identical and simply default the tenant from the active domain.
- `dbx tenant schema history <tenant> [--json] [--audit]` prints every recorded version plus the audit trail of publish/activate/rollback events.
- `dbx tenant schema diff <tenant> --from <version> --to <version> [--json] [--style patch|unified|split] [--color auto|always|never]` emits either a JSON Patch, a GitHub-style unified diff, or a side-by-side split view. Use `--color always` to force green additions / red removals (the default `auto` enables color only when stdout is a TTY).
- `dbx tenant schema activate|rollback <tenant> --version <id>` advances or rewinds the active pointer. Include `--no-reload` if the daemon is offline; otherwise the CLI tells the server to evict and reload that tenant’s schema cache immediately.
- `dbx tenant schema reload <tenant>` forces the running daemon to drop its cached schema/context for that tenant—useful after manual edits or when you disable automatic reloads.
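For example, a publish-and-inspect pass might look like the following sketch (the tenant name is a placeholder, and version ids come from the `history` output):

```bash
dbx tenant schema publish acme --activate --reason "initial rollout"
dbx tenant schema history acme --audit
dbx tenant schema diff acme --from <older-id> --to <newer-id> --style unified
dbx tenant schema rollback acme --version <older-id>
```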
Issue a token for CLI access
```bash
dbx token generate --group admin --user jane --expiration 3600
```

- Add `--tenant <id>` (repeat the flag for multiple tenants) to bind the token to specific tenant ids. The server rejects any request where the supplied `tenantId` does not appear in the token’s claims.
- Need the root bootstrap credential instead? Run `dbx token bootstrap` to ensure `~/.eventdbx/cli.token` exists, or append `--stdout` to print a one-off token without persisting it (pass `--persist` alongside `--stdout` if you still want the file updated). Bootstrap tokens expire after two hours by default; supply `--ttl <seconds>` (for example `--ttl 0` to disable) if you must override the expiration.
Append an event
When the server is running, the CLI proxies writes through the control socket automatically. Pass a token with `--token` or set `EVENTDBX_TOKEN` to reuse your own credentials; otherwise the CLI mints a short-lived token for the call.

```bash
dbx aggregate apply person p-002 person_updated \
  --field name="Jane Doe" \
  --field status=active
```

If the server is stopped, the same CLI command writes to the local RocksDB store directly.
Inspect recent history at any point with `dbx events --filter 'payload.status = "active"' --sort version:desc --take 5`, or drill into the payload of a specific event via `dbx events <snowflake_id> --json`.
Replicate a domain (optional)
```bash
dbx checkout -d remote1 --remote <host>:6363 --token "<remote_token>"
```

`dbx checkout` stores remote settings per domain (`remote.json` under the domain data directory). You can re-run the command with just `--remote` or `--token` to rotate either value. When the remote hosts multiple tenants, pass `--remote-tenant <id>` so subsequent `dbx push`/`dbx pull` calls know which tenant to target.

Push schemas first so the destination validates incoming events with the same rules:

```bash
dbx push schema remote1
dbx push schema remote1 --publish --publish-reason "rollout #42"
```

Add `--publish` to snapshot and activate the schemas on the remote immediately (with optional `--publish-label`, `--publish-force`, or `--publish-no-reload` flags).

Mirror domain data to the remote. Limit the sync to a specific aggregate type or identifier when you need targeted replication:

```bash
dbx push remote1
dbx push remote1 --aggregate invoice
dbx push remote1 --aggregate ledger --id ledger-001
```

Pull data back down when the remote captured new events. The pull command performs the same integrity checks (version counts, Merkle root parity) before importing:

```bash
dbx pull remote1
dbx pull remote1 --aggregate invoice
```

Pushing or pulling aborts if either side has diverging history for an aggregate. Use the `--concurrency <threads>` flag to tune throughput when replicating large domains.

Automate replication with the built-in scheduler:

```bash
dbx watch remote1 --mode push --interval 120 --background
dbx watch remote1 --mode bidirectional --aggregate ledger --run-once
dbx watch remote1 --skip-if-active
dbx watch status remote1
```

`watch` loops forever (or until `--run-once`), triggering a push, pull, or bidirectional cycle every `--interval` seconds. Pass `--background` to daemonize, `--skip-if-active` to avoid overlapping runs when another watcher is working on the same domain, and inspect persisted state at any time with `dbx watch status <domain>` (use `--all` for a summary of every watcher).
Manage tenants (multi-tenant mode)
Enable multi-tenant mode in config.toml (set [tenants] multi_tenant = true) to hash tenants across shard directories. EventDBX automatically hashes tenants when no manual assignment exists, and you can override placements with the dbx tenant commands:
```bash
dbx tenant assign people --shard shard-0003
dbx tenant unassign sandbox
dbx tenant list --json
dbx tenant stats
```

Assignments live in a dedicated RocksDB directory (`tenant_meta/` under your data root). Unassigned tenants continue to hash across the configured shard count.
You now have a working EventDBX instance with an initial aggregate. Explore the Command-Line Reference for the full set of supported operations.
Run with Docker
Use the published image
```bash
# Pull the latest published release
docker pull thachp/eventdbx:tagname

# Start the daemon with persistent storage and required ports
docker run \
  --name eventdbx \
  --detach \
  --publish 7070:7070 \
  --publish 6363:6363 \
  --volume "$PWD/data:/var/lib/eventdbx" \
  thachp/eventdbx:tagname
```

Connect with TypeScript
```ts
// Use the JavaScript/TypeScript client (install via `pnpm add eventdbxjs`).
// Other languages can interact through plugins or HTTP bridges.
import { createClient } from 'eventdbxjs';

const client = createClient({
  ip: process.env.EVENTDBX_HOST,
  port: Number(process.env.EVENTDBX_PORT) || 6363,
  token: process.env.EVENTDBX_TOKEN,
  verbose: true, // should match the verbose_responses setting in the server config file
});

await client.connect();

// create an aggregate
const snapshot = await client.create('person', 'p-110', 'person_registered', {
  payload: { name: 'Jane Doe', status: 'active' },
  metadata: { '@source': 'seed-script' },
  note: 'seed aggregate',
});
console.log('created aggregate version', snapshot.version);

// apply a follow-up event
await client.apply('person', 'p-110', 'person_contact_added', {
  payload: { name: 'Jane Doe', status: 'active' },
  metadata: { note: 'seed data' },
});

// list the events recorded for p-110
const history = await client.events('person', 'p-110');
console.log('event count:', history.length);
```

Features
- Flexible JSON payloads: Events accept arbitrary JSON payloads; scalar values are normalized into strings for state tracking, while structured objects remain fully queryable.
- Immutable Data Structure: Once data is entered into EventDBX, it becomes immutable, meaning it cannot be altered or deleted. This characteristic is crucial for applications where the accuracy and traceability of historical data are paramount, such as medical records, financial transactions, and supply chain management. Data can be archived, moving from short-term to long-term storage, but cannot be deleted.
- Event Sourcing and Replay: EventDBX is built on the principle of event sourcing, storing all changes to the data as a sequence of events. This allows for the complete replay of events to reconstruct the database's state at any point in time, thereby enhancing data recovery and audit capabilities. Unlike traditional databases that execute update statements to modify data, this system is event-driven. Aggregate state changes are defined in the event object, allowing these events to be replayed at any time to reconstruct the aggregate's current state.
- Merkle Tree Integration: Each aggregate in EventDBX is associated with a Merkle tree of events, enabling verification of data integrity. The Merkle tree structure ensures that any data tampering can be detected, offering an additional security layer against data corruption.
- Built-in Audit Trails: EventDBX automatically maintains a comprehensive audit trail of all transactions, a feature invaluable for meeting compliance and regulatory requirements. It provides transparent and tamper-evident records. During audits, administrators can issue specific tokens to auditors to access and review specific aggregate instances and all relevant events associated with those instances.
- Extensible Read Models via Plugins: Every event that lands in the write-side store can be dispatched to external systems through the plugin job queue. Configure plugins to receive only the slices they care about (event, state snapshot, schema), and let specialized services—search, analytics, personalization, alerting—build optimized read models without touching the write path.
- Remote Replication: Push or pull domain data between configured remotes with integrity checks on aggregate versions and Merkle roots, keeping replicas aligned without rewriting history. Automate recurring syncs with `dbx watch` (a looping scheduler with `--interval`, `--run-once`, and optional `--background`).
- Built-in Observability: A Prometheus-compatible `/metrics` endpoint reports HTTP traffic and plugin queue health so you can wire EventDBX into Grafana, Datadog, or any other monitoring stack out of the box.
- Security with Token-Based Authorization: EventDBX implements token-based authorization to manage database access. Tokens are signed with an Ed25519 key pair stored under `[auth]` in `config.toml`; keep `private_key` secret and distribute `public_key` to services that need to validate them. This approach allows for precise control over who can access and modify data, protecting against unauthorized changes.
- Encrypted Payloads & Secrets at Rest: Event payloads, aggregate snapshots, and `tokens.json` are encrypted transparently when a DEK is configured. Metadata such as aggregate identifiers, versions, and Merkle roots remain readable so plugins and integrity checks keep working without additional configuration.
- Schema validation: Enforce aggregate contracts with built-in formats (`email`, `url`, `wgs_84`, and more), rule blocks for length, range, regex, required fields, and nested `properties`, plus strict/relaxed enforcement modes.
- Powered by RocksDB and Rust: At its core, EventDBX utilizes RocksDB for storage, taking advantage of its high performance and efficiency. The system is developed in Rust, known for its safety, efficiency, and concurrency capabilities, ensuring that it is both rapid and dependable.
Restriction Modes
EventDBX can run in three validation modes, tuned for different phases of development:
- Off (`--restrict=off` or `--restrict=false`): Ideal for prototyping and rapid application development. Event payloads bypass schema validation entirely, letting you iterate quickly without pre-registering aggregates, tables, or column types.
- Default (`--restrict=default` or `--restrict=true`): Validates events whenever a schema exists but allows aggregates without a declared schema. This matches prior behaviour and suits teams rolling out schema enforcement incrementally.
- Strict (`--restrict=strict`): Requires every aggregate to have a schema before events can be appended. Missing schemas fail fast with a clear error so production environments stay aligned with their contracts.
Command-Line Reference
EventDBX ships a single dbx binary. Every command accepts an optional --config <path> to point at an alternate configuration file.
Server lifecycle
- `dbx start [--port <u16>] [--data-dir <path>] [--foreground] [--restrict[=<off|default|strict>]]`
  Launches the server. The default mode validates when schemas exist; set `--restrict=off` to bypass validation or `--restrict=strict` to require schemas up front.
- `dbx stop`
  Stops the running daemon referenced by the PID file.
- `dbx status`
  Prints the current port, PID, uptime, and whether restriction is enabled.
- `dbx restart [start options…]`
  Stops the existing daemon (if any) and restarts it with the provided options.
- `dbx destroy [--yes]`
  Removes the PID file, data directory, and configuration file after confirmation (or immediately with `--yes`).
- `dbx checkout -d <domain>`
  Switches the active domain context for the CLI. Uses the `default` domain by default and creates per-domain data roots under `<data_dir>/domains/<domain>`.
- `dbx merge --from <domain> [--into <domain>] [--overwrite-schemas]`
  Copies events (and their derived state) plus schema definitions from the source domain into the target domain (defaults to the active domain). Conflicting schemas require `--overwrite-schemas`; existing aggregates in the target abort the merge to prevent data loss.
Configuration
- `dbx config [--port <u16>] [--data-dir <path>] [--cache-threshold <usize>] [--dek <base64>] [--list-page-size <usize>] [--page-limit <usize>] [--plugin-max-attempts <u32>] [--snapshot-threshold <u64>] [--clear-snapshot-threshold]`
  Persists configuration updates. Run without flags to print the current settings. The first invocation must include a data encryption key via `--dek` (32 bytes of base64). `--list-page-size` sets the default page size for aggregate listings (default 10), `--page-limit` caps any requested page size across list and event endpoints (default 1000), and `--plugin-max-attempts` controls how many retries are attempted before an event is marked dead (default 10).
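For example, one way to satisfy the first-run DEK requirement is to generate 32 random bytes and base64-encode them (a sketch, assuming `openssl` is available):

```bash
# First invocation: supply a 32-byte, base64-encoded data encryption key
dbx config --dek "$(openssl rand -base64 32)"

# Later invocations tweak individual settings
dbx config --list-page-size 25 --plugin-max-attempts 5
```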
Tokens
- `dbx token generate --group <name> --user <name> [--expiration <secs>] [--limit <writes>] [--keep-alive]`
  Issues a new token tied to a Unix-style group and user.
- `dbx token list`
  Lists all tokens with status, expiry, and remaining writes.
- `dbx token revoke --token <value>`
  Revokes a token immediately.
- `dbx token refresh --token <value> [--expiration <secs>] [--limit <writes>]`
  Extends the lifetime or write allowance of an existing token.
Schemas
- `dbx schema create <name> --events <event1,event2,...> [--snapshot-threshold <u64>]`
- `dbx schema add <name> --events <event1,event2,...>`
- `dbx schema remove <name> <event>`
- `dbx schema annotate <name> <event> [--note <text>] [--clear]`
- `dbx schema list`
Schemas are stored on disk; when restriction is default or strict, incoming events must satisfy the recorded schema (and strict additionally requires every aggregate to declare one).
Aggregates
- `dbx aggregate create --aggregate <type> --aggregate-id <id> --event <name> [--field KEY=VALUE...] [--payload <json>] [--metadata <json>] [--note <text>] [--token <value>] [--json]`
- `dbx aggregate apply --aggregate <type> --aggregate-id <id> --event <name> --field KEY=VALUE... [--payload <json>] [--stage] [--token <value>] [--note <text>]`
  Appends an event immediately—use `--stage` to queue it for a later commit.
- `dbx aggregate patch --aggregate <type> --aggregate-id <id> --event <name> --patch <json> [--stage] [--token <value>] [--metadata <json>] [--note <text>]`
  Applies an RFC 6902 JSON Patch to the current state and persists the delta as a new event.
- `dbx aggregate list [--cursor <token>] [--take <n>] [--stage]`
  Lists aggregates with version, Merkle root, and archive status; pass `--stage` to display queued events instead.
- `dbx aggregate get --aggregate <type> --aggregate-id <id> [--version <u64>] [--include-events]`
- `dbx aggregate verify --aggregate <type> --aggregate-id <id>`
- `dbx aggregate snapshot --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate archive --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate restore --aggregate <type> --aggregate-id <id> [--comment <text>]`
- `dbx aggregate remove --aggregate <type> --aggregate-id <id>`
  Removes an aggregate that has no events (version still 0).
- `dbx aggregate commit`
  Flushes all staged events in a single atomic transaction.
Aggregate sorting currently accepts aggregate_type, aggregate_id, archived, created_at, and updated_at fields (with optional :asc/:desc suffixes). Sorting by version has been removed to keep the CLI aligned with the indexed columns.
Cursor pagination tokens are human-readable: active aggregates encode as a:<aggregate_type>:<aggregate_id> (r: for archived), and event cursors append the event version (a:<aggregate_type>:<aggregate_id>:<version>). Grab the last row from a page, form its token, and pass it via --cursor to resume listing.
Timestamp-sorted aggregate listings use ts:<field>:<order>:<scope>:<timestamp_ms>:<aggregate_type>:<aggregate_id> tokens (field is created_at or updated_at, order is asc|desc, and scope is a or r). Take the timestamp from the last row (in milliseconds) to build the cursor for the next page; you can also pass the shorthand ts:<aggregate_type>:<aggregate_id> and the CLI/control clients (including eventdbxjs) will derive the rest when combined with --sort created_at|updated_at.
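As an illustration (the aggregate type and id are hypothetical), resuming a listing from the last row of a page looks like:

```bash
dbx aggregate list --take 10
# suppose the last row was person / p-042; active aggregates use the a: prefix
dbx aggregate list --take 10 --cursor a:person:p-042
# timestamp-sorted listings accept the shorthand token and derive the full ts:… form
dbx aggregate list --take 10 --sort created_at:desc --cursor ts:person:p-042
```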
Events
- `dbx events [--aggregate <type>] [--aggregate-id <id>] [--cursor <token>] [--take <n>] [--filter <expr>] [--sort <field[:order],...>] [--json] [--include-archived|--archived-only]`
  Streams events with optional aggregate scoping, SQL-like filters (e.g. `payload.status = "open" AND metadata.note LIKE "retry%"`), and multi-key sorting. Prefix fields with `payload.`, `metadata.`, or `extensions.` to target nested JSON; `created_at`, `event_id`, `version`, and other top-level keys are also available.
- `dbx events --event <snowflake_id> [--json]`
  Displays a single event by Snowflake identifier, including payload, metadata, and extensions. You can also omit `--event` when the first positional argument is a valid Snowflake id.
- `dbx aggregate export [<type>] [--all] --output <path> [--format csv|json] [--zip] [--pretty]`
  Writes the current aggregate state (no metadata) as CSV or JSON. Exports default to one file per aggregate type; pass `--zip` to bundle the output into an archive.
--payload lets you provide the full event document explicitly; use dbx aggregate patch when you need to apply an RFC 6902 JSON Patch server-side.
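For contrast with `--payload`, a server-side JSON Patch call might look like this sketch (the aggregate, id, and path are illustrative):

```bash
dbx aggregate patch --aggregate person --aggregate-id p-002 --event person_updated \
  --patch '[{"op": "replace", "path": "/status", "value": "inactive"}]'
```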
Aggregate Operation Costs
Every aggregate command ultimately turns into a small set of RocksDB reads or writes. Ballpark complexities (assume hot data in cache):
- `aggregate list`: iterates over the requested page of aggregates. Runtime is O(k) in the page size (default `list_page_size`) when you follow the natural key order (type → id) or stick to single-field timestamp sorts; arbitrary filters or multi-field sorts fall back to an O(N) scan because predicates are evaluated in memory after loading each aggregate.
- `aggregate get`: one state lookup (O(log N)) plus optional event scanning when you request `--include-events` or `--version` (adds O(Eₐ) for that aggregate’s events) and lightweight JSON parsing of the returned map.
- `aggregate select`: uses the same state lookup as `get` (O(log N)) and walks each requested dot path in-memory; no additional RocksDB reads are taken, so cost is dominated by the selected payload size.
- `aggregate apply`: validates the payload, merges it into the materialized state, and appends the event in a single batch write. Time is proportional to the payload size being processed.
- `aggregate patch`: reads the current state (same cost as `get`), applies the JSON Patch document, then appends the result—effectively O(payload + patch_ops).
- `aggregate verify`: recomputes the Merkle root for the aggregate’s events (O(Eₐ)).
In practice those costs are dominated by payload size and the number of events you ask the CLI to stream; hot aggregates tend to stay in the RocksDB block cache, keeping per-operation latency close to constant.
| Operation | Time complexity | Notes |
| ------------------ | ------------------------ | ---------------------------------------------------------------- |
| aggregate list | O(k) / O(N) | O(k) for cursor-order or timestamp-indexed sorts; O(N) when filters/sorts can’t use an index. |
| aggregate get | O(log N + Eₐ + P) | Single state read plus optional event scan and JSON parsing. |
| aggregate select | O(log N + P_selected) | Same state read as get; dot-path traversal happens in memory. |
| aggregate apply | O(P) | Payload validation + merge + append in one RocksDB batch. |
| aggregate patch | O(log N + P + patch_ops) | Reads state, applies JSON Patch, then appends the patch payload. |
| aggregate verify | O(Eₐ) | Recomputes the Merkle root across the aggregate’s events. |
Staged events are stored in .eventdbx/staged_events.json. Use aggregate apply --stage to add entries to this queue, inspect them with aggregate list --stage, and persist the entire batch with aggregate commit. Events are validated against the active schema whenever restriction is default or strict; the strict mode also insists that a schema exists before anything can be staged. The commit operation writes every pending event in one RocksDB batch, guaranteeing all-or-nothing persistence.
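A staging pass might look like the following sketch (aggregate ids are hypothetical):

```bash
# queue two writes without committing them
dbx aggregate apply --aggregate person --aggregate-id p-010 --event person_updated --field status=active --stage
dbx aggregate apply --aggregate person --aggregate-id p-011 --event person_updated --field status=blocked --stage

# review the queue, then persist everything in one atomic batch
dbx aggregate list --stage
dbx aggregate commit
```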
Plugins
- `dbx plugin install <plugin> <version> --source <path|url> [--bin <file>] [--checksum <sha256>] [--force]`
- `dbx plugin config tcp --name <label> --host <hostname> --port <u16> [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config http --name <label> --endpoint <host|url> [--https] [--header KEY=VALUE]... [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config log --name <label> --level <trace|debug|info|warn|error> [--template "text with {aggregate} {event} {id}"] [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config process --name <instance> --plugin <id> --version <semver> [--arg <value>]... [--env KEY=VALUE]... [--working-dir <path>] [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin config capnp --name <label> --host <hostname> --port <u16> [--payload <all|event-only|state-only|schema-only|event-and-schema>] [--disable]`
- `dbx plugin enable <label>`
- `dbx plugin disable <label>`
- `dbx plugin remove <label>`
- `dbx plugin test`
- `dbx plugin list`
- `dbx queue`
- `dbx queue clear`
- `dbx queue retry [--event-id <job-id>]`
- `dbx plugin replay <plugin-name> <aggregate> [<aggregate_id>] [--payload-mode <all|event-only|state-only|schema-only|event-and-schema|extensions-only>]`
Plugins consume jobs from a durable RocksDB-backed queue. EventDBX enqueues a job for every aggregate mutation, and each plugin can opt into the data it needs—event payloads, materialized state, schemas, or combinations thereof. Clearing dead entries prompts for confirmation to avoid accidental removal. Manual retries run the failed jobs immediately; use --event-id to target a specific entry.
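For instance, wiring events into a hypothetical read-model service over HTTP could look like this sketch (the endpoint and header values are placeholders):

```bash
dbx plugin config http --name readmodel --endpoint https://example.com/ingest \
  --header Authorization="Bearer <token>" --payload all
dbx plugin enable readmodel
dbx plugin test
dbx queue
```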
Upgrades
- `dbx upgrade [<version>|latest] [--no-switch] [--print-only]`
  Downloads the selected release into `~/.eventdbx/versions/<target>/<tag>/` and switches the active `dbx` binary unless `--no-switch` is supplied. The CLI looks up releases from GitHub, so you can use `latest` or supply a tag like `v1.13.2` (omitting the leading `v` also works). Versions lower than `v1.13.2` are rejected because upgrades are unsupported before that release. Use `--print-only` to preview the download and activation steps without performing them. Shortcut syntax `dbx upgrade@<version>` resolves through the same lookup.
- `dbx upgrade use <version> [--print-only]`
  Switches to an installed release without re-downloading it so you can hop between versions like `nvm`. The command references the locally cached binary; on Windows the switch must still be completed manually by replacing the executable.
- `dbx upgrade installed [--json]`
  Lists every cached CLI release and marks the one currently active. Use `--json` for machine-readable output that includes the detected target triple.
- `dbx upgrade --suppress <version>`
  Suppresses the upgrade reminder for the provided release tag. Combine it with `latest` to ignore the current release until a newer one ships. Use `dbx upgrade --clear-suppress` to re-enable reminders for all releases.
- `dbx upgrade list [--limit <n>] [--json]`
  Queries the GitHub releases API and prints the most recent versions. Installed releases are annotated with `[installed]` and the active release with `[active]`. The default limit is 20; use `--json` for a machine-readable list, and call `dbx upgrade@list` for the same shortcut.
The CLI checks for new releases on startup and prints a reminder when a newer version is available. Set `DBX_NO_UPGRADE_CHECK=1` to bypass the check for automation scenarios.
Maintenance
- `dbx backup --output <path> [--force]`
  Creates a compressed archive with the entire EventDBX data directory and configuration. Stop the server before running a backup to avoid partial snapshots.
- `dbx restore --input <path> [--data-dir <path>] [--force]`
  Restores data from a backup archive. Use `--data-dir` to override the stored location, and `--force` to overwrite non-empty destinations. The server must be stopped before restoring.
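A typical backup-and-restore pass might look like this sketch (the archive path and target data directory are illustrative):

```bash
dbx stop
dbx backup --output ./backups/eventdbx-2025-01-01.archive
dbx restore --input ./backups/eventdbx-2025-01-01.archive --data-dir /srv/eventdbx --force
dbx start --data-dir /srv/eventdbx
```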
Plugins fire after every committed event to keep external systems in sync. The built-in plugin types deliver events through different channels:

- TCP – Writes a single-line JSON `EventRecord` to the configured socket (a minimal consumer sketch follows this list).
- HTTP – POSTs the `EventRecord` JSON to an endpoint with optional headers; add `--https` during configuration to force HTTPS when the endpoint lacks a scheme.
- Log – Emits a formatted line via `tracing` at the configured level. By default: `aggregate=<type> id=<id> event=<event>`.
- Process – Launches an installed plugin binary as a supervised subprocess and streams events to it over Cap'n Proto. Use this for first-party plugins from the `dbx-plugins` workspace or your own extensions.
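As an illustration only, a TCP plugin target is just a line-oriented TCP server that EventDBX connects to: the host, port, and handler below are hypothetical, and the record shape is the `EventRecord` shown further down.

```ts
// Minimal sketch of a TCP plugin consumer (Node.js + TypeScript, hypothetical port 9100).
// Pair it with: dbx plugin config tcp --name search --host 127.0.0.1 --port 9100
import net from 'node:net';
import readline from 'node:readline';

const server = net.createServer((socket) => {
  // EventDBX writes one JSON EventRecord per line to this socket
  const lines = readline.createInterface({ input: socket });
  lines.on('line', (line) => {
    const record = JSON.parse(line);
    console.log(`got ${record.event_type} for ${record.aggregate_type}/${record.aggregate_id} v${record.version}`);
    // hydrate a search index, cache, or analytics sink here
  });
});

server.listen(9100, '127.0.0.1');
```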
Process plugins are distributed as zip/tar bundles. Install them with dbx plugin install <plugin> <version> --source <path-or-url>—the bundle is unpacked to ~/.eventdbx/plugins/<plugin>/<version>/<target>/, where <target> matches the current OS/architecture (for example, x86_64-apple-darwin). Official bundles live in the dbx-plugins releases; pass the asset URL to --source or point it at a local build while developing. After installation, bind the binary to an instance:
```bash
dbx plugin config process \
  --name search \
  --plugin dbx_search \
  --version 1.0.0 \
  --arg "--listen=0.0.0.0:8081" \
  --env SEARCH_CLUSTER=https://example.com

dbx plugin enable search
```

EventDBX supervises the subprocess, restarts it on failure, and delivers every EventRecord/aggregate snapshot over the stream. Plugins that run as independent services can continue to use the TCP/HTTP emitters instead.
Failed deliveries are automatically queued and retried with exponential backoff. The server keeps attempting until the plugin succeeds or the aggregate is removed, ensuring transient outages do not drop notifications. Use dbx queue to inspect pending/dead event IDs.
Plugin configurations are stored in .eventdbx/plugins.json. Each plugin instance requires a unique --name so you can update, enable, disable, remove, or replay it later. plugin enable validates connectivity (creating directories, touching files, or checking network access) before marking the plugin active. Remove a plugin only after disabling it with plugin disable <name>. plugin replay resends stored events for a single aggregate instance—or every instance of a type—through the selected plugin; include --payload-mode to temporarily override the configured payload shape for the replay.
Need point-in-time snapshots instead of streaming plugins? Use dbx aggregate export to capture aggregate state as CSV or JSON on demand.
Example HTTP/TCP payload (EventRecord):
```json
{
  "aggregate_type": "patient",
  "aggregate_id": "p-001",
  "event_type": "patient-updated",
  "payload": {
    "status": "inactive",
    "comment": "Archived via API"
  },
  "extensions": {
    "@analytics": {
      "correlation_id": "rest-1234"
    }
  },
  "metadata": {
    "event_id": "1234567890123",
    "created_at": "2024-12-01T17:22:43.512345Z",
    "issued_by": {
      "group": "admin",
      "user": "jane"
    }
  },
  "version": 5,
  "hash": "cafe…",
  "merkle_root": "deadbeef…"
}
```

Heads-up for plugin authors
The `dbx-plugins` surfaces now receive `event_id` values as Snowflake strings and may optionally see an `extensions` object alongside `payload`. Update custom handlers to treat `metadata.event_id` as a stringified Snowflake and to ignore or consume the new `extensions` envelope as needed.
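For reference, a handler shape consistent with the example payload above might look like this sketch (TypeScript; field optionality beyond what the example shows is an assumption):

```ts
// Illustrative EventRecord shape derived from the example payload above; optionality is assumed.
interface EventRecord {
  aggregate_type: string;
  aggregate_id: string;
  event_type: string;
  payload: Record<string, unknown>;
  extensions?: Record<string, unknown>;
  metadata: {
    event_id: string; // stringified Snowflake; do not coerce to a JS number
    created_at: string;
    issued_by?: { group: string; user: string };
  };
  version: number;
  hash: string;
  merkle_root: string;
}

function handle(record: EventRecord): void {
  const eventId = record.metadata.event_id; // keep as a string
  const analytics = record.extensions?.['@analytics']; // consume or ignore the extensions envelope
  console.log(eventId, analytics ?? 'no extensions');
}
```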
Column definitions
Each entry in the column_types map declares both a storage type and the rules EventDBX enforces. Supported types:
| Type | Accepted input | Notes |
| -------------------------- | -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
| integer | JSON numbers or strings that parse to a signed 64-bit integer | Rejects values outside the i64 range. |
| float | JSON numbers or numeric strings | Stored as f64; scientific notation is accepted. |
| decimal(precision,scale) | JSON numbers or strings | Enforces total digits ≤ precision and fractional digits ≤ scale. |
| boolean | JSON booleans, 0 / 1, or "true", "false", "1", "0" | Values are normalised to true / false. |
| text | UTF-8 strings | Use length, contains, or regex rules for additional constraints. |
| timestamp | RFC 3339 timestamps as strings | Normalised to UTC. |
| date | YYYY-MM-DD strings | Parsed as a calendar date without a timezone. |
| json | Any JSON value | No per-field validation is applied; use when you want to store free-form payloads. |
| binary | Base64-encoded strings | Decoded to raw bytes before validation; length counts bytes after decoding. |
| object | JSON objects | Enable nested validation via the properties rule (see below). Extra keys not listed remain untouched. |
Rules are optional and can be combined when the target type supports them:
- `required`: the field must be present in every event payload.
- `contains` / `does_not_contain`: case-sensitive substring checks for `text` fields.
- `regex`: one or more regular expressions that `text` fields must satisfy.
- `format`: built-in string validators; choose `email`, `url`, `credit_card`, `country_code` (ISO 3166-1 alpha-2), `iso_8601` (RFC 3339 timestamp), `wgs_84` (latitude/longitude in decimal degrees), `camel_case`, `snake_case`, `kebab_case`, `pascal_case`, or `upper_case_snake_case`.
- `length`: `{ "min": <usize>, "max": <usize> }` bounds the length of `text` (characters) or `binary` (decoded bytes).
- `range`: `{ "min": <value>, "max": <value> }` for numeric and temporal types (`integer`, `float`, `decimal`, `timestamp`, `date`). Boundary values must parse to the column’s type.
- `properties`: nested `column_types` definitions for `object` columns, enabling recursion with the same rule set as top-level fields.
Use dbx schema field <aggregate> <field> … to manage types and rules without editing schemas.json. It can set --type <text|integer|…>, toggle --required/--not-required, enforce --format <email|url|…>, swap --regex / --contains lists, adjust --length-min / --length-max, feed JSON rule blocks via --rules @rules.json or --properties @object_rules.json, and clear definitions (--clear-type, --clear-rules, --clear-format, etc.). Pair it with dbx schema alter <aggregate> <event> to append/remove event field allow-lists or replace them entirely via --set/--clear.
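The exact file format for `--rules`/`--properties` isn't shown here; as a sketch, assuming it mirrors the nested `column_types` map from the schema dump above, a hypothetical `profile_rules.json` for the `profile` object column used below might look like:

```json
{
  "city": { "type": "text", "rules": { "length": { "min": "1", "max": "64" } } },
  "website": { "type": "text", "format": "url" },
  "age": { "type": "integer", "rules": { "range": { "min": "0", "max": "150" } } }
}
```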
```bash
dbx schema field person email --type text --format email --required
dbx schema field person status --regex '^(pending|active|blocked)$'
dbx schema field person profile --type object --properties @profile_rules.json
dbx schema alter person person_created --add first_name,last_name
dbx schema alter person person_updated --set status,comment
dbx schema alter person person_updated --clear
```

Performance Testing
Performance benchmarks and workload scenarios live in the eventdbx-perf repository. Clone that project to review the current load profiles, adjust parameters, or submit new variations as you evaluate throughput and latency trade-offs.
All tests executed on the same host using Docker-based databases, single-thread client, datasets up to 10 M records. Latency ≈ mean operation time; throughput = operations per second (converted from ns → µs).
| Engine | Typical Throughput (ops/s) | Typical Latency (µs) | Scaling Trend | Summary & Observation |
| :------------------ | :------------------------: | :------------------: | :------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| EventDBX | 1 400 – 2 000 | 0.5 – 0.8 | Flat (1 K → 10 M) | RocksDB-based append-only core with hot-aggregate caching keeps performance nearly constant. Excels at write-heavy, event-sourced workloads with verifiable integrity. |
| PostgreSQL 15 | 1 000 – 1 900 | 0.6 – 1.0 | Stable / slightly variable | Strong transactional baseline; planner + WAL sync add moderate overhead. Excellent for mixed OLTP queries but heavier per-event cost. |
| MongoDB 7 | 400 – 1 000 | 1.5 – 2.5 | Gradual decline with size | Flexible JSON-document store with moderate efficiency. Serialization and journaling add overhead; roughly half of EventDBX throughput. |
| SQL Server 2022 | 50 – 180 | 5 – 20 | Drops quickly > 100 K | High latency and lowest throughput; B-tree structure and lock coordination dominate under load. Best suited for transactional consistency, not high-velocity writes. |
EventDBX delivers the highest sustained throughput and lowest latency, outperforming PostgreSQL by ~2×, MongoDB by ~3×, and SQL Server by >10×—while maintaining sub-microsecond response times even at multi-million-record scale.
Contributing
Quick Start
- Report Issues: Found a bug or have a suggestion? Open an issue with detailed information.
- Contribute Code:
- Fork & Clone: Fork the EventDBX repo and clone your fork.
- Branch: Create a branch for your changes from `develop`.
- Develop: Make your changes, adhering to our coding standards. Add or update tests as necessary.
- Commit: Use Conventional Commits for clear, structured commit messages (e.g., `feat: add new feature` or `fix: correct a bug`).
- Pull Request: Submit a pull request (PR) against the `develop` branch of the original repo. Describe your changes and link to any related issues.
Guidelines
- Formatting: A project-wide Prettier configuration lives at `.prettierrc.json`; use it for Markdown/JSON/YAML changes.
- Commit linting: Conventional Commit headers are enforced through `.commitlintrc.json`.
- Code Review: Your PR will be reviewed by the team. Be open to feedback and make requested adjustments.
- Merge: Once approved, your PR will be merged into the project, and you'll be credited as a contributor.
License
© 2025 Patrick Thach and contributors
EventDBX is open source software licensed under the MIT License.
See the LICENSE file in the repository for details.