# @wireio/test-cluster-tool

Core library and CLI for creating, running, and tearing down multi-chain WIRE test clusters. Ships the `wire-test-cluster` binary, process managers for every cluster component, and typed clients for WIRE / Ethereum / Solana.
- Binary: `wire-test-cluster`
- Stack: Node ≥22, `child_process.spawn` + `tree-kill` (no pm2), `ethers`, `@solana/web3.js`, `@coral-xyz/anchor`
- Companion UI: `@wireio/debugging-client-tool-tui` (`wire-debugging-client-tool-tui` bin) — non-destructive live debugger, see "Debugging a running cluster" below.
## Overview

A "cluster" is an on-disk directory plus the long-running processes that operate on it. `wire-test-cluster` owns the full lifecycle:
| Command | What it does |
|---|---|
| create | Build the directory layout, generate keys + genesis + configs, bootstrap every chain (WIRE system contracts, OPP contracts on Anvil, Anchor program on Solana), persist cluster-state.json, then exit. |
| run | Reload cluster-state.json, relaunch every managed process from its saved launch command, expose all endpoints, block until Ctrl+C. |
| destroy | Stop every process, then remove the cluster directory. |
The cluster directory is the single source of truth: executable paths, ports, key material, node layout, and deployed contract addresses all live there. Subsequent runs never re-resolve — they just replay.
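Because everything is replayed from disk, you can inspect what `run` will do without starting anything. A minimal sketch, assuming `cluster-state.json` holds a `nodes` array with `label` and `launchCmd` fields (those field names are illustrative, not documented here):

```ts
// Sketch: inspect what `run` will replay. The shape of cluster-state.json
// (a `nodes` array with `label` / `launchCmd` fields) is an assumption;
// check the real file in your own cluster directory.
import { readFileSync } from "node:fs"
import { join } from "node:path"

const clusterPath = "/data/opt/wire/chains/dev-001"
const state = JSON.parse(
  readFileSync(join(clusterPath, "cluster-state.json"), "utf8")
)

for (const node of state.nodes ?? []) {
  console.log(`${node.label}: ${node.launchCmd}`)
}
```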
## What gets spawned
Per cluster:
- `kiod` — WIRE wallet daemon
- One `nodeop` per producer / bios / batch-operator / underwriter node
- `anvil` (optional — enabled by `--ethereum-path`)
- `solana-test-validator` (optional — enabled by `--solana-path`)
- An embedded debugging HTTP server (Express + JSON-RPC 2.0) that persists OPP envelopes under `<cluster-path>/data/opp-debugging/`. Runs in-process — no separate binary.
Every spawned process writes a pid file (`<dataPath>/<label>.pid`) and a rotating daily log (`<dataPath>/logs/log_YYYYMMDD.log`) — the layout the TUI consumes.
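Since the layout is plain files, a consumer can check liveness without the harness. A minimal sketch following the naming convention above (probing with signal 0 is a standard Node/POSIX no-op liveness check; the path at the bottom is just a placeholder):

```ts
// Probe a managed process the way a pid-file consumer can: read the pid,
// send signal 0 to see if it is still alive, and build today's log path.
import { readFileSync } from "node:fs"
import { join } from "node:path"

function isAlive(dataPath: string, label: string): boolean {
  try {
    const pid = Number(readFileSync(join(dataPath, `${label}.pid`), "utf8").trim())
    process.kill(pid, 0) // throws if the process is gone
    return true
  } catch {
    return false
  }
}

function todaysLog(dataPath: string): string {
  const d = new Date()
  const stamp =
    `${d.getFullYear()}${String(d.getMonth() + 1).padStart(2, "0")}${String(d.getDate()).padStart(2, "0")}`
  return join(dataPath, "logs", `log_${stamp}.log`)
}

console.log(isAlive("/data/opt/wire/chains/dev-001/data/node_00", "node-00"))
console.log(todaysLog("/data/opt/wire/chains/dev-001/data/node_00"))
```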
## Install & Build
From the repo root:
```sh
pnpm install
pnpm --filter @wireio/test-cluster-tool build
```

After this the `wire-test-cluster` bin is linked into `node_modules/.bin/`. Invoke it via `pnpm exec wire-test-cluster …` or the workspace-wide `pnpm wire-test-cluster …`.
## CLI reference

```sh
wire-test-cluster [global-options] <command> [command-options]
```

### Global options
| Flag | Type | Default | Notes |
|---|---|---|---|
| -d, --cluster-path <path> | string | required | Absolute directory for cluster data. Created if missing. |
| --force | boolean | false | When combined with create, remove the existing directory first. |
### `create` options
| Flag | Default | Notes |
|---|---|---|
| --build-path <path> | required | Path to the wire-sysio build directory (contains bin/nodeop, bin/kiod, etc.). |
| -p, --pnodes <n> | 1 | Producer nodes. |
| -n, --nodes <n> | 0 | Additional non-producer nodes. |
| --prod-count <n> | 21 | Producers to register on-chain. |
| -s, --topology <mesh\|ring\|star> | mesh | P2P topology between nodes. |
| --http-secure | false | Use HTTPS for node RPC endpoints. |
| -b, --batch-operator-count <n> | 3 | Batch-operator nodes (range 3–21). |
| -u, --underwriter-count <n> | 1 | Underwriter nodes (range 1–100). |
| --epoch-duration-sec <n> | 360 | Seconds per epoch. |
| --warmup-epochs <n> | 1 | Epochs before an operator transitions WARMUP → ACTIVE. |
| --cooldown-epochs <n> | 1 | Epochs before an operator can deregister after COOLDOWN. |
| --ethereum-path <path> | (omitted) | Path to the wire-ethereum repo. Enables Anvil + OPP-contract deployment. |
| --solana-path <path> | (omitted) | Path to the wire-solana repo. Enables solana-test-validator + Anchor program deploy. |
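For example, with the defaults above (360-second epochs, one warmup epoch, one cooldown epoch), a freshly registered operator should reach ACTIVE roughly one epoch (about six minutes) after registration, and should be able to deregister about one epoch after entering COOLDOWN; dropping `--epoch-duration-sec` to 60 in a dev cluster shrinks both waits to around a minute.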
### `run` / `destroy` options
Both commands take no additional flags — they operate on the directory supplied via --cluster-path.
## Usage examples

### Minimal single-chain cluster (WIRE only)
Bootstrap one producer node with 21 registered producers, no outposts:
```sh
wire-test-cluster \
  --cluster-path /data/opt/wire/chains/dev-001 \
  --force \
  create \
  --build-path /data/shared/code/wire/wire-sysio/build
```

Then launch it:

```sh
wire-test-cluster --cluster-path /data/opt/wire/chains/dev-001 run
```

Ctrl+C triggers the registered SIGINT handler, which calls `ClusterManager.stop()` → `ProcessManager.killAll()` → embedded debugging server shutdown.
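Scripts that embed the harness can register the same shutdown path themselves. A minimal sketch, assuming `ClusterManager.stop()` returns a promise (only `stop()` and `startAndWait()` are taken from this README; see "Programmatic usage" below for how a manager is constructed):

```ts
// Sketch: mirror the CLI's Ctrl+C handling in your own script.
// Construction of the manager is elided (see "Programmatic usage").
import { ClusterManager } from "@wireio/test-cluster-tool"

export async function runUntilInterrupted(manager: ClusterManager) {
  process.on("SIGINT", async () => {
    await manager.stop() // stops every managed process, then the debug server
    process.exit(0)
  })
  await manager.startAndWait() // blocks until the cluster is torn down
}
```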
### Full three-chain cluster (WIRE + ETH + SOL)
With default counts (1 producer / 3 batch operators / 1 underwriter) and both outposts:
```sh
wire-test-cluster \
  --cluster-path /data/opt/wire/chains/dev-full \
  --force \
  create \
  --build-path /data/shared/code/wire/wire-sysio/build \
  --ethereum-path /data/shared/code/wire/wire-ethereum \
  --solana-path /data/shared/code/wire/wire-solana \
  --batch-operator-count 3 \
  --underwriter-count 1 \
  --epoch-duration-sec 60
```

Run:

```sh
wire-test-cluster --cluster-path /data/opt/wire/chains/dev-full run
```

The `create` step performs, in order:
- Directory prep + port resolution (persisted into `cluster-config.json`).
- Bios + producer node spin-up; WIRE system contract deployment.
- OPP contract deployment on Anvil (if `--ethereum-path`).
- `solana-test-validator` launch + `opp-outpost` program init (if `--solana-path`).
- Batch-operator + underwriter node spin-up with outpost client args injected.
- Cross-chain handshake + `cluster-state.json` write.
- Clean shutdown of every spawned process — the cluster is ready to `run`.
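Once the cluster is running, each chain can be probed over its standard RPC. The sketch below is illustrative only: the URLs are placeholders (actual ports are resolved at create time and recorded in `cluster-config.json`), and the WIRE node is assumed to expose the conventional `/v1/chain/get_info` endpoint.

```ts
// Hypothetical reachability check for a freshly started three-chain cluster.
// Replace the URLs with the ports recorded in cluster-config.json.
const wireUrl = "http://127.0.0.1:8888"   // assumed WIRE RPC port
const anvilUrl = "http://127.0.0.1:8545"  // Anvil's conventional default
const solanaUrl = "http://127.0.0.1:8899" // solana-test-validator's default

const wire = await fetch(`${wireUrl}/v1/chain/get_info`).then(r => r.json())
console.log("WIRE head block:", wire.head_block_num)

const eth = await fetch(anvilUrl, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_chainId", params: [] })
}).then(r => r.json())
console.log("Anvil chain id:", eth.result)

const sol = await fetch(solanaUrl, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "getHealth" })
}).then(r => r.json())
console.log("Solana validator health:", sol.result)
```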
### Dense cluster for stress testing

```sh
wire-test-cluster \
  --cluster-path /data/opt/wire/chains/stress-01 \
  --force \
  create \
  --build-path /data/shared/code/wire/wire-sysio/build \
  -p 3 -n 2 \
  --prod-count 21 \
  -b 21 \
  -u 25 \
  --topology mesh \
  --epoch-duration-sec 30
```

### Running existing clusters
Already created, just want to start it:
```sh
wire-test-cluster --cluster-path /data/opt/wire/chains/dev-full run
```

Tear it down and reclaim disk:

```sh
wire-test-cluster --cluster-path /data/opt/wire/chains/dev-full destroy
```

## Debugging a running cluster
Once wire-test-cluster … run is live in one terminal, use the sibling wire-debugging-client-tool-tui TUI in a second terminal to observe it. The TUI reads the same on-disk layout — cluster-config.json, cluster-state.json, per-process pid files, per-process logs, and OPP envelopes under data/opp-debugging/ — so there is zero extra setup.
```sh
# Terminal 1: run the cluster
wire-test-cluster --cluster-path /data/opt/wire/chains/dev-full run

# Terminal 2: watch it live
wire-debugging-client-tool-tui --cluster-path /data/opt/wire/chains/dev-full
```

Or just `cd` into the cluster directory first — `wire-debugging-client-tool-tui` defaults `--cluster-path` to `process.cwd()`:
```sh
cd /data/opt/wire/chains/dev-full
wire-debugging-client-tool-tui
```

### What the TUI surfaces
- Process Monitor panel — every pid-file-backed process (WIRE producers / bios / batch operators / underwriters, plus `anvil` and `solana-test-validator` when present) with a liveness glyph (● alive / ✕ dead / … unknown) refreshed every 5 seconds. Arrow through the list with `j`/`k`; press `Enter` to open that process's log in the Log Viewer.
- Log Viewer panel — virtual-scrolled reader for today's `log_YYYYMMDD.log` of the selected process. `↑`/`↓`/`PgUp`/`PgDn` scroll, `g`/`G` jump to top/bottom, `F` toggles follow mode (auto-pin to tail). Rotation (inode change) is detected automatically and the index rebuilds.
- OPP Epoch Tracker panel — live envelope counts per `DebugOutpostEndpointsType` slot for the most recent epoch, plus total cached-epoch depth (bounded LRU at 1000). Populated by watching `data/opp-debugging/` for the debugging server's envelope writes (a minimal watcher sketch follows this list).
- Status bar — `nodes: ALIVE/TOTAL` badge and `epoch: <current>` badge.
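A stripped-down version of that envelope counting can be run against the same directory. This is a sketch under stated assumptions: only the `data/opp-debugging/` location and the `.metadata` suffix come from this README; the slot-prefix file naming is hypothetical.

```ts
// Sketch: watch data/opp-debugging/ and keep a running count of envelope
// writes, roughly what the OPP Epoch Tracker panel does. The slot-prefix
// naming is an assumption; only the directory and suffix come from this README.
import { watch } from "node:fs"
import { join } from "node:path"

const debugDir = join("/data/opt/wire/chains/dev-full", "data", "opp-debugging")
const counts = new Map<string, number>()

watch(debugDir, (_event, filename) => {
  if (!filename || !filename.endsWith(".metadata")) return
  const slot = filename.split("_")[0] ?? "unknown" // assumed: slot prefix in the name
  counts.set(slot, (counts.get(slot) ?? 0) + 1)
  console.log(Object.fromEntries(counts))
})
```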
### Focused debugging
Activate only the OPP feature (Process Monitor is required and stays on regardless):
```sh
wire-debugging-client-tool-tui -c /data/opt/wire/chains/dev-full --features=opp
```

Crank log verbosity and tail the TUI's own log in a third terminal:
```sh
wire-debugging-client-tool-tui -c /data/opt/wire/chains/dev-full --log-level=trace &
tail -f /data/opt/wire/chains/dev-full/data/tui/logs/tui.log
```

The TUI logs to file only (console output would corrupt Ink's rendering), so `tui.log` is the canonical place to investigate TUI behavior.
See packages/debugging-client-tool-tui/README.md for the full feature breakdown and keybinding reference.
## Cluster directory layout
After create completes:
```
<cluster-path>/
├── cluster-config.json        # ports, paths, binary locations (immutable after create)
├── cluster-state.json         # node inventory + launch commands (rewritten each create)
└── data/
    ├── node_bios/             # bios node data dir (blocks/, state/, logs/, *.pid)
    ├── node_00/ … node_NN/    # producer nodes
    ├── node_batchop_00/ …     # batch-operator nodes
    ├── node_uwrit_00/ …       # underwriter nodes
    ├── anvil/                 # anvil state + pid + logs (if --ethereum-path)
    ├── solana_validator/      # solana-test-validator ledger + pid + logs (if --solana-path)
    ├── eth-abis/              # deployed contract ABIs (OPP, OPPInbound, BAR)
    ├── solana-idls/           # Anchor IDLs (opp_outpost.json)
    ├── opp-debugging/         # envelope .data/.metadata pairs (debugging server writes)
    └── tui/logs/              # wire-debugging-client-tool-tui writes here (see companion)
```

Every managed process writes `<dataPath>/<label>.pid` (e.g. `data/node_00/node-00.pid`) on spawn and removes it on clean exit. The TUI's Process Monitor iterates these.
## Programmatic usage
For flow tests and custom tooling, the harness exports ClusterManager, the process managers, the chain clients, and the typed configuration:
```ts
import {
  ClusterManager,
  ClusterPorts,
  ProcessManager,
  WIREClient,
  ETHClient,
  SOLClient,
  Clio
} from "@wireio/test-cluster-tool"

const config = await ClusterManager.resolveExePaths("/path/to/wire-sysio/build")
// …build ClusterConfig, then:
const manager = new ClusterManager(clusterConfig).loadState()
await manager.startAndWait()
```

See the flow-a / flow-b / flow-c / flow-d packages for end-to-end test examples that drive full scenarios against a harness-built cluster.
## Development

```sh
# Incremental type-check
pnpm --filter @wireio/test-cluster-tool compile:watch

# Unit tests
pnpm --filter @wireio/test-cluster-tool test

# Prettier
pnpm --filter @wireio/test-cluster-tool format
```

Any new or modified function / class / module ships with unit tests in the same commit — see CLAUDE.md, "Unit tests are mandatory for every new or modified symbol".
## Troubleshooting

- `create` hangs on "waiting for node_00 to sync" — most commonly a stale `nodeop` process from a prior run. `ProcessManager` pkills known binaries on its own initialization, but a fresh shell may not see leftover children. Check with `pgrep -a nodeop` and clean up manually if needed.
- `run` fails with "port N in use" — a previous cluster on the same machine wasn't fully destroyed. `destroy` calls `killAll` + `rm -rf`, but you can force-clean with `pkill nodeop; pkill kiod; pkill anvil; pkill solana-test-validator` and then `rm -rf <cluster-path>`.
- `--ethereum-path` bootstrap errors out on missing artifacts — run `pnpm --filter @wireio/wire-ethereum build` in the `wire-ethereum` repo first so the OPP contract artifacts exist.
- `--solana-path` bootstrap can't find `opp_outpost.so` — build the Anchor program first (`anchor build`) in the `wire-solana` repo. The harness copies the `.so` + IDL into `<cluster-path>/data/solana-idls/` at bootstrap time.
- Can't tell what's dead in a running cluster — launch the TUI (see "Debugging a running cluster"). The Process Monitor panel flags every dead pid within 5 seconds.
## Related packages

- `@wireio/debugging-client-tool-tui` — live debugging UI for the cluster this harness builds. `cd` into any cluster directory and run `wire-debugging-client-tool-tui`.
- `@wireio/debugging-server` — HTTP / JSON-RPC 2.0 server embedded inside `ClusterManager` that persists OPP envelopes.
- `@wireio/debugging-shared` — shared types (ports, cluster config/state, endpoint enums) consumed by both the harness and its clients.
- `@wireio/flow-{a,b,c,d}` — end-to-end test flows that drive the harness-built cluster through specific scenarios.
