gensyn-train v0.1.5
gensyn-train
TrainMesh trainer node CLI. Join the decentralized ML training network — your machine trains models, you earn x402 payments. Two commands. No code.
Quick Start
# Install globally (requires bun)
npm install -g gensyn-train
# One-time setup
gensyn-train setup --coordinator <coordinator-ip>
# Start earning
gensyn-train start

Or zero-install:

npx gensyn-train setup --coordinator <coordinator-ip>

Prerequisites
- bun (>= 1.0) — JavaScript runtime
- Python 3.9+ — training execution
- pip — Python package manager
- OpenSSL — key generation
Commands
gensyn-train setup
One-time configuration. Does everything:
- Checks system prerequisites (bun, Python 3.9+, pip)
- Downloads AXL P2P node binary for your OS/arch
- Generates an ed25519 keypair at ~/.gensyn-train/private.pem
- Writes the AXL node config with the coordinator peer address
- Installs Python dependencies (msgpack, rich, requests)
- Registers your node with the platform
- Saves config to ~/.gensyn-train/config.json
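The files written by the steps above can be sketched as follows. The field names in this config are illustrative assumptions, not the CLI's actual schema, and a temporary directory stands in for ~/.gensyn-train:

```python
import json
import tempfile
from pathlib import Path

# Illustrative sketch of what `setup` persists. Field names are
# assumptions, not the CLI's real schema; a temp dir stands in
# for ~/.gensyn-train.
base = Path(tempfile.mkdtemp())

config = {
    "worker_id": "example-worker",                                 # hypothetical
    "coordinator": "34.46.48.224",                                 # from --coordinator
    "register_url": "http://127.0.0.1:3000/api/workers/register",  # from --register-url
}
(base / "config.json").write_text(json.dumps(config, indent=2))

key_path = base / "private.pem"  # the real CLI generates this with OpenSSL
key_path.write_text("-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n")
key_path.chmod(0o600)  # private key readable by the owner only

print(oct(key_path.stat().st_mode & 0o777))
```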
gensyn-train setup --coordinator 34.46.48.224

Options:
| Flag | Default | Purpose |
|------|---------|---------|
| --coordinator <host> | 127.0.0.1 | Coordinator AXL peer address |
| --register-url <url> | http://127.0.0.1:3000/api/workers/register | Platform registration endpoint |
gensyn-train start
Launches the trainer node daemon with a live TUI dashboard:
- Starts the AXL P2P node (encrypted mesh networking)
- Starts the worker daemon (waits for jobs, trains, sends results)
- Shows real-time earnings, active job, loss/accuracy, log
gensyn-train start

Press Ctrl+C to stop gracefully.
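Graceful shutdown on Ctrl+C is conventionally done with a SIGINT handler that lets the current round finish. The daemon skeleton below is a hypothetical sketch, not the CLI's actual implementation:

```python
import signal

# Hypothetical daemon skeleton: stop cleanly on Ctrl+C (SIGINT)
# instead of dying mid-round.
stop_requested = False

def handle_sigint(signum, frame):
    global stop_requested
    stop_requested = True  # finish the current round, then exit

signal.signal(signal.SIGINT, handle_sigint)

def run_daemon(max_rounds=3):
    rounds = 0
    while not stop_requested and rounds < max_rounds:
        rounds += 1  # placeholder for: wait for job, train, send weights
    return rounds
```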
What It Runs
┌──────────────────────────────────────────────────────────────┐
│ gensyn-train ● TRAINING node: 3f9a2c...1e │
├─────────────────────┬────────────────────────────────────────┤
│ EARNINGS │ ACTIVE JOB │
│ Today $0.61 │ Job ID: mnist-cnn-a3f9 │
│ All time $4.06 │ Round: 3 / 10 │
│ Rounds 82 │ Epoch 2/3 Loss:0.312 Acc:88.4% │
├─────────────────────┴────────────────────────────────────────┤
│ LOG │
│ [10:44:40] Round 3 started │
│ [10:44:55] Epoch 1 done — loss: 0.401, acc: 84.2% │
└──────────────────────────────────────────────────────────────┘

How It Works
Your machine becomes a trainer node in the TrainMesh network:
- The Coordinator sends you a training job (script + data shard URL) over the AXL P2P mesh
- Your machine downloads the data shard, runs the training script locally
- Your machine sends updated model weights back to the Coordinator
- You earn x402 micropayments per completed training round
Data shards are downloaded to your machine and stay there; only the updated model weights are sent back. Training scripts run as subprocesses, so any ML framework works.
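The round flow above can be sketched as a worker running the training script as a subprocess and returning its result. The job shape, the stdout-JSON convention, and the script itself are made-up stand-ins for illustration:

```python
import json
import subprocess
import sys

# Minimal stand-in for one training round. The real worker receives
# the job over the AXL mesh; here the "script" just reports a fake loss.
def run_round(job: dict) -> dict:
    proc = subprocess.run(
        [sys.executable, "-c", job["script"]],
        capture_output=True, text=True, check=True,
    )
    # Assumed convention: the script prints its result as JSON on stdout.
    return json.loads(proc.stdout)

job = {
    "job_id": "mnist-cnn-a3f9",  # from the dashboard example above
    "script": "import json; print(json.dumps({'loss': 0.312}))",
}
result = run_round(job)
print(result["loss"])  # → 0.312
```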
Files
Everything lives under ~/.gensyn-train/:
~/.gensyn-train/
├── config.json # worker ID, keys, wallet
├── private.pem # ed25519 private key (600 perms)
├── node-config.json # AXL P2P node config
├── axl-node # AXL binary (OS-specific)
└── jobs/            # job artifacts per job_id

Supported Platforms
| OS | Arch | Status |
|----|------|--------|
| macOS | arm64 (Apple Silicon) | Supported |
| macOS | amd64 (Intel) | Supported |
| Linux | amd64 | Supported |
| Linux | arm64 | Supported |
| Windows | — | Not supported |
Security
- All network traffic encrypted via AXL (Yggdrasil + TLS)
- Private key stored with 0600 permissions
- Training scripts run without a sandbox (trusted network)
- HTTP API bound to 127.0.0.1
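Locking the HTTP API to 127.0.0.1 simply means binding the listener to the loopback interface, so other machines on the network cannot reach it. A stdlib sketch, where the handler and port choice are assumptions:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Binding to 127.0.0.1 makes the API reachable only from this machine.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/status").read()
print(body)  # → b'ok'
server.shutdown()
```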
