modelreins-worker
v4.4.4
Connect any machine to your ModelReins AI fleet. One command, zero dependencies.
npx modelreins-worker --server https://app.modelreins.com --token YOUR_TOKEN

That's it. Your machine is now a worker in your fleet.
What it does
ModelReins Worker turns any machine -- laptop, $3 VPS, server, Proxmox LXC, whatever -- into a node in your AI fleet. It polls the server for jobs, runs them with your installed AI tools (Claude Code, Aider, Codex, Ollama), and streams output back in real time.
- Pull-based -- works behind NAT, firewalls, VPNs. No inbound ports needed.
- Zero dependencies -- pure Node.js, nothing to install beyond the runtime.
- Any platform -- macOS, Linux, Windows. x64 or ARM. Bare metal or container.
- Any AI tool -- Claude Code, Aider, Codex, Ollama, or any CLI that takes a prompt.
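The pull-based model can be sketched as a plain polling loop. The `/api/jobs/next` path, the 204-means-no-work convention, and the response shape below are illustrative assumptions, not the documented ModelReins protocol:

```javascript
// Sketch of a pull-based worker: only outbound requests, which is why NAT
// and firewalls are no obstacle. Endpoint path and payload shape are assumed.
const SERVER = process.env.MODELREINS_URL || 'http://localhost:8484';
const TOKEN = process.env.MODELREINS_TOKEN || '';

// Build the polling request for a given server and token.
function nextJobRequest(server, token) {
  return {
    url: `${server.replace(/\/+$/, '')}/api/jobs/next`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

async function pollOnce() {
  const { url, headers } = nextJobRequest(SERVER, TOKEN);
  const res = await fetch(url, { headers }); // fetch is built into Node 18+
  if (res.status === 204) return null;       // assumed: no job queued
  return res.json();                         // assumed shape: { id, prompt, ... }
}

// Poll on an interval, mirroring the documented --poll default of 5000 ms:
// setInterval(pollOnce, 5000);
```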
Install
# run directly (no install needed)
npx modelreins-worker --server URL --token TOKEN
# or install globally
npm install -g modelreins-worker
modelreins-worker --server URL --token TOKEN

First-time setup
Interactive setup saves config so you don't need flags every time:
modelreins-worker --setup

Creates ~/.modelreins/config. After that, just run:
modelreins-worker

Options
--server <url> ModelReins server URL
--token <token> API token (from Settings > API Keys)
--name <name> Worker name (default: worker-<hostname>)
--provider <type> AI tool: claude, aider, codex, ollama-cli, ollama-http, 1minai, nocturne
--model <model> Model: opus, sonnet, haiku, gpt-4, etc.
--tags <tags> Capabilities: code,review,test,deploy
--workdir <path> Working directory for jobs
--poll <ms> Poll interval in ms (default: 5000)
--setup Interactive first-time setup

All options are also available as environment variables (MODELREINS_URL, MODELREINS_TOKEN, etc.).
Providers
Workers support multiple AI providers. CLI-based providers spawn a local tool. HTTP-based providers make API calls — no tools to install.
| Provider | Type | Requires |
|----------|------|-----------------|
| claude | CLI | Claude Code |
| aider | CLI | Aider |
| codex | CLI | Codex |
| ollama-cli | CLI | Ollama |
| ollama-http | HTTP | nothing |
| 1minai | HTTP | nothing |
| nocturne | HTTP | nothing |
HTTP providers only need Node.js. A $3 VPS with nothing else installed can run 1minAI or Ollama HTTP jobs.
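Under the hood, an HTTP provider presumably just calls the model's HTTP API directly. A minimal sketch against Ollama's real `/api/generate` endpoint (the helper names and wiring are illustrative, not the package's internal code):

```javascript
// What an HTTP provider boils down to: one API call, Node.js as the only
// requirement. /api/generate is Ollama's real endpoint; everything else
// here is an illustrative sketch.
function ollamaRequest(host, model, prompt) {
  return {
    url: `${host}/api/generate`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt, stream: false }),
    },
  };
}

async function runOllamaJob(prompt) {
  // Host and model match the ollama-http config example in this section.
  const { url, options } = ollamaRequest('http://192.168.1.50:11434', 'llama3', prompt);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.response; // the generated text
}
```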
# switch to 1minAI
modelreins-worker config set provider 1minai
modelreins-worker config set onemin_api_key sk-YOUR-KEY
modelreins-worker config set onemin_model gpt-4o
# switch to local Ollama
modelreins-worker config set provider ollama-http
modelreins-worker config set ollama_host http://192.168.1.50:11434
modelreins-worker config set ollama_model llama3

Config management
Config lives at ~/.modelreins/config. Your secrets never leave your machine.
modelreins-worker config list # show all (secrets masked)
modelreins-worker config set <key> <v> # set a value
modelreins-worker config get <key> # get a value
modelreins-worker config delete <key> # remove a key
modelreins-worker config reset # delete all config
modelreins-worker config path # print config file path

Changes are picked up automatically — the worker watches the config file and hot-reloads without a restart. Provider changes are applied between jobs, so running work is never interrupted.
SDK
Use the Worker class in your own code:
const { Worker } = require('modelreins-worker');

const worker = new Worker({
  url: 'https://app.modelreins.com',
  token: 'your-token',
  name: 'my-worker',
  type: 'claude',
  model: 'opus',
  tags: 'code,deploy',
});

worker.on('job:start', ({ id, prompt }) => console.log(`Job #${id}: ${prompt}`));
worker.on('job:done', ({ id, exitCode }) => console.log(`Job #${id} done (exit ${exitCode})`));
worker.start();

Custom executor
Handle jobs with your own logic instead of spawning a CLI tool:
const worker = new Worker({
  url: 'https://app.modelreins.com',
  token: 'your-token',
  executor: async (job, { output }) => {
    await output(`Processing: ${job.prompt}`, 'stdout');
    // your logic here
    return 0; // exit code
  },
});
worker.start();

Environment variables
| Variable | Default | Description |
|----------|---------|-------------|
| MODELREINS_URL | http://localhost:8484 | Server URL |
| MODELREINS_TOKEN | | Auth token |
| MODELREINS_WORKER | worker-<hostname> | Worker name |
| MODELREINS_WORKER_TYPE | claude | Tool type |
| MODELREINS_WORKER_MODEL | | Model name |
| MODELREINS_WORKER_TAGS | | Capability tags |
| MODELREINS_POLL_MS | 5000 | Poll interval (ms) |
| MODELREINS_WORKDIR | cwd | Working directory |
| MODELREINS_COMMAND | claude | Binary to run |
| MODELREINS_PROMPT_ARG | -p | Prompt flag |
| MODELREINS_EXTRA_ARGS | --output-format stream-json | Extra CLI args |
Events
| Event | Data | Description |
|-------|------|-------------|
| starting | { name, url } | Worker initializing |
| connected | { name } | Connected to server |
| ready | { name } | Polling for jobs |
| job:start | { id, prompt } | Job started |
| job:done | { id, exitCode } | Job completed |
| job:error | { id, error } | Job failed |
| output | { jobId, content, stream } | Output from job |
| error | { type, error } | Non-fatal error |
| stopped | { name } | Worker shut down |
Requirements
- Node.js 18+
- An AI tool installed (Claude Code, Aider, etc.) unless using a custom executor
- A ModelReins server
License
BSL-1.1
