# @crowdform/tulip-runtime-agent
v0.1.9
Node.js agent that runs on each Tulip DigitalOcean droplet. It maintains the connection between a runtime instance and the Tulip control plane.
## Responsibilities
- Heartbeat — reports health and system metrics to the control plane every 30 seconds
- Command polling — polls for queued commands and executes allowlisted operations
- Agent API — embedded HTTP + WebSocket server for file system access and an interactive terminal into the OpenClaw container
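As a rough illustration of the heartbeat responsibility, the payload might be assembled like this. This is a hypothetical sketch: the field names and shape are assumptions for illustration, not the agent's actual wire format.

```typescript
// Hypothetical sketch of a heartbeat payload assembled every
// HEARTBEAT_INTERVAL_SEC seconds. Field names are illustrative
// assumptions, not the agent's actual wire format.
interface Heartbeat {
  instanceId: string;
  orgId: string;
  uptimeSec: number;
  sentAt: string; // ISO-8601 timestamp
}

function buildHeartbeat(
  instanceId: string,
  orgId: string,
  startedAtMs: number,
): Heartbeat {
  return {
    instanceId,
    orgId,
    uptimeSec: Math.floor((Date.now() - startedAtMs) / 1000),
    sentAt: new Date().toISOString(),
  };
}
```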
## Automatic deployment
The agent is deployed automatically when you provision a runtime via the Tulip dashboard. The cloud-init bootstrap script:

- Installs Node.js 22, Docker, and cloudflared on the droplet
- Calls `/api/runtime/bootstrap` on the control plane to receive credentials
- Writes `/opt/tulip/agent/.env` with the agent configuration
- Pre-warms the npx cache: `npx --yes @crowdform/tulip-runtime-agent`
- Starts `tulip-agent.service` via systemd
You do not need to install or configure the agent manually in normal usage.
## Manual installation
If you need to run the agent on a machine outside of the standard provision flow:
```sh
npx --yes @crowdform/tulip-runtime-agent
```

The agent reads all configuration from environment variables. Create a `.env` file or export them in your shell before running.
### Required environment variables
| Variable | Description |
|---|---|
| CONTROL_PLANE_BASE_URL | URL of the Tulip control plane, e.g. https://tulip.example.com |
| INSTANCE_ID | Runtime instance ID, e.g. tulip-abc12345 |
| ORG_ID | Firestore org ID |
| RUNTIME_AUTH_TOKEN | Auth token issued by the control plane at bootstrap |
### Optional environment variables
| Variable | Default | Description |
|---|---|---|
| OPENCLAW_HEALTH_URL | http://127.0.0.1:3000/health | Health endpoint of the OpenClaw container |
| OPENCLAW_GATEWAY_TOKEN | (empty) | Token for the OpenClaw gateway, forwarded in heartbeats |
| OPENCLAW_IMAGE | ghcr.io/tulipai/openclaw:latest | OpenClaw Docker image reported in heartbeats |
| HEARTBEAT_INTERVAL_SEC | 30 | How often to send a heartbeat |
| COMMAND_POLL_INTERVAL_SEC | 15 | How often to poll for queued commands |
| AGENT_API_PORT | 0 (disabled) | Port for the embedded Agent API server |
| AGENT_API_TOKEN | (empty) | Bearer token required to access the Agent API |
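For reference, a filled-in `.env` combining the variables above might look like this (all values are placeholders):

```
CONTROL_PLANE_BASE_URL=https://tulip.example.com
INSTANCE_ID=tulip-abc12345
ORG_ID=my-org
RUNTIME_AUTH_TOKEN=replace-with-bootstrap-token

# Optional: enable the embedded Agent API
AGENT_API_PORT=18790
AGENT_API_TOKEN=replace-with-random-secret
```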
Copy `.env.example` as a starting point:

```sh
cp .env.example .env
# edit .env with your values
```

## Systemd service (on droplets)
The agent runs as `tulip-agent.service`. The correct unit file is:

```ini
[Unit]
Description=Tulip Runtime Agent
After=network-online.target openclaw.service
Wants=network-online.target
Requires=openclaw.service

[Service]
Type=simple
WorkingDirectory=/opt/tulip/agent
Restart=always
RestartSec=10
EnvironmentFile=/opt/tulip/agent/.env
Environment=HOME=/opt/tulip/agent
Environment=NPM_CONFIG_CACHE=/opt/tulip/agent/.npm
ExecStart=/usr/bin/npx --yes @crowdform/tulip-runtime-agent
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```
`WorkingDirectory` is required. Without it, systemd starts the process from `/`, which causes `npx` to resolve the wrong cache path and the agent to exit immediately with status 1.
Common management commands:

```sh
# View live logs
journalctl -u tulip-agent -f

# Check status
systemctl status tulip-agent -l --no-pager

# Restart the agent
systemctl restart tulip-agent

# Inspect the effective unit (verify WorkingDirectory is set)
systemctl cat tulip-agent

# View the agent environment
cat /opt/tulip/agent/.env
```

## Troubleshooting
### Service crash loop (exit code 1, no error in logs)
Symptom: `tulip-agent.service` restarts every ~10 seconds. `journalctl -u tulip-agent` shows the process exiting immediately with no useful message. Running the agent manually works fine.

Cause: Missing `WorkingDirectory` in the unit file. Systemd starts `npx` from `/`, which breaks the npm cache path resolution.
Fix:

```sh
# Confirm the unit has WorkingDirectory set
systemctl cat tulip-agent | grep WorkingDirectory
# Should print: WorkingDirectory=/opt/tulip/agent

# If missing, edit the unit
sudo systemctl edit --force tulip-agent
# Add WorkingDirectory=/opt/tulip/agent under [Service], then:
sudo systemctl daemon-reload
sudo systemctl restart tulip-agent
```

### Manual test to isolate agent vs systemd
```sh
cd /opt/tulip/agent
set -a && source .env && set +a
/usr/bin/npx --yes @crowdform/tulip-runtime-agent
```

If this runs but the service does not, the problem is in the unit file execution context.
### Verify the agent is running and listening
```sh
# Process running?
ps aux | grep tulip-runtime-agent

# Port open?
ss -ltnp | grep 18790

# Authenticated health check
curl -H "Authorization: Bearer $AGENT_API_TOKEN" http://localhost:18790/healthz
# Expected: {"ok":true}
```

### Expected warnings on non-Linux hosts (local dev only)
When running locally on macOS, the agent prints harmless warnings from the metrics collector probing Linux `/proc` files:

```
cat: /proc/uptime: No such file or directory
grep: /proc/meminfo: No such file or directory
df: invalid option -- B
```

These do not affect functionality.
## Commands
The agent executes only allowlisted command types dispatched from the control plane:
| Command | Action |
|---|---|
| `restart_openclaw` | Runs `systemctl restart openclaw` |
| `restart_cloudflared` | Runs `systemctl restart cloudflared` |
| `rebootstrap` | Re-runs `/opt/tulip/bootstrap.sh` — useful for token rotation |
| `update_agent` | Downloads the latest agent version and schedules a service restart |
Commands are polled from `GET /api/runtime/commands` and results are posted to `POST /api/runtime/commandResult`.
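The allowlist pattern above can be sketched as a simple lookup that rejects anything not in the table. This is an illustration of the approach, not the agent's actual implementation (`update_agent` is omitted here because it is not a single shell command):

```typescript
// Illustrative allowlist dispatch: only known command types map to an
// action; anything else is rejected rather than executed.
const COMMAND_ACTIONS: Record<string, string> = {
  restart_openclaw: "systemctl restart openclaw",
  restart_cloudflared: "systemctl restart cloudflared",
  rebootstrap: "/opt/tulip/bootstrap.sh",
};

function actionFor(command: string): string {
  const action = COMMAND_ACTIONS[command];
  if (action === undefined) {
    throw new Error(`command not allowlisted: ${command}`);
  }
  return action;
}
```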
## Agent API server
When AGENT_API_PORT and AGENT_API_TOKEN are set, the agent exposes a local HTTP + WebSocket server. On provisioned droplets this is accessible at {instanceId}-api.tulip.md via a second Cloudflare tunnel ingress.
All requests require `Authorization: Bearer <AGENT_API_TOKEN>`.
### Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /healthz | Returns {"ok":true} |
| GET | /v1/fs/list?path=<path> | List directory contents inside the OpenClaw container |
| GET | /v1/fs/read?path=<path> | Read a file from the OpenClaw container (max 512 KB) |
| PUT | /v1/fs/write?path=<path> | Write a file (allowlisted paths only, max 512 KB) |
| WS | /v1/terminal | Interactive terminal via docker exec into the OpenClaw container |
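As a quick sketch of calling the fs endpoints, a client can build an authenticated `/v1/fs/read` request like this. The base URL and token below are placeholders; on droplets the host would be the tunnel hostname:

```typescript
// Hedged sketch of an Agent API client request. Note that the path must be
// query-encoded, and every request carries the bearer token.
function fsReadRequest(baseUrl: string, token: string, path: string) {
  const url = new URL("/v1/fs/read", baseUrl);
  url.searchParams.set("path", path); // encodes "/" as %2F
  return {
    url: url.toString(),
    headers: { Authorization: `Bearer ${token}` },
  };
}
```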
File paths must be under `/home/node/.openclaw`. Write access is limited to:

- `/home/node/.openclaw/openclaw.json`
- `/home/node/.openclaw/agents/`
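An allowlist check along these lines would enforce the restriction above. This is an illustrative version only; the agent's actual path normalization may differ:

```typescript
// Illustrative write-allowlist check mirroring the documented rules:
// one exact file plus one writable directory prefix.
const WRITABLE_FILES = ["/home/node/.openclaw/openclaw.json"];
const WRITABLE_DIRS = ["/home/node/.openclaw/agents/"];

function isWriteAllowed(path: string): boolean {
  if (path.includes("..")) return false; // reject directory traversal
  return (
    WRITABLE_FILES.includes(path) ||
    WRITABLE_DIRS.some((dir) => path.startsWith(dir))
  );
}
```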
WebSocket terminal messages:

```jsonc
// Send input
{ "type": "input", "data": "ls -la\n" }

// Resize terminal
{ "type": "resize", "cols": 120, "rows": 40 }

// Receive output
{ "type": "output", "data": "..." }

// Session ended
{ "type": "exit", "code": 0 }
```

## Local development
```sh
pnpm install
cp .env.example .env
# Fill in CONTROL_PLANE_BASE_URL, INSTANCE_ID, ORG_ID, RUNTIME_AUTH_TOKEN
pnpm dev
```

`pnpm dev` runs the agent directly with tsx using your `.env` file. The control plane must be running at the configured `CONTROL_PLANE_BASE_URL` (default `http://localhost:3001`).
## Building

```sh
pnpm build   # outputs to dist/
```

The package is published to npm as `@crowdform/tulip-runtime-agent` and is invoked on droplets via `npx --yes @crowdform/tulip-runtime-agent`.
