maelyclaw
v0.1.0
Opinionated autonomous software engineering runtime with operator messaging, scheduling, and pluggable providers
MaelyClaw
Autonomous dev engine that plans, builds, tests, and ships while you sleep.
MaelyClaw is an opinionated autonomous software-engineering runtime built around five independent lanes (planning, QA/closure, health monitoring, operator feedback, and self-improvement), with execution handled by spawned coding workers. It runs as a single Node.js process, spawns coding workers, validates work through QA, and opens PRs — all without human intervention. Multi-provider: Claude (via the official Agent SDK), Codex, or any OpenAI-compatible model.
In v0.1, the shipped operator messaging integration is Slack. Slack is required today. Other messaging integrations may be added in future releases.
4K lines of TypeScript. Apache 2.0 license.
Built by Maely — we use MaelyClaw to ship our own healthcare platform autonomously.
How It Works
```
BACKLOG → Planner → Worker → QA → Closer → PR → Main
              ↑                      ↓
          Feedback ←── Operator ──→ Health
```

- Lane-planner picks the highest-priority task from a backlog
- Spawns a coding worker (Claude, Codex, or any model) in a git worktree
- Worker writes code, commits, opens a feature PR
- Lane-closer spawns a QA worker to validate
- If QA fails, auto-spawns a fix worker (up to 2 attempts)
- If QA passes, merges the feature PR and opens a final PR to main
- Lane-health monitors worker lifecycle — handles timeouts, stale workers, respawns
- Lane-feedback processes operator messages from Slack
- Lane-improve runs self-improvement: doc drift detection, eval cases, learning promotion
Each lane runs on its own cron schedule. They don't coordinate. They don't block each other.
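For intuition, the gating a scheduler does against a field like `5,20,35,50 * * * *` can be sketched in a few lines of TypeScript. This is a simplified minute-field matcher for illustration only — real cron also supports ranges and offsets — and the `lane-planner` schedule shown is hypothetical:

```typescript
// Sketch: match the minute field of a cron expression
// (e.g. "5,20,35,50" from "5,20,35,50 * * * *").
// Supports "*", comma lists, and "*/n" steps — a subset of real cron.
function matchesMinuteField(field: string, minute: number): boolean {
  if (field === "*") return true;
  if (field.startsWith("*/")) {
    const step = Number(field.slice(2));
    return minute % step === 0;
  }
  return field.split(",").some((part) => Number(part) === minute);
}

// Each lane fires independently when its own field matches —
// there is no cross-lane coordination.
const laneSchedules: Record<string, string> = {
  "lane-health": "5,20,35,50", // from the example contract later in this README
  "lane-planner": "*/15",      // hypothetical
};

function dueLanes(minute: number): string[] {
  return Object.entries(laneSchedules)
    .filter(([, field]) => matchesMinuteField(field, minute))
    .map(([name]) => name);
}
```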
MaelyClaw vs OpenClaw vs NanoClaw
| | MaelyClaw | OpenClaw | NanoClaw |
|---|---|---|---|
| What it is | Autonomous dev engine | Agent platform | Chatbot framework |
| Autonomy | Plans, builds, tests, ships end-to-end | Requires custom setup for 24/7 autonomy | Responds to messages |
| Architecture | Single process, 5 lanes | Microservices (Gateway + nodes) | Single process, channel-based |
| Code size | ~4K lines TypeScript | ~500K lines | ~4K lines TypeScript |
| Providers | Claude Agent SDK, Codex, OpenAI-compat | API-only (OAuth banned) | Claude Agent SDK |
| QA loop | Built-in: auto-QA, auto-fix, retry cap | Manual or custom scripts | None |
| Worker lifecycle | Timeout, stale reaping, respawn, reconciliation | Basic | Container per message |
| Channels | Slack (v0.1) | WhatsApp, Telegram, Slack, Discord, Gmail | WhatsApp, Telegram, Slack, Discord, Gmail |
| Setup time | ~15 minutes | Hours to days | ~10 minutes |
Features
5 Independent Lanes
Planner, Health, Closer, Feedback, Improve — each on its own cron schedule. No coordination overhead. Each lane is a Claude/Codex session that reads a markdown contract and executes independently. The bundled starter workspace ships this lane set as the default operating model, and the current state schema, console terminology, and memory heuristics assume it.
Multi-Provider
Claude (via official Agent SDK and CLI), Codex CLI, or any OpenAI-compatible endpoint. Mix providers per role: Claude or Codex for lanes and workers, Claude or OpenAI-compatible models for summarization, and hosted or local embedding models for memory retrieval.
Full Dev Pipeline
Plans work from a backlog, spawns workers in git worktrees, runs QA validation, auto-fixes failures (up to 2 attempts), merges feature PRs, opens final PRs to main. End-to-end.
Worker Lifecycle Management
5-minute inactivity timeout, 2-hour hard cap, stale worker reaping, startup reconciliation (respawns interrupted workers after restart), liveness checks.
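Using those limits, the liveness decision reduces to a couple of comparisons. A simplified sketch for intuition — the real engine also reconciles worker state across restarts:

```typescript
// Sketch: classify a worker using the documented limits.
// 5-minute inactivity timeout, 2-hour hard cap.
const INACTIVITY_MS = 5 * 60 * 1000;
const HARD_CAP_MS = 2 * 60 * 60 * 1000;

type WorkerState = "alive" | "stale" | "expired";

function classifyWorker(
  now: number,
  startedAt: number,
  lastActivityAt: number
): WorkerState {
  if (now - startedAt >= HARD_CAP_MS) return "expired"; // hard cap always wins
  if (now - lastActivityAt >= INACTIVITY_MS) return "stale"; // reap & respawn
  return "alive";
}
```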
Operator Console (TUI + Web)
Real-time dashboard with live lane status, cycle tracking, worker monitoring, and log streaming — plus an embedded AI terminal (Claude Code, Codex, or shell) with MaelyClaw MCP tools for direct engine control. Works as a terminal TUI (maelyclaw-console) and a browser web UI (maelyclaw-web). See console/README.md for full docs.

MCP Server
MaelyClaw-specific tools auto-discovered by Claude Code, Codex, or any MCP-aware tool. Engine status, lane control, worker management, cycle gate checks, and explicit memory retrieval/maintenance are all available as MCP tools.
Quick Start
Prerequisites
- Node.js 22+
- A Slack app with Socket Mode enabled (create one) for operator messaging, approvals, alerts, and conversation
- At least one provider path: Claude Code CLI, Codex CLI, or an OpenAI-compatible endpoint such as Ollama
Install
Install the Node CLIs from npm:
```
npm install -g maelyclaw
```

Create a starter agent workspace:

```
maelyclaw init ~/.maelyclaw/agent-workspace
```

Install the Python operator console from PyPI:

```
maelyclaw console-install
```

That helper uses uv when available and falls back to pipx. You can also install it directly:

```
uv tool install maelyclaw-console
# or
pipx install maelyclaw-console
```

Clone the repo only if you want a full development checkout:

```
git clone https://github.com/maelyai/maelyclaw.git
cd maelyclaw
npm install
npm run build
```

If you cloned the repo and want the local development version of the operator console:

```
npm run console:install
```

Verify:

```
maelyclaw --help
maelyclaw-tool --help
maelyclaw console-install --help
```

Configure
If you cloned the repo, you can start from the example file:
```
mkdir -p ~/.maelyclaw
cp maelyclaw.config.example.json ~/.maelyclaw/maelyclaw.config.json
```

If you installed from npm without a repo checkout, create `~/.maelyclaw/maelyclaw.config.json` manually using the example below.
If you used maelyclaw init, customize the generated workspace before your first unattended run:
- update `SOUL.md`, `USER.md`, `BACKLOG.md`, and `PROGRESS.md`
- edit `docs/lane-*.md` to match your repo, branch, PR, and QA workflow
- keep the default lane names unless you also plan to update the current heuristics and docs that assume them
Recommended default stack:

- claude `opus` for lanes, workers, and conversations
- claude `haiku` for summarization
- local Ollama `qwen3:8b` for durable-memory curation
- local Ollama `qwen3-embedding:0.6b` for semantic memory retrieval
Example:
```json
{
  "agentDir": "/path/to/your/agent-workspace",
  "targetRepoDir": "/path/to/your/target-repo",
  "providers": {
    "ollama": {
      "type": "openai-compat",
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "ollama"
    }
  },
  "defaults": {
    "lane": {
      "provider": "claude",
      "model": "opus"
    },
    "worker": {
      "provider": "claude",
      "model": "opus"
    },
    "conversation": {
      "provider": "claude",
      "model": "opus"
    },
    "summarizer": {
      "provider": "claude",
      "model": "haiku"
    }
  },
  "memory": {
    "search": {
      "provider": "ollama",
      "model": "qwen3-embedding:0.6b",
      "prewarm": true
    },
    "curation": {
      "enabled": true,
      "provider": "ollama",
      "model": "qwen3:8b",
      "maxActiveClaims": 80,
      "maxRenderedClaims": 40
    }
  },
  "slack": {
    "botToken": "xoxb-your-bot-token",
    "appToken": "xapp-your-app-level-token",
    "botUserId": "U0XXXXXXXXX",
    "primaryChannel": "C0XXXXXXXXX"
  },
  "logging": {
    "dir": "/path/to/agent-workspace/.maelyclaw/logs",
    "level": "info",
    "maxFileBytes": 10485760,
    "maxFiles": 5,
    "artifactByteLimit": 2097152
  },
  "stateDir": "/path/to/agent-workspace/.maelyclaw",
  "ipcSocketPath": "/tmp/maelyclaw.sock"
}
```

Providers are connection definitions only. Model choice lives on each task surface: `defaults.lane`, `defaults.worker`, `defaults.conversation`, `defaults.summarizer`, `memory.search`, and `memory.curation`.
Logging is configured separately under logging; by default MaelyClaw writes durable logs under <stateDir>/logs/.
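For intuition, size-based rotation under `maxFileBytes` and `maxFiles` might look roughly like the following. This is a hypothetical sketch of one common rotation policy (shift-and-drop numbering), not MaelyClaw's actual logger:

```typescript
// Sketch: decide whether writing the next entry should rotate the
// active log, and which rotated files survive under maxFiles.
// maelyclaw.log -> maelyclaw.log.1 -> ... -> maelyclaw.log.(maxFiles-1),
// dropping the oldest once the count is exceeded.
function rotationPlan(
  currentBytes: number,
  nextEntryBytes: number,
  maxFileBytes: number,
  existing: string[], // rotated file names, newest first
  maxFiles: number
): { rotate: boolean; keep: string[] } {
  if (currentBytes + nextEntryBytes <= maxFileBytes) {
    return { rotate: false, keep: existing };
  }
  // Shift every rotated file up one slot; the active file becomes .1.
  // Keep at most maxFiles - 1 rotated files (the active file uses one slot).
  const shifted = [
    "maelyclaw.log.1",
    ...existing.map((_, i) => `maelyclaw.log.${i + 2}`),
  ];
  return { rotate: true, keep: shifted.slice(0, maxFiles - 1) };
}
```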
The summarizer can use either a Claude provider or an OpenAI-compatible provider. If you set "summarizer": { "provider": "claude" }, MaelyClaw uses haiku unless you set a specific summarizer model.
claude and codex are built-in provider names, so you only need to define providers.* entries for external connections such as Ollama or hosted OpenAI-compatible endpoints.
The examples use Claude aliases (opus, haiku) because they are the most reliable CLI-facing model names right now. If you prefer pinned model IDs, set them explicitly.
The memory index always builds a local lexical search index. If memory.search.provider and memory.search.model are configured, MaelyClaw also stores embeddings and upgrades memory_search to hybrid lexical + semantic retrieval.
If that provider points at a local Ollama daemon, MaelyClaw first tries the OpenAI-compatible /v1/embeddings path and falls back to Ollama's native /api/embed path when needed.
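That control flow can be sketched with the two HTTP calls injected as plain functions. The shape below illustrates the fallback order only — it is not the real client, and real code would issue POSTs to `{baseUrl}/v1/embeddings` and `{baseUrl}/api/embed`:

```typescript
// Sketch: prefer the OpenAI-compatible embeddings path, fall back to
// Ollama's native /api/embed if the first call fails.
type EmbedFn = (model: string, input: string) => number[];

function embedWithFallback(
  openaiCompat: EmbedFn, // stands in for POST {baseUrl}/v1/embeddings
  ollamaNative: EmbedFn, // stands in for POST {baseUrl}/api/embed
  model: string,
  input: string
): { via: "v1" | "native"; vector: number[] } {
  try {
    return { via: "v1", vector: openaiCompat(model, input) };
  } catch {
    // e.g. 404 on daemons without the OpenAI-compatible route
    return { via: "native", vector: ollamaNative(model, input) };
  }
}
```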
For local Ollama semantic retrieval, the embedding model itself still needs to be available to the Ollama daemon, even if your chat model is cloud-backed. For the recommended stack, pull both models:
```
ollama pull qwen3:8b
ollama pull qwen3-embedding:0.6b
```

Durable memory curation is enabled by default. If both `memory.curation.provider` and `memory.curation.model` are configured against an OpenAI-compatible endpoint, including a local Ollama daemon, MaelyClaw promotes new memory batches into structured entities and claims before regenerating the managed section in MEMORY.md. Without that explicit model config, it falls back to conservative heuristic promotion for obvious durable facts.
If you want the simplest possible startup, you can omit providers and memory entirely and run on built-in Claude or Codex with lexical retrieval plus heuristic curation.
Ollama Onboarding
If you use the recommended memory stack, get Ollama working before you start MaelyClaw.
1. Install Ollama.
   - macOS / Windows: download the official installer from https://ollama.com/download
   - Linux: `curl -fsSL https://ollama.com/install.sh | sh`
2. Make sure the local daemon is running.
   - macOS / Windows: launch the Ollama app
   - Linux: `ollama serve`
3. Pull the recommended models.

   ```
   ollama pull qwen3:8b
   ollama pull qwen3-embedding:0.6b
   ```

4. Verify the daemon and local model registry.

   ```
   curl http://localhost:11434/api/version
   curl http://localhost:11434/api/tags
   ```

5. Smoke-test the exact model paths MaelyClaw uses.
```
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3:8b",
    "messages": [{"role": "user", "content": "Reply with exactly OLLAMA_CHAT_OK"}]
  }'

curl http://localhost:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-embedding:0.6b",
    "input": "Why is the sky blue?"
  }'
```

Auth notes:

- No auth is required for local access on http://localhost:11434.
- `ollama signin` is only needed if you plan to use `:cloud` models, private models, or other ollama.com-backed features.
- Direct calls to https://ollama.com/api require an API key via `OLLAMA_API_KEY`, but MaelyClaw's recommended local stack does not need that.
If v1/embeddings is unavailable on your local daemon, MaelyClaw falls back to Ollama's native /api/embed path for embeddings.
Run
```
maelyclaw start --config ~/.maelyclaw/maelyclaw.config.json
```

In another terminal:

```
maelyclaw-tool status
maelyclaw-tool sessions-list
```

Testing
Default verification:
```
npm test
npm run console:test
```

`npm run console:test` bootstraps `console/.venv` on first run.

Live provider integration checks:

```
npm run test:integration:ollama
npm run test:integration:claude
npm run test:integration:codex
npm run test:integration:all
```

`npm test` excludes the live integration suite. The integration commands hit real providers and require the relevant local auth or endpoints to be available.
Operator Console
Install (one-time):
```
npm run console:install
```

This creates a Python venv and symlinks `maelyclaw-console` and `maelyclaw-web` into `bin/`.

Terminal TUI:

```
bin/maelyclaw-console --provider claude
```

Web UI (browser):

```
bin/maelyclaw-web --provider claude
# Opens at http://localhost:9090
```

The console shows live lane status, cycle progress, worker tracking, and color-coded log streaming. The embedded terminal runs Claude Code (or Codex) with MaelyClaw MCP tools for direct engine control — check status, search logs, inspect memory, enable/disable lanes, kill workers, and view cycles from natural language.
AI conversation state persists across browser refreshes. See console/README.md for keybindings, layout, and configuration.
Agent Directory
MaelyClaw looks for these files in agentDir:
| File | Purpose |
|------|---------|
| config/lane-contracts.json | Lane definitions (name, schedule, timeout, prompt snippets, and optional per-lane provider/model overrides) |
| SOUL.md | Agent personality and values |
| USER.md | Durable notes about the operator and communication preferences |
| PROGRESS.md | Current work state snapshot |
| MEMORY.md | Manual durable notes plus a managed curated section of active long-term memory |
| BACKLOG.md | Prioritized work items |
| memory/ | Generated daily snapshots, daily episode archives, the local retrieval index, and structured durable knowledge |
A working starter workspace lives under examples/starter-agent. If you are onboarding from scratch, copy that directory and customize it rather than starting with an empty agentDir.
Missing identity files are skipped. Lane execution requires config/lane-contracts.json.
Optional provider-native instruction files such as CLAUDE.md and AGENTS.md can still be useful, but MaelyClaw does not load them directly. Put them in the working directories your provider CLIs actually run from.
Lane contracts may optionally set a lane-specific selection:
```json
{
  "name": "lane-health",
  "schedule": "5,20,35,50 * * * *",
  "timeoutSeconds": 1200,
  "selection": {
    "provider": "claude",
    "model": "sonnet"
  }
}
```

If `selection` is omitted, the lane inherits `defaults.lane`. This is the clean way to run heavier models for judgment-heavy lanes such as planning and lighter models for mechanical lanes such as health checks.
See examples/starter-agent/config/lane-contracts.json for a complete example of the default five-lane setup.
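The inheritance rule is a one-liner: a lane's `selection`, if present, wins over `defaults.lane`. A minimal sketch of that resolution (the interfaces are simplified from the config shapes shown above):

```typescript
// Sketch: per-lane provider/model resolution.
// A lane's optional `selection` overrides `defaults.lane`.
interface Selection {
  provider: string;
  model: string;
}
interface LaneContract {
  name: string;
  selection?: Selection;
}

function resolveSelection(lane: LaneContract, laneDefaults: Selection): Selection {
  return lane.selection ?? laneDefaults;
}

const laneDefaults: Selection = { provider: "claude", model: "opus" };
const health: LaneContract = {
  name: "lane-health",
  selection: { provider: "claude", model: "sonnet" }, // lighter model for a mechanical lane
};
const planner: LaneContract = { name: "lane-planner" }; // inherits defaults
```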
Memory System
MaelyClaw uses a tiered memory model so recent context stays cheap to load while full history stays available on disk.
Files
| Path | Purpose | Loaded into prompt? |
|------|---------|---------------------|
| MEMORY.md | Durable long-term memory: manual notes plus a managed curated section of active durable facts | Yes |
| memory/YYYY-MM-DD.md | Compact generated snapshot for a given day | Today + yesterday |
| memory/archive/YYYY-MM-DD.md | Distilled daily episode archive with important outcomes, decisions, blockers, and evidence refs | No |
| memory/knowledge.json | Structured durable memory store: entities, claims, relations, evidence, supersession state | No |
| memory/.index.sqlite | Workspace-local lexical/semantic retrieval index | No |
Compatibility note:
MaelyClaw can still read legacy top-level daily files at memory/YYYY-MM-DD.md when no archive exists yet. Once memory/archive/ contains files, the archive becomes the canonical historical source for retrieval and curated-memory rebuilds.
Write Path
Memory is written from two sources:
- Conversation turns from Slack/operator chat
- Significant lane outcomes from planner, health, closer, feedback, and improve runs
Entries are buffered in memory and flushed either when the buffer reaches 10 items or every 5 minutes. On each flush:
- The batch is summarized into compact markdown blocks.
- The summary is appended to `memory/archive/YYYY-MM-DD.md` as a distilled episodic record.
- The top-level `memory/YYYY-MM-DD.md` file is regenerated as a bounded snapshot with:
  - a short summary/header
  - a few earlier highlights
  - the most recent detailed blocks
- The new archive blocks are passed through durable-memory curation:
  - structured entities and claims are extracted
  - singleton facts supersede older values deterministically
  - evidence and relations are stored in `memory/knowledge.json`
  - the managed curated section inside `MEMORY.md` is regenerated from active claims
- The local retrieval index is refreshed so `memory_search` sees the updated durable memory immediately.
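The flush trigger itself (10 buffered items, or 5 minutes since the last flush, whichever comes first) reduces to a small predicate. A sketch of that policy, not the actual buffer implementation:

```typescript
// Sketch: flush when the buffer reaches 10 items or 5 minutes have
// passed since the last flush — whichever comes first.
const FLUSH_MAX_ITEMS = 10;
const FLUSH_INTERVAL_MS = 5 * 60 * 1000;

function shouldFlush(bufferedItems: number, msSinceLastFlush: number): boolean {
  if (bufferedItems === 0) return false; // nothing to write
  return (
    bufferedItems >= FLUSH_MAX_ITEMS ||
    msSinceLastFlush >= FLUSH_INTERVAL_MS
  );
}
```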
If the summarizer is unavailable or fails, MaelyClaw falls back to a deterministic compact formatter instead of dumping raw transcripts.
For lane output specifically, MaelyClaw now tries to promote only high-signal episodes into the archive:
- root causes, fixes, durable blockers, and explicit decisions
- merged/opened PRs and meaningful workflow changes
- evidence pointers back to runtime logs or lane artifacts
Routine health snapshots, queue counts, "all clear" runs, and other operational churn stay in logs instead of being copied into memory.
The curated MEMORY.md section is compilable from memory/knowledge.json, so durable memory can be reconstructed even if the markdown file is edited or partially removed.
If you enable curation later or want to refresh durable memory from older archives, memory_rebuild_curated rebuilds the structured store from historical daily archives.
Keep manual durable notes outside the managed markers in MEMORY.md or in USER.md; MaelyClaw regenerates only the managed curated section.
Read Path
At runtime, MaelyClaw builds agent context from:
- `SOUL.md`
- `USER.md`
- `PROGRESS.md`
- `MEMORY.md`
- `memory/<today>.md`
- `memory/<yesterday>.md`
Only the compact daily snapshots are injected into prompts. The full archives are deliberately not loaded by default, because they grow quickly and would bloat every lane/conversation context.
If a task needs older or exact historical detail, the intended pattern is to search or read MEMORY.md and memory/archive/*.md explicitly rather than relying on the default prompt snapshot alone.
The prompt-visible MEMORY.md file is no longer a raw append-only scratchpad. It is a compiled durable surface built from active structured claims, which keeps long-term memory compact while letting the underlying knowledge store preserve provenance and supersession history.
Behind the scenes, MaelyClaw maintains a workspace-local memory index at memory/.index.sqlite. The index is refreshed on startup, after memory flushes, and lazily on search if files changed. Retrieval works at memory-block/chunk granularity, not whole-file granularity.
memory_search now uses a hybrid ranking pipeline:
- SQLite FTS5 for keyword, ID, branch, filename, and exact-token matches
- Optional embeddings for semantic matches when wording differs
- Reciprocal-rank style fusion between lexical and semantic candidates
- Additional boosts for exact IDs, explicit dates, durable memory, and recency
- Diversity limits so top hits are not dominated by one file or one oversized section
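Reciprocal-rank fusion itself is compact: each candidate's score is the sum of 1/(k + rank) across the lists it appears in. A minimal sketch of just the fusion step, assuming the conventional constant k = 60 and omitting the boost and diversity stages described above:

```typescript
// Sketch: reciprocal-rank fusion of lexical and semantic rankings.
// score(doc) = sum over lists of 1 / (k + rank), with rank starting at 1.
function rrfFuse(rankedLists: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of rankedLists) {
    list.forEach((doc, i) => {
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}

// A chunk ranked well by both FTS5 and embeddings outranks a chunk
// that tops only one list. Hypothetical chunk IDs:
const lexical = ["A", "B", "C"];
const semantic = ["B", "D", "A"];
```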
Durable curation adds a second layer on top of retrieval:
- new memory is promoted into explicit entities, claims, and relations
- facts can be updated without duplicating stale versions in prompt-visible memory
- evidence remains attached to each claim in `memory/knowledge.json`
- `MEMORY.md` stays readable because it is compiled from active claims, not raw chronological logs
MaelyClaw now exposes the same explicit memory operations across surfaces:
- OpenAI-compatible function tools: `memory_search`, `memory_get`, `memory_status`, `memory_reindex`, `memory_rebuild_curated`
- Bridge CLI: `maelyclaw-tool memory-search`, `memory-get`, `memory-status`, `memory-reindex`, `memory-rebuild-curated`
- MCP server tools: `maelyclaw_memory_search`, `maelyclaw_memory_get`, `maelyclaw_memory_status`, `maelyclaw_memory_reindex`, `maelyclaw_memory_rebuild_curated`
That means Claude, Codex, and OpenAI-compatible runs can all use the same retrieval and maintenance workflow instead of falling back to raw grep on memory files.
Logging System
MaelyClaw now has a real on-disk logging subsystem rather than relying on process stdout as the only source of truth.
Files
By default logs live under <stateDir>/logs/, or under logging.dir if you set it explicitly.
| Path | Purpose |
|------|---------|
| maelyclaw.log | Human-readable rolling text log for operators, console tailing, and quick inspection |
| maelyclaw.jsonl | Structured rolling JSONL log for search and agent debugging |
| .index.sqlite | Local SQLite/FTS index over structured logs and persisted artifacts |
| artifacts/lanes/<lane>/*.stdout.log | Persisted lane stdout artifacts |
| artifacts/lanes/<lane>/*.stderr.log | Persisted lane stderr artifacts |
The logger mirrors existing console.log/warn/error calls into both files, so current subsystems become durable immediately. Lane completions also persist stdout/stderr artifacts and record their paths in the structured log. A local SQLite/FTS index is rebuilt incrementally over maelyclaw.jsonl and the persisted artifact files, so agents can search runtime history without brute-force rescanning raw logs every time.
Why Two Log Formats
- `maelyclaw.log` is the human surface: stable, tail-friendly, and what the console displays.
- `maelyclaw.jsonl` is the machine surface: structured fields like time, level, subsystem, stack, and metadata make search/filter/debugging reliable for agents.
- `.index.sqlite` is the retrieval surface: `logs-search` uses it to query structured log fields plus lane artifact content, and `logs-get` uses it to fetch exact evidence.
This is the same broad pattern used by systems like OpenClaw and Hermes: one central logger, durable on-disk logs, and operator-friendly tail/search commands.
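The two surfaces can be pictured as one log event rendered twice. A sketch with hypothetical field names that match the description above (time, level, subsystem, metadata) — not MaelyClaw's actual log schema:

```typescript
// Sketch: one log event rendered to both surfaces.
interface LogEvent {
  time: string;
  level: "info" | "warn" | "error";
  subsystem: string;
  message: string;
  meta?: Record<string, unknown>;
}

// Machine surface: one JSON object per line (JSONL) for search/filter.
function toJsonl(e: LogEvent): string {
  return JSON.stringify(e);
}

// Human surface: stable, tail-friendly text for operators.
function toText(e: LogEvent): string {
  return `${e.time} [${e.level.toUpperCase()}] ${e.subsystem}: ${e.message}`;
}

const event: LogEvent = {
  time: "2026-04-09T20:45:00Z",
  level: "warn",
  subsystem: "memory-curation",
  message: "embedding provider unavailable, using lexical-only retrieval",
};
```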
Log Ops
Provider-neutral log operations are available everywhere:
- Bridge CLI: `maelyclaw-tool logs-tail`, `logs-search`, `logs-status`
- Bridge CLI exact retrieval: `maelyclaw-tool logs-get`
- MCP tools: `maelyclaw_log_tail`, `maelyclaw_log_search`, `maelyclaw_log_get`, `maelyclaw_log_status`
- OpenAI-compatible function tools: `log_tail`, `log_search`, `log_get`, `log_status`
Use logs when debugging runtime behavior, stuck workers, Slack issues, memory failures, or lane crashes. Memory is for durable facts; logs are for operational truth.
Why It Works This Way
- Keeps the default prompt bounded even on heavy autonomous runs
- Keeps logs as canonical runtime truth and memory as higher-signal derived context
- Preserves episodic history without duplicating every operational event into memory
- Separates durable memory from short-term operational memory
- Lets summarization quality improve over time without losing raw runtime evidence
- Makes historical recall inspectable and deterministic instead of relying on model intuition alone
Bridge CLI
maelyclaw-tool has two kinds of commands:
- Engine commands that talk to the running engine over the Unix socket
- Direct memory commands that load `~/.maelyclaw/maelyclaw.config.json` (or `--config` / `$MAELYCLAW_CONFIG_PATH`) and work even when the engine is not running
Engine commands:
```
maelyclaw-tool status                    # engine status
maelyclaw-tool sessions-list             # active workers
maelyclaw-tool spawn-worker \            # spawn a worker
  --task-name "fix-auth" \
  --prompt "Fix the auth middleware" \
  --cwd /path/to/repo
maelyclaw-tool send-to-session \         # message a worker
  --session-key <KEY> \
  --message "run the tests"
maelyclaw-tool lane-enable lane-health   # enable/disable lanes
maelyclaw-tool lane-disable lane-improve
maelyclaw-tool kill-worker --worker-id <ID>
```

Direct log commands:

```
maelyclaw-tool logs-tail --lines 100
maelyclaw-tool logs-search --query "memory-curation" --level warn
maelyclaw-tool logs-search --lane lane-health --kind artifact --query "oauth allowlist"
maelyclaw-tool logs-get --path maelyclaw.jsonl --line 42
maelyclaw-tool logs-search --subsystem scheduler --since 2h
maelyclaw-tool logs-status
```

Direct memory commands:

```
maelyclaw-tool memory-search --query "when did MAE-133 close?"
maelyclaw-tool memory-get --path memory/archive/2026-04-09.md --heading '20:45 — Cycle-015 "Local OCR Env Parity" closed'
maelyclaw-tool memory-status
maelyclaw-tool memory-reindex
maelyclaw-tool memory-rebuild-curated
```

`--provider` accepts the built-in `claude` and `codex`, plus any named `providers.*` entry such as `ollama`.
Run As A Service
Linux (systemd)
```
# ~/.config/systemd/user/maelyclaw.service
[Unit]
Description=MaelyClaw Agent Engine
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/bin/maelyclaw start --config /home/you/.maelyclaw/maelyclaw.config.json
Restart=always
RestartSec=5
StandardOutput=append:/home/you/.maelyclaw/workspace/.maelyclaw/logs/maelyclaw.log
StandardError=append:/home/you/.maelyclaw/workspace/.maelyclaw/logs/maelyclaw.log

[Install]
WantedBy=default.target
```

```
systemctl --user enable maelyclaw
systemctl --user start maelyclaw
systemctl --user status maelyclaw
```

macOS (launchd)
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.maelyclaw.engine</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/path/to/maelyclaw/dist/index.js</string>
    <string>start</string>
    <string>--config</string>
    <string>/Users/you/.maelyclaw/maelyclaw.config.json</string>
  </array>
  <key>KeepAlive</key>
  <true/>
  <key>RunAtLoad</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/Users/you/.maelyclaw/workspace/.maelyclaw/logs/maelyclaw.log</string>
  <key>StandardErrorPath</key>
  <string>/Users/you/.maelyclaw/workspace/.maelyclaw/logs/maelyclaw.log</string>
</dict>
</plist>
```

Provider Notes
| Provider | Auth | Notes |
|----------|------|-------|
| Claude Code CLI | claude login | Uses the official Claude Agent SDK. claude auth status is the quickest sanity check. |
| Codex CLI | codex login | Authenticates locally. |
| OpenAI-compatible | Configured endpoint plus credentials as needed | Works with hosted endpoints and local daemons such as Ollama. |
MaelyClaw is not affiliated with Anthropic, OpenAI, Slack, or OpenClaw. Users are responsible for complying with provider terms.
Migrating From OpenClaw
See MIGRATION.md for a step-by-step guide. The short version: keep your agent workspace, point MaelyClaw at it, carry over Slack credentials, run in foreground first.
Troubleshooting
EADDRINUSE on the socket — rm /tmp/maelyclaw.sock
Slack won't connect — Check bot token scopes, app token has connections:write, botUserId and primaryChannel are correct.
Lanes don't run — Validate config/lane-contracts.json with python3 -m json.tool, then confirm any lane-level selection points at a resolvable provider/model pair.
maelyclaw-tool can't connect — Only the engine-management commands (status, spawn-worker, sessions-list, lane control, etc.) require the engine. The memory-* commands run directly from config. If an engine command fails, check that ipcSocketPath matches between config and CLI.
Need to debug what actually happened — Use logs before inferring from memory:
```
maelyclaw-tool logs-tail --lines 150
maelyclaw-tool logs-search --query "error" --level warn
maelyclaw-tool logs-search --subsystem memory-curation --since 1h
maelyclaw-tool logs-get --path maelyclaw.jsonl --line 42
```

Provider failures — Re-check local auth (`claude login`, `codex login`). Provider auth flows change over time.
Contributing
See CONTRIBUTING.md.
License
Apache-2.0
