@olhapi/maestro-linux-x64-gnu v0.1.14
Maestro CLI binary for Linux x64 (glibc)
Maestro
Maestro is a local-first orchestration runtime for agent-driven software work.
- Website: maestro.olhapi.com
- Repository: github.com/olhapi/maestro
- Docs: maestro.olhapi.com/docs
This project is inspired by openai/symphony.
It combines a SQLite-backed tracker, an orchestrator that reads WORKFLOW.md, a private MCP daemon bridged by maestro mcp, and an HTTP server that serves the embedded dashboard plus JSON/WebSocket APIs.
Maestro stays local-first. External work is translated into Maestro projects and issues through the CLI, the embedded dashboard, or MCP prompts, then supervised through the same local queue, runtime state, MCP tools, and dashboard surfaces.
Docs Website
The docs site is organized around the same operator flow the product uses:
- Install
- Quickstart
- Architecture
- Control center
- Workflow config
- Operations and observability
- CLI reference
- Real Codex E2E harness
Install
npm
Install the current public release on supported platforms:

```shell
npm install -g @olhapi/maestro
```

Install the newest prerelease instead:

```shell
npm install -g @olhapi/maestro@next
```

The installed command name is still `maestro`.
Official npm builds currently cover:
| Platform | Arch | Notes |
| --- | --- | --- |
| macOS | arm64 | native package |
| macOS | x64 | native package |
| Linux | x64 | glibc only |
| Linux | arm64 | glibc only |
| Windows | x64 | native package |
Linux npm packages currently target glibc only. Alpine and other musl-based distros should build from source or use Docker.
Docker
Published image:
```shell
docker pull ghcr.io/olhapi/maestro:latest
```

The image entrypoint is `maestro`. Its default command is `run --db /data/maestro.db`, so this starts the shared daemon with container defaults:

```shell
docker run --rm -v ./data:/data ghcr.io/olhapi/maestro:latest
```

To run against a mounted repo explicitly:

```shell
docker run --rm -v ./repo:/repo -v ./data:/data ghcr.io/olhapi/maestro:latest run --db /data/maestro.db /repo --port 8787
```

Build From Source
For local development or unsupported platforms:
```shell
go build -o maestro ./cmd/maestro
```

This build path is pure Go. You do not need a C compiler or a system SQLite development package for the standard `make build` / `make test` flow.
Local contributor Docker build:
```shell
docker build -t maestro-local .
```

Quick Start
1. Initialize a workflow file
```shell
maestro workflow init .
```

This writes a repo-local WORKFLOW.md with the default orchestration settings and prompt template.
2. Create a project and queue some work
```shell
maestro project create "My App" --repo /absolute/path/to/my-app --desc "Repo-wide Codex guidance: use pnpm, keep changes scoped, and run focused validation for touched packages."
maestro issue create "Add login page" --project <project_id> --labels feature,ui
maestro issue create "Fix auth bug" --project <project_id> --priority 1 --labels bug
maestro issue move ISS-1 ready
```

Projects use the local tracker only.
If you need to import work from another system, ask your MCP-capable agent to translate it into Maestro records. For example:
> Take my Jira issues from the "make a react todo app" epic and create the corresponding Maestro project, epics, and issues. Use the current repo as the project repo path, keep the issues local, and mark the imported work ready.

Project descriptions are not just dashboard notes. Maestro passes `project.description` into every implementation, review, and done prompt by default, so use it for standing requirements, conventions, and validation expectations Codex should keep in mind for every issue.
3. Start the daemon
```shell
maestro run
```

When `--db` is omitted, Maestro uses `~/.maestro/maestro.db` by default. When `--port` is omitted, Maestro serves HTTP on http://127.0.0.1:8787.
Running maestro run without repo_path starts the shared daemon for the current database. It does not infer the repo from your shell working directory.
Issue images are stored next to the active database under assets/images. With the default database path, that means ~/.maestro/assets/images. If you run with --db /custom/path/maestro.db, image assets move to /custom/path/assets/images.
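That path rule is simple to express. A hypothetical helper (illustrative only, not part of Maestro's API) would look like:

```python
import os


def assets_dir_for_db(db_path: str) -> str:
    """Return the image-asset directory that sits next to a Maestro database.

    Mirrors the documented rule: assets live under assets/images beside the
    active database file. Function name is illustrative, not Maestro API.
    """
    return os.path.join(os.path.dirname(db_path), "assets", "images")


print(assets_dir_for_db(os.path.expanduser("~/.maestro/maestro.db")))
print(assets_dir_for_db("/custom/path/maestro.db"))  # /custom/path/assets/images
```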
The preview warning on run is intentional. Pass --i-understand-that-this-will-be-running-without-the-usual-guardrails only when unattended Codex execution is actually what you want.
4. Install the Maestro skill bundle and add the MCP server to your coding agent
Install the bundled Maestro skill first so Codex and Claude Code can load the repo-specific guidance automatically:
```shell
maestro install --skills
```

That writes the skill to `~/.agents/skills/maestro` for Codex and `~/.claude/skills/maestro` for Claude Code.
Then use the setup path that matches your coding agent:
Codex:
```shell
codex mcp add maestro -- maestro mcp
```

Claude Code:

```shell
claude mcp add maestro -- maestro mcp
claude mcp add --scope project maestro -- maestro mcp
```

Other MCP-capable agents:
```json
{
  "mcpServers": {
    "maestro": {
      "command": "maestro",
      "args": ["mcp"]
    }
  }
}
```

If you built Maestro from source and did not add it to your PATH, replace `maestro` with the absolute path to the binary.
maestro mcp is a stdio bridge into the live maestro run daemon for the same database. Start maestro run first, then let your coding agent invoke maestro mcp.
Paginated MCP list tools return a pagination object when more results remain. When pagination.has_more is true, call the exact pagination.next_request payload to fetch the next batch instead of guessing the next offset by hand.
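The recommended loop can be sketched in Python. The `call_tool` callback and the stubbed issue-list tool below are illustrative stand-ins for your MCP client, not Maestro's own API; only the `pagination.has_more` / `pagination.next_request` shape comes from the README:

```python
def fetch_all(call_tool, first_request):
    """Drain a paginated MCP list tool by replaying pagination.next_request.

    When pagination.has_more is true, the next call uses the server-provided
    next_request payload verbatim instead of a hand-computed offset.
    """
    items, request = [], first_request
    while True:
        response = call_tool(request)
        items.extend(response["items"])
        pagination = response.get("pagination")
        if not pagination or not pagination.get("has_more"):
            return items
        request = pagination["next_request"]


# Stubbed tool that pages through five issues, two per call, for demonstration.
DATA = [f"ISS-{n}" for n in range(1, 6)]


def fake_issue_list(req):
    start = req.get("offset", 0)
    resp = {"items": DATA[start:start + 2]}
    if start + 2 < len(DATA):
        resp["pagination"] = {"has_more": True,
                              "next_request": {"offset": start + 2}}
    return resp


print(fetch_all(fake_issue_list, {}))  # all five issue ids, in order
```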
5. Open the dashboard or use live CLI helpers
By default, maestro run serves:
- the embedded dashboard on `/`
- the live observability API on `/api/v1/*`
- the dashboard application API on `/api/v1/app/*`
- the dashboard invalidation stream on `/api/v1/ws`
The shared issue composer in the embedded dashboard also supports browser speech-to-text for issue descriptions. In supported Chromium-based browsers it shows live interim text while you speak; elsewhere it degrades to a disabled control without changing the API surface.
Useful live helpers:
```shell
maestro status --dashboard --api-url http://127.0.0.1:8787
maestro sessions --api-url http://127.0.0.1:8787
maestro events --api-url http://127.0.0.1:8787 --limit 20
maestro runtime-series --api-url http://127.0.0.1:8787 --hours 24
maestro project start <project_id> --api-url http://127.0.0.1:8787
maestro project stop <project_id> --api-url http://127.0.0.1:8787
```

MCP, Run, and Dashboard Model
maestro run is the long-lived process for a given database. It starts:
- the local issue service and SQLite-backed store
- the orchestrator and agent runner
- a private MCP daemon used by `maestro mcp`
- the public HTTP server when `--port` is set or left at its default
maestro mcp does not start a separate orchestration server. It discovers the live daemon for the same --db and bridges that session over stdio for MCP clients.
Operationally:
- start `maestro run` first
- point `maestro mcp` at the same `--db`
- use `--api-url` for CLI helpers and live control commands that talk to the daemon over HTTP
Common Operator Commands
Queue inspection and filtering:
```shell
maestro issue list --state backlog --project <project_id>
maestro issue list --blocked --search auth --sort priority_asc
maestro board --project <project_id>
```

Issue updates:
```shell
maestro issue update ISS-1 --labels bug,urgent --priority 1
maestro issue update ISS-1 --branch codex/ISS-1 --pr-url https://example.com/pull/123
maestro issue blockers set ISS-1 ISS-2 ISS-3
maestro issue unblock ISS-1 ISS-2
```

Issue images:
```shell
maestro issue images add ISS-1 ./screenshots/failing-checkout.png
maestro issue images list ISS-1
maestro issue images remove ISS-1 <image_id>
```

Image attachments are local-only for every issue. Maestro accepts PNG, JPEG, WEBP, and GIF files up to 10 MiB each and serves them back through the local HTTP API and dashboard.
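If you want to pre-check attachments before calling `maestro issue images add`, a client-side sketch of the documented constraints might look like this (hypothetical helper, not part of the CLI; the daemon remains the authority):

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp", ".gif"}
MAX_BYTES = 10 * 1024 * 1024  # 10 MiB, per the documented per-file limit


def check_attachment(path, size=None):
    """Return a list of reasons a file would be rejected; empty if it looks OK."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format {ext or '(none)'}")
    size = os.path.getsize(path) if size is None else size
    if size > MAX_BYTES:
        problems.append(f"{size} bytes exceeds the 10 MiB limit")
    return problems


print(check_attachment("failing-checkout.png", size=512_000))  # []
print(check_attachment("trace.svg", size=1_000))  # ['unsupported format .svg']
```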
Recurring automation:
```shell
maestro issue create "Sync GitHub ready-to-work" \
  --project <project_id> \
  --type recurring \
  --cron "*/15 * * * *" \
  --desc "Check the GitHub project for issues labeled ready-to-work and create corresponding Maestro issues when they do not already exist."
maestro issue list --type recurring --wide
maestro issue run-now ISS-42 --api-url http://127.0.0.1:8787
```

Recurring issues are Maestro-native issues with a cron schedule in the daemon host's local timezone. The orchestrator will enqueue at most one catch-up run after downtime, will not overlap active runs, and coalesces extra schedule hits into a single pending rerun.
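The no-overlap and coalescing rules can be modeled as a small state machine. This is an illustrative model of the documented behavior, not Maestro's scheduler code:

```python
class RecurringIssue:
    """Models one recurring issue: never overlap an active run, and coalesce
    extra schedule hits during a run into a single pending rerun."""

    def __init__(self):
        self.active = False
        self.pending_rerun = False

    def on_schedule_hit(self):
        if self.active:
            # Coalesce: many hits during an active run still mean one rerun.
            self.pending_rerun = True
            return "coalesced"
        self.active = True
        return "started"

    def on_run_finished(self):
        self.active = False
        if self.pending_rerun:
            self.pending_rerun = False
            return self.on_schedule_hit()  # exactly one catch-up run
        return "idle"


issue = RecurringIssue()
print(issue.on_schedule_hit())   # started
print(issue.on_schedule_hit())   # coalesced
print(issue.on_schedule_hit())   # coalesced
print(issue.on_run_finished())   # started (the single pending rerun)
print(issue.on_run_finished())   # idle
```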
Readiness checks:
```shell
maestro verify --repo /absolute/path/to/my-app
maestro doctor --repo /absolute/path/to/my-app
maestro spec-check --repo /absolute/path/to/my-app
```

Workflow Basics
WORKFLOW.md is the repo-local source of truth for orchestration behavior. It covers:
- tracker settings
- workspace root
- hook commands and timeout
- agent concurrency, mode, retry limits, and dispatch behavior
- optional review and done phase prompts
- Codex command and sandbox settings
- the prompt template rendered for each issue
Fresh `maestro workflow init --defaults` output currently defaults to:

- `tracker.kind: kanban`
- `polling.interval_ms: 10000`
- `workspace.root: ~/.maestro/worktrees`
- `agent.max_concurrent_agents: 3`
- `agent.max_turns: 4`
- `agent.max_retry_backoff_ms: 60000`
- `agent.max_automatic_retries: 8`
- `agent.mode: app_server`
- `agent.dispatch_mode: parallel`
- `codex.command: codex app-server`
- `codex.approval_policy: never`
- `codex.initial_collaboration_mode: default` for fresh `app_server` threads
- `phases.review.enabled: true`
- `phases.done.enabled: true`
- runtime permission profiles now live in the DB per project/issue instead of WORKFLOW.md
If you do not want to preinstall Codex, Maestro can automatically fall back to a pinned `npx -y @openai/codex@<pinned-version> app-server` form when the configured direct Codex command does not match the supported schema version.
initial_collaboration_mode: default keeps unattended runs execution-first for a fresh app_server thread. Use plan only when you explicitly want a plan-gated startup mode. Interactive approvals and requestUserInput prompts still depend on using a non-never approval policy, and those prompts are queued through the dashboard's global interrupt panel. Resumed threads and stdio runs do not use that startup-mode path.
codex.approval_policy: never applies to Maestro-managed app-server turns. It does not automatically suppress Codex's separate trust prompts for attached external MCP servers such as maestro mcp; those prompts still depend on the MCP client's local trust settings and the tool annotations advertised by the server.
Interactive maestro workflow init now walks through workspace.root, codex.command, agent.mode, agent.dispatch_mode, agent.max_concurrent_agents, agent.max_turns, and agent.max_automatic_retries, then asks for codex.approval_policy and codex.initial_collaboration_mode only for app_server.
Enum prompts now render numbered menus. You can press Enter to keep the default, or enter the number, an alias, a unique prefix, or the full value. Examples: server for app_server, serial or pps for per_project_serial, req for on-request, and def for default. Ambiguous prefixes such as on are rejected and reprompted.
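That matching behavior can be modeled roughly like this. The option list and alias table below are illustrative, built from the README's examples rather than Maestro's source:

```python
# Hypothetical alias table, from the README's examples.
ALIASES = {"server": "app_server", "serial": "per_project_serial",
           "pps": "per_project_serial", "req": "on-request", "def": "default"}


def resolve(user_input, options):
    """Resolve a menu answer: empty keeps the default (None), otherwise accept
    a menu number, a full value, a listed alias, or a unique prefix."""
    text = user_input.strip().lower()
    if not text:
        return None  # caller keeps the default
    if text.isdigit() and 1 <= int(text) <= len(options):
        return options[int(text) - 1]
    if text in options:
        return text
    if text in ALIASES and ALIASES[text] in options:
        return ALIASES[text]
    matches = [o for o in options if o.startswith(text)]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"ambiguous or unknown input: {user_input!r}")


# "on-failure" is assumed here purely to demonstrate an ambiguous prefix.
modes = ["default", "plan", "on-request", "on-failure"]
print(resolve("def", modes))  # default
print(resolve("pl", modes))   # plan
# resolve("on", modes) raises ValueError: both on-request and on-failure match
```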
--defaults remains the stable scripted path, and the same setup knobs are available as flags: --workspace-root, --codex-command, --agent-mode, --dispatch-mode, --max-concurrent-agents, --max-turns, --max-automatic-retries, --approval-policy, and --initial-collaboration-mode.
Supported prompt-template variables are:

- `{{ issue.identifier }}`
- `{{ issue.title }}`
- `{{ issue.description }}`
- `{{ issue.state }}`
- `{{ project.id }}`
- `{{ project.name }}`
- `{{ project.description }}`
- `{{ phase }}`
- `{{ attempt }}`
When a project has a description, Maestro's default implementation, review, and done prompts include it automatically under a Project context: section. Custom workflows can place {{ project.description }} wherever they want.
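As a rough sketch of how that substitution behaves (a minimal renderer for the variables listed above, not Maestro's actual template engine):

```python
import re


def render(template: str, values: dict) -> str:
    """Replace {{ dotted.name }} placeholders; unknown names are left intact."""
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}",
                  lambda m: str(values.get(m.group(1), m.group(0))),
                  template)


prompt = render(
    "[{{ issue.identifier }}] {{ issue.title }} (attempt {{ attempt }})\n"
    "Project context: {{ project.description }}",
    {"issue.identifier": "ISS-1", "issue.title": "Add login page",
     "attempt": 1, "project.description": "use pnpm, keep changes scoped"},
)
print(prompt)
```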
The default done prompt now focuses on merge-back, worktree cleanup, PR readiness, and blocker reporting instead of asking for a preview artifact.
The checked-in WORKFLOW.md is this repository's own workflow example. It is not guaranteed to match fresh workflow init defaults exactly.
Missing-file behavior differs by command:
- `maestro workflow init` creates WORKFLOW.md explicitly; `maestro init` is a root-level alias for the same command
- `maestro run` bootstraps a missing file automatically
- `maestro verify` checks readiness and returns remediation guidance
- `maestro doctor` runs the same readiness checks with different presentation
- `maestro spec-check` is non-mutating and fails if the workflow file is missing or invalid
More Documentation
- `docs/OPERATIONS.md`: runtime surfaces, HTTP endpoints, extension tools, logs, and operational details
- `docs/NPM_RELEASE.md`: first npm prerelease bootstrap and the trusted-publishing release flow
- `docs/E2E_REAL_CODEX.md`: end-to-end harness that runs the real Codex CLI against deterministic issues
- `WORKFLOW.md`: the workflow configuration used by this repository
Contributor Setup
If you are contributing from a repo checkout, run the root install once:
```shell
pnpm install
```

That single install:

- installs the repo-managed Git hooks through Husky
- bootstraps the shared `pnpm` workspace across `apps/frontend` and `apps/website`
- makes the root workspace scripts available for common local tasks
If you want shared cache hits across machines, Turborepo supports Remote Cache out of the box:
```shell
pnpm exec turbo login
pnpm exec turbo link
```

Common contributor commands:
```shell
make build
make test
pnpm verify
pnpm run website:dev
pnpm run website:check
```

The standard Go build/test path is now toolchain-light. The remaining `sqlite3` CLI usage lives in a few optional shell scripts under `scripts/`.
Repo-managed Git hooks stay targeted:
- staged Go changes run package-scoped Go tests
- staged frontend changes run frontend lint and tests
- staged website changes run Astro checks and website tests
- staged workspace and hook changes run the full `pnpm verify` suite
- `pnpm verify` runs the Go build/test/coverage/race gates first, then the JS lint/test/check/smoke flow and the npm packaging unit test
- `pnpm run verify:pre-push` adds current-host npm packaging smoke, the shared retry stress test, and the full retry-safety harness on top of `pnpm verify`
- package-scoped root commands such as `pnpm run frontend:test` and `pnpm run website:build` now go through `turbo --filter=...`, so they benefit from task caching too
- the `pre-push` hook now runs `pnpm run verify:pre-push`, keeping the Go/agent suite local before the web gates while GitHub Actions handles the lean web/package checks and nightly Codex E2E coverage
License
MIT
