@jaggerxtrm/specialists
v3.2.0
OmniSpecialist — 8-tool MCP orchestration layer powered by the Specialist System. Discover and execute .specialist.yaml files across project and user scopes via pi.
Specialists
One MCP Server. Many Specialists. Real AI Agents.
Specialists is a Model Context Protocol (MCP) server that lets Claude discover and delegate to specialist agents — each a full autonomous coding agent powered by pi, scoped to a specific task.
Designed for agents, not users. Claude autonomously routes heavy tasks (code review, bug hunting, deep reasoning, session init) to the right specialist. In v3, specialists run as background CLI processes — zero polling overhead, notifications on completion.
How it works
```
┌──────────────────────────────────────────────────┐
│                   Claude Code                    │
│                                                  │
│  MCP (control plane)      CLI (execution plane)  │
│  ───────────────────      ─────────────────────  │
│  specialist_init          specialists run \      │
│  list_specialists           <name> --background  │
│  use_specialist           specialists result \   │
│  specialist_status          <id>                 │
└──────────────────────────────────────────────────┘
                 ↓ file-based job state

          .specialists/jobs/<id>/
          status.json   result.txt   events.jsonl
```
Specialists are .specialist.yaml files discovered across two scopes:
| Scope | Location | Purpose |
|-------|----------|---------|
| project | ./specialists/ | Per-project specialists |
| user | ~/.agents/specialists/ | Built-in defaults (copied on install) + your own |
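Discovery is easy to picture. A minimal sketch, not the server's actual implementation: scan the user scope, then the project scope, so that a project-level file with the same name shadows the user-level one (that precedence order is an assumption, not documented behaviour):

```typescript
import * as fs from "fs";
import * as path from "path";
import * as os from "os";

// Hypothetical discovery sketch: scan user scope first, then project scope,
// so a later (project) entry overwrites an earlier (user) one of the same name.
function discoverSpecialists(projectDir: string): Map<string, string> {
  const scopes = [
    path.join(os.homedir(), ".agents", "specialists"), // user scope
    path.join(projectDir, "specialists"),              // project scope
  ];
  const found = new Map<string, string>(); // name -> file path
  for (const dir of scopes) {
    if (!fs.existsSync(dir)) continue;
    for (const file of fs.readdirSync(dir)) {
      if (!file.endsWith(".yaml") && !file.endsWith(".yml")) continue;
      const name = file.replace(/\.specialist\.yaml$|\.ya?ml$/, "");
      found.set(name, path.join(dir, file)); // later scope wins
    }
  }
  return found;
}
```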
When a specialist runs, the server spawns a pi subprocess with the right model, tools, and system prompt injected. For background jobs, a Supervisor writes job state to disk — status, events, and final output — so Claude gets a one-shot notification on completion instead of polling.
Background Jobs (v3)
The primary workflow for long-running specialists:
```shell
# Start in background — returns immediately
specialists run overthinker --prompt "Refactor strategy?" --background
# → Job started: a1b2c3

# Check progress
specialists status
# → Active Jobs
#    a1b2c3  overthinker  running  1m12s  tool: bash

# Stream events live
specialists feed --job a1b2c3 --follow

# Get result when done
specialists result a1b2c3

# Cancel
specialists stop a1b2c3
```
When a background job completes, Claude's next prompt automatically receives a banner:
```
[Specialist 'overthinker' completed (job a1b2c3, 87s). Run: specialists result a1b2c3]
```
Job files live in .specialists/jobs/<id>/ (gitignored by specialists init):
| File | Contents |
|------|---------|
| status.json | id, specialist, status, model, backend, pid, elapsed_s, bead_id, error |
| events.jsonl | thinking_start, toolcall_start, tool_execution_end, agent_end |
| result.txt | Final assistant output |
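Because job state is plain files, any tool can inspect it. A hedged sketch of reading a job from its directory, with field names taken from the table above (status values beyond it are assumptions):

```typescript
import * as fs from "fs";
import * as path from "path";

interface JobStatus {
  id: string;
  specialist: string;
  status: string;       // e.g. "running" or "done" (exact values assumed)
  model?: string;
  backend?: string;
  pid?: number;
  elapsed_s?: number;
  bead_id?: string;
  error?: string;
}

// Sketch: read a job's state from .specialists/jobs/<id>/ — the same files
// the Supervisor writes. result.txt exists only once the job has finished.
function readJob(jobsDir: string, id: string): { status: JobStatus; result?: string } {
  const dir = path.join(jobsDir, id);
  const status: JobStatus = JSON.parse(
    fs.readFileSync(path.join(dir, "status.json"), "utf8"),
  );
  const resultPath = path.join(dir, "result.txt");
  const result = fs.existsSync(resultPath)
    ? fs.readFileSync(resultPath, "utf8")
    : undefined;
  return { status, result };
}
```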
MCP Tools (8)
| Tool | Description |
|------|-------------|
| specialist_init | Session bootstrap: init beads if needed, return available specialists |
| list_specialists | Discover all available specialists across scopes |
| use_specialist | Run a specialist synchronously and return the result |
| specialist_status | Circuit breaker health + background job summary |
| start_specialist | (deprecated v3) Async job via in-memory registry — use CLI instead |
| poll_specialist | (deprecated v3) Poll in-memory job — use CLI instead |
| stop_specialist | (deprecated v3) Kill in-memory job — use specialists stop <id> |
| run_parallel | (deprecated v3) Concurrent in-memory jobs — use CLI --background |
For production use: use_specialist for short synchronous tasks, CLI --background for anything that takes more than a few seconds.
Built-in Specialists
| Specialist | Model | Purpose |
|-----------|-------|---------|
| init-session | Haiku | Analyse git state, recent commits, surface relevant context |
| codebase-explorer | Gemini Flash | Architecture analysis, directory structure, patterns |
| overthinker | Sonnet | 4-phase deep reasoning: analysis → critique → synthesis → output |
| parallel-review | Sonnet | Concurrent code review across multiple focus areas |
| bug-hunt | Sonnet | Autonomous bug investigation from symptoms to root cause |
| feature-design | Sonnet | Turn feature requests into structured implementation plans |
| auto-remediation | Gemini Flash | Apply fixes to identified issues automatically |
| report-generator | Haiku | Synthesise data/analysis results into structured markdown |
| test-runner | Haiku | Run tests, parse results, surface failures |
Permission Tiers
| Tier | pi tools | Use case |
|------|---------|----------|
| READ_ONLY | read, bash, grep, find, ls | Analysis, exploration |
| LOW | read, bash, edit, write, grep, find, ls | Code modifications |
| MEDIUM | read, bash, edit, write, grep, find, ls | Code modifications + git |
| HIGH | read, bash, edit, write, grep, find, ls | Full autonomy |
Permission is enforced at spawn time via pi --tools, not just in the system prompt.
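Spawn-time enforcement amounts to translating the tier into a tool allow-list before the subprocess starts. An illustrative sketch of that mapping, using the tiers from the table (the exact argument shape passed to pi is an assumption):

```typescript
// Tier -> pi tool allow-list, taken from the Permission Tiers table.
const TIER_TOOLS: Record<string, string[]> = {
  READ_ONLY: ["read", "bash", "grep", "find", "ls"],
  LOW: ["read", "bash", "edit", "write", "grep", "find", "ls"],
  MEDIUM: ["read", "bash", "edit", "write", "grep", "find", "ls"],
  HIGH: ["read", "bash", "edit", "write", "grep", "find", "ls"],
};

// Build the argument list for the pi subprocess. The specialist only ever
// sees the tools its tier allows, regardless of what its system prompt says.
function piArgs(tier: string, model: string): string[] {
  const tools = TIER_TOOLS[tier];
  if (!tools) throw new Error(`unknown tier: ${tier}`);
  return ["--model", model, "--tools", tools.join(",")];
}
```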
Beads Integration
Specialists with write permissions automatically create a beads issue and close it on completion. Control this per-specialist:
```yaml
beads_integration: auto    # default — create for LOW/MEDIUM/HIGH
beads_integration: always  # always create
beads_integration: never   # never create
```
The bead_id is written to status.json so you can link issues for follow-up.
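The decision logic implied by the three modes can be sketched in a few lines, assuming "auto" keys off the write-capable permission tiers:

```typescript
type BeadsMode = "auto" | "always" | "never";

// Sketch of the per-specialist beads decision: "auto" creates an issue only
// for tiers that can modify files (LOW/MEDIUM/HIGH), per the default above.
function shouldCreateBead(mode: BeadsMode, tier: string): boolean {
  if (mode === "always") return true;
  if (mode === "never") return false;
  return ["LOW", "MEDIUM", "HIGH"].includes(tier); // auto
}
```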
Installation
Recommended
```shell
npm install -g @jaggerxtrm/specialists
specialists install
```
Installs: pi (@mariozechner/pi-coding-agent), beads (@beads/bd), dolt, registers the specialists MCP at user scope, scaffolds ~/.agents/specialists/, copies built-in specialists, and installs five Claude Code hooks:
| Hook | Event | Enforces |
|------|-------|---------|
| specialists-main-guard.mjs | PreToolUse | No direct edits/commits on main/master |
| beads-edit-gate.mjs | PreToolUse | No file edits without an in_progress beads issue |
| beads-commit-gate.mjs | PreToolUse | No git commit while issues are in_progress |
| beads-stop-gate.mjs | Stop | Agent cannot stop with unresolved issues |
| specialists-complete.mjs | UserPromptSubmit | Injects background job completion banners |
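The completion banner comes from the UserPromptSubmit hook scanning the job directories. A simplified sketch of that idea, not the shipped specialists-complete.mjs (the "notified" bookkeeping is an assumption):

```typescript
import * as fs from "fs";
import * as path from "path";

// Sketch: scan .specialists/jobs/ for completed jobs not yet announced and
// build the banner text a UserPromptSubmit hook would inject.
function completionBanners(jobsDir: string, notified: Set<string>): string[] {
  if (!fs.existsSync(jobsDir)) return [];
  const banners: string[] = [];
  for (const id of fs.readdirSync(jobsDir)) {
    if (notified.has(id)) continue;
    const statusPath = path.join(jobsDir, id, "status.json");
    if (!fs.existsSync(statusPath)) continue;
    const s = JSON.parse(fs.readFileSync(statusPath, "utf8"));
    if (s.status !== "done") continue; // still running, or errored
    banners.push(
      `[Specialist '${s.specialist}' completed (job ${id}, ${s.elapsed_s}s). ` +
      `Run: specialists result ${id}]`,
    );
    notified.add(id); // announce each job once
  }
  return banners;
}
```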
After running, restart Claude Code to load the MCP. Re-run specialists install at any time to update or repair.
One-time (no global install)
```shell
npx --package=@jaggerxtrm/specialists install
```
Writing a Specialist
Create a .yaml file in ./specialists/ (project scope) or ~/.agents/specialists/ (user scope):
```yaml
specialist:
  metadata:
    name: my-specialist
    version: 1.0.0
    description: "What this specialist does."
    category: analysis
    tags: [analysis, example]
    updated: "2026-03-11"
  execution:
    mode: tool
    model: anthropic/claude-haiku-4-5
    fallback_model: google-gemini-cli/gemini-3-flash-preview
    timeout_ms: 120000
    response_format: markdown
    permission_required: READ_ONLY
  prompt:
    system: |
      You are a specialist that does X.
      Produce a structured markdown report.
    task_template: |
      $prompt
    # Inject a single skill file into the system prompt
    skill_inherit: ~/.agents/skills/my-domain-knowledge.md
  communication:
    output_to: .specialists/my-specialist-result.md  # optional file sink
  skills:
    # Run scripts before/after the specialist
    scripts:
      - path: ./scripts/health-check.sh
        phase: pre             # runs before the task prompt
        inject_output: true    # output available as $pre_script_output
      - path: ./scripts/cleanup.sh
        phase: post
    # Inject multiple skill/context files into the system prompt (v3)
    paths:
      - ~/skills/domain-context.md
      - ./specialists/shared/conventions.md
```
Model IDs use the full provider/model format: anthropic/claude-sonnet-4-6, google-gemini-cli/gemini-3-flash-preview, anthropic/claude-haiku-4-5.
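Splitting a full model ID into its provider and model parts is a one-liner; a sketch of that parse (the validation behaviour is an assumption):

```typescript
// Sketch: split "provider/model" into its two parts, e.g. so the server
// can route to the right pi backend. Only the first slash is significant.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash < 0) throw new Error(`expected provider/model, got: ${id}`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}
```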
CLI
| Command | Description |
|---------|-------------|
| specialists install | Full-stack installer: pi, beads, dolt, MCP, hooks |
| specialists init | Scaffold ./specialists/, .specialists/, update .gitignore, inject AGENTS.md block |
| specialists list | List discovered specialists with model, description, scope |
| specialists models | List models available on pi with capability flags |
| specialists edit <name> --<field> <value> | Edit a specialist field in-place |
| specialists run <name> | Run a specialist (foreground by default) |
| specialists run <name> --background | Start as background job, print job ID |
| specialists result <id> | Print result of a completed background job |
| specialists feed --job <id> [--follow] | Tail events.jsonl; --follow streams live |
| specialists stop <id> | Send SIGTERM to a running background job |
| specialists status | System health + active background jobs |
| specialists version | Print installed version |
| specialists help | Show command reference |
specialists run
```shell
# Foreground — streams output to stdout
specialists run init-session --prompt "What changed recently?"

# Background — returns job ID immediately
specialists run overthinker --prompt "Refactor?" --background

# Background with model override, no beads
specialists run bug-hunt --prompt "TypeError in auth" --background \
  --model anthropic/claude-sonnet-4-6 --no-beads

# Pipe from stdin
echo "Analyse the architecture" | specialists run codebase-explorer
```
specialists status
```
specialists status

── Specialists ───────────────────────────
  ✓ 9 found (9 project)
── pi (coding agent runtime) ─────────────
  ✓ v0.57.1 — 4 providers active (anthropic, google-gemini-cli, qwen, zai)
── beads (issue tracker) ─────────────────
  ✓ bd installed v0.59.0
  ✓ .beads/ present in project
── MCP ───────────────────────────────────
  ✓ specialists binary installed /usr/local/bin/specialists
── Active Jobs ───────────────────────────
  a1b2c3  overthinker   running  1m12s  tool: bash
  g7h8i9  init-session  done     0m08s
```
Development
```shell
git clone https://github.com/Jaggerxtrm/specialists.git
cd specialists
bun install
bun run build   # bun build src/index.ts --target=node --outfile=dist/index.js
bun test        # bun --bun vitest run
```
See CLAUDE.md for the full architecture guide.
License
MIT
