aidevops
v2.170.2
AI DevOps Framework - AI-assisted development workflows, code quality, and deployment automation
AI DevOps Framework
aidevops.sh — An AI operations platform for launching and managing development, business, marketing, and creative projects. 11 specialist AI agents handle the automatable work across every domain so your time is preserved for real-world discovery and decisions that AI cannot yet reach.
"Scope a mission to redesign the landing pages — break it into milestones, dispatch workers in parallel, validate each milestone, and track budget across the whole project" - One conversation, autonomous multi-day project execution.
The Philosophy
Maximum value for your time and money. aidevops is built on these principles:
- Autonomous orchestration - An AI supervisor runs every 2 minutes, dispatching parallel workers, merging PRs, detecting stuck processes, and advancing multi-day missions — no human babysitting required
- Multi-domain agents - 11 specialist agents (code, SEO, marketing, content, legal, sales, research, video, business, accounts, health) with 780+ subagents loaded on demand
- Multi-model safety - High-stakes operations (force push, production deploy, data migration) are verified by a second cross-provider model before execution — different providers have different failure modes, so correlated hallucinations are rare
- Resource efficiency - Cost-aware model routing (local → haiku → flash → sonnet → pro → opus), project-type bundles that auto-configure quality gates and model tiers, budget tracking with burn-rate analysis
- Self-healing - When something breaks, diagnose the root cause, create tasks, and fix it. Every error is a live test case for a permanent solution
- Self-improving - When patterns of failure or inefficiency emerge, improve the framework itself. Session mining extracts learnings from past sessions automatically
- Gap awareness - Every session is an opportunity to identify what's missing — gaps in automation, documentation, coverage, or processes — and create tasks to fill them
- Git-first workflow - Protected branches, PR reviews, quality gates before merge. Sane vibe-coding through structure
- Parallel agents - Multiple AI sessions running full Ralph loops on separate branches via git worktrees
- Progressive discovery - `/slash-commands` and `@subagent` mentions load knowledge into context only when needed
- Open-source ready - Contribute to any project the same way you work on your own. Clone a repo, develop solutions to issues locally, and submit pull requests — the same full-loop workflow works everywhere
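The tier ladder in the resource-efficiency principle can be sketched as a tiny routing function. This is a hypothetical illustration only (tier names come from this README; the real guidance lives in model-routing.md and the `/route` command):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of cost-aware tier routing: map a rough
# task-complexity score (1-6) to the cheapest adequate model tier.
set -eu

route_tier() {
  case "$1" in
    1) echo "local" ;;   # trivial/offline work: free local model
    2) echo "haiku" ;;   # simple edits, classification
    3) echo "flash" ;;   # routine coding tasks
    4) echo "sonnet" ;;  # substantial implementation work
    5) echo "pro" ;;     # complex reasoning
    *) echo "opus" ;;    # highest-stakes planning and review
  esac
}

route_tier 2   # haiku
route_tier 5   # pro
```

The point of the ladder is that most tasks never need the expensive end: escalate only when a cheaper tier is inadequate.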
The result: an AI operations platform that manages projects across every business domain — absorbing everything automatable so you can focus on what matters.
Built on proven patterns: aidevops implements industry-standard agent design patterns - including multi-layer action spaces, context isolation, and iterative execution loops.
Why This Framework?
Beyond single-task AI. Most AI coding tools handle one conversation, one repo, one task at a time. aidevops manages your entire operation — dispatching parallel AI agents across multiple repos, routing tasks to domain-specialist agents, and running autonomously for days on multi-milestone projects.
What makes it different:
- Autonomous supervisor - AI pulse runs every 2 minutes: merges ready PRs, dispatches workers, kills stuck processes, advances missions, triages quality findings — no human in the loop
- Cross-domain intelligence - 11 agents spanning code, business, marketing, legal, sales, content, video, research, SEO, health, and accounts — each with domain expertise and specialist subagents
- Multi-model safety - Destructive operations verified by a second AI model from a different provider before execution
- 30+ service integrations - Hosting, Git platforms, DNS, security, monitoring, deployment, payments, communications
- Mission orchestration - Multi-day autonomous projects broken into milestones with validation, budget tracking, and automatic advancement
Quick Reference
- Purpose: AI-assisted DevOps automation framework
- Install: `npm install -g aidevops && aidevops update`
- Entry: `aidevops` CLI, `~/.aidevops/agents/AGENTS.md`
- Stack: Bash scripts, TypeScript (Bun), MCP servers
Key Commands
- `aidevops init` - Initialize in any project
- `aidevops update` - Update framework
- `aidevops auto-update` - Automatic update polling (enable/disable/status)
- `aidevops secret` - Manage secrets (gopass encrypted, AI-safe)
- `/onboarding` - Interactive setup wizard (in AI assistant)
Agent Structure
- 11 primary agents (Build+, SEO, Marketing, etc.) with specialist @subagents on demand
- 780+ subagent markdown files organized by domain
- 290+ helper scripts in `.agents/scripts/`
- 58 slash commands for common workflows
Enterprise-Grade Quality & Security
A comprehensive DevOps framework with tried-and-tested service integrations, popular and trusted MCP servers, and enterprise-grade code-quality monitoring and recommendations for your infrastructure.
Security Notice
This framework provides agentic AI assistants with powerful infrastructure access. Use responsibly.
Capabilities: Execute commands, access credentials, modify infrastructure, interact with APIs.
Your responsibility: Use trusted AI providers, rotate credentials regularly, monitor activity.
Quick Start
Installation Options
npm (recommended - verified provenance):
```
npm install -g aidevops && aidevops update
```
Note: npm suppresses postinstall output. The `&& aidevops update` deploys agents to `~/.aidevops/agents/`. The CLI will remind you if agents need updating.

Bun (fast alternative):
```
bun install -g aidevops && aidevops update
```
Homebrew (macOS/Linux):
```
brew install marcusquinn/tap/aidevops && aidevops update
```
Direct from source (aidevops.sh):
```
bash <(curl -fsSL https://aidevops.sh/install)
```
Manual (git clone):
```
git clone https://github.com/marcusquinn/aidevops.git ~/Git/aidevops
~/Git/aidevops/setup.sh
```
That's it! The setup script will:
- Clone/update the repo to `~/Git/aidevops`
- Deploy agents to `~/.aidevops/agents/`
- Install the `aidevops` CLI command
- Configure your AI assistants automatically
- Offer to install Oh My Zsh (optional, opt-in) for enhanced shell experience
- Guide you through recommended tools (Tabby, Zed, Git CLIs)
- Ensure all PATH and alias changes work in both bash and zsh
New users: Start OpenCode and type /onboarding to configure your services interactively. OpenCode is the recommended tool for aidevops - all features, agents, and workflows are designed and tested for it first. The onboarding wizard will:
- Explain what aidevops can do
- Ask about your work to give personalized recommendations
- Show which services are configured vs need setup
- Guide you through setting up each service with links and commands
After installation, use the CLI:
```
aidevops status            # Check what's installed
aidevops update            # Update framework + check registered projects
aidevops auto-update       # Manage automatic update polling (every 10 min)
aidevops init              # Initialize aidevops in any project
aidevops features          # List available features
aidevops repos             # List/add/remove registered projects
aidevops detect            # Scan for unregistered aidevops projects
aidevops upgrade-planning  # Upgrade TODO.md/PLANS.md to latest templates
aidevops update-tools      # Check and update installed tools
aidevops uninstall         # Remove aidevops
```
Project tracking: When you run `aidevops init`, the project is automatically registered in `~/.config/aidevops/repos.json`. Running `aidevops update` checks all registered projects for version updates.
Use aidevops in Any Project
Initialize aidevops features in any git repository:
```
cd ~/your-project
aidevops init                        # Enable all features
aidevops init planning               # Enable only planning
aidevops init planning,time-tracking # Enable specific features
```
This creates:
- `.aidevops.json` - Configuration with enabled features
- `.agents` symlink → `~/.aidevops/agents/`
- `TODO.md` - Quick task tracking with time estimates
- `todo/PLANS.md` - Complex execution plans
- `.beads/` - Task graph database (if beads enabled)
Available features: planning, git-workflow, code-quality, time-tracking, beads
Upgrade Planning Files
When aidevops templates evolve, upgrade existing projects to the latest format:
```
aidevops upgrade-planning           # Interactive upgrade with backup
aidevops upgrade-planning --dry-run # Preview changes without modifying
aidevops upgrade-planning --force   # Skip confirmation prompt
```
This preserves your existing tasks while adding TOON-enhanced parsing, dependency tracking, and better structure.
Automatic detection: aidevops update now scans all registered projects for outdated planning templates (comparing TOON meta version numbers) and offers to upgrade them in-place with backups.
Task Graph Visualization with Beads
Beads provides task dependency tracking and graph visualization:
```
aidevops init beads # Enable beads (includes planning)
```
Task Dependencies:
```
- [ ] t001 First task
- [ ] t002 Second task blocked-by:t001
- [ ] t001.1 Subtask of t001
```
| Syntax | Meaning |
|--------|---------|
| blocked-by:t001 | Task waits for t001 to complete |
| blocks:t002 | This task blocks t002 |
| t001.1 | Subtask of t001 (hierarchical) |
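The blocker semantics above can be demonstrated with a small parser. This is an illustrative sketch only, not the real `/ready` implementation: it treats a task as ready when it has no `blocked-by:` annotation, or when its blocker is no longer an open `- [ ]` entry.

```shell
#!/usr/bin/env bash
# Sketch: list task IDs with no open blockers (mimics /ready semantics).
set -eu

ready_tasks() {
  local todo="$1" open_ids line id dep
  # IDs of all still-open tasks
  open_ids=$(printf '%s\n' "$todo" | sed -n 's/^- \[ \] \(t[0-9.]*\).*/\1/p')
  while IFS= read -r line; do
    case "$line" in "- [ ]"*) ;; *) continue ;; esac   # skip done tasks
    id=$(printf '%s' "$line" | sed 's/^- \[ \] \(t[0-9.]*\).*/\1/')
    dep=$(printf '%s' "$line" | sed -n 's/.*blocked-by:\(t[0-9.]*\).*/\1/p')
    # Ready if no blocker, or the blocker is no longer open
    if [ -z "$dep" ] || ! printf '%s\n' "$open_ids" | grep -qx "$dep"; then
      printf '%s\n' "$id"
    fi
  done <<< "$todo"
}

todo='- [ ] t001 First task
- [ ] t002 Second task blocked-by:t001
- [x] t003 Done task
- [ ] t004 Unblocked task blocked-by:t003'

ready_tasks "$todo"   # t001 and t004 (t002 waits on open t001)
```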
Commands:
| Command | Purpose |
|---------|---------|
| /ready | Show tasks with no open blockers |
| /list-verify | List verification queue (pending, passed, failed) |
| /sync-beads | Sync TODO.md/PLANS.md with Beads graph |
| bd list | List all tasks in Beads |
| bd ready | Show ready tasks (Beads CLI) |
| bd graph <id> | Show dependency graph for an issue |
Architecture: aidevops markdown files (TODO.md, PLANS.md) are the source of truth. Beads syncs from them for visualization.
Optional Viewers: Beyond the bd CLI, there are community viewers for richer visualization:
- `beads_viewer` (Python TUI) - PageRank, critical path analysis
- `beads-ui` (Web) - Live updates in browser
- `bdui` (React/Ink TUI) - Modern terminal UI
- `perles` (Rust TUI) - BQL query language
See .agents/tools/task-management/beads.md for complete documentation and installation commands.
Your AI assistant now has agentic access to 30+ service integrations.
OpenCode Anthropic OAuth (Built-in)
OpenCode v1.1.36+ includes Anthropic OAuth authentication natively. No external plugin is needed.
After setup, authenticate:
```
opencode auth login
# Select: Anthropic → Claude Pro/Max
# Follow OAuth flow in browser
```
Benefits:
- Zero cost for Claude Pro/Max subscribers (covered by subscription)
- Automatic token refresh - No manual re-authentication needed
- Beta features enabled - Extended thinking modes and latest features
GitHub AI Agent Integration
Enable AI-powered issue resolution directly from GitHub. Comment /oc fix this on any issue and the AI creates a branch, implements the fix, and opens a PR.
Security-first design - The workflow includes:
- Trusted users only (OWNER/MEMBER/COLLABORATOR)
- `ai-approved` label required on issues before AI processing
- Prompt injection pattern detection
- Audit logging of all invocations
- 15-minute timeout and rate limiting
Quick setup:
```
# 1. Install the OpenCode GitHub App
#    Visit: https://github.com/apps/opencode-agent

# 2. Add API key secret
#    Repository → Settings → Secrets → ANTHROPIC_API_KEY

# 3. Create required labels
gh label create "ai-approved" --color "0E8A16" --description "Issue approved for AI agent"
gh label create "security-review" --color "D93F0B" --description "Requires security review"
```
The secure workflow is included at `.github/workflows/opencode-agent.yml`.
Usage:
| Context | Command | Result |
|---------|---------|--------|
| Issue (with ai-approved label) | /oc fix this | Creates branch + PR |
| Issue | /oc explain this | AI analyzes and replies |
| PR | /oc review this PR | Code review feedback |
| PR Files tab | /oc add error handling here | Line-specific fix |
See .agents/tools/git/opencode-github-security.md for the full security documentation.
Supported AI Assistant: OpenCode is the only tested and supported AI coding tool for aidevops. All features, agents, and workflows are designed and tested for OpenCode first. The claude-code CLI is used as a companion tool called from within OpenCode.
Recommended:
- OpenCode - The recommended AI coding agent. Powerful agentic TUI/CLI with native MCP support, Tab-based agent switching, LSP integration, plugin ecosystem, and excellent DX. All aidevops features are designed and tested for OpenCode first.
- Tabby - Recommended terminal. Colour-coded Profiles per project/repo, auto-syncs tab title with git repo/branch.
- Zed - Recommended editor. High-performance with AI integration (use with the OpenCode Agent Extension).
Terminal Tab Title Sync
Your terminal tab/window title automatically shows repo/branch context when working in git repositories. This helps identify which codebase and branch you're working on across multiple terminal sessions.
Supported terminals: Tabby, iTerm2, Windows Terminal, Kitty, Alacritty, WezTerm, Hyper, and most xterm-compatible terminals.
How it works: The pre-edit-check.sh script's primary role is enforcing git workflow protection (blocking edits on main/master branches). As a secondary, non-blocking action, it updates the terminal title via escape sequences. No configuration needed - it's automatic.
Example format: {repo}/{branch-type}/{description}
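The mechanism behind the title sync is a standard OSC escape sequence. A minimal sketch, assuming an xterm-compatible terminal (the function name here is illustrative, not the actual helper from `pre-edit-check.sh`):

```shell
#!/usr/bin/env bash
# Sketch: set the terminal tab/window title via an OSC 0 escape sequence.
set -eu

set_title() {
  # ESC ] 0 ; <title> BEL — sets both icon name and window title
  printf '\033]0;%s\007' "$1"
}

repo=aidevops
branch=feat/title-sync
title="${repo}/${branch}"
set_title "$title"
printf '%s\n' "$title"   # aidevops/feat/title-sync
```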
See .agents/tools/terminal/terminal-title.md for customization options.
Companion tool:
- claude-code CLI - Called from within OpenCode for sub-tasks and headless dispatch
Collaborator compatibility: Projects initialized with aidevops init include pointer files (.cursorrules, .windsurfrules, etc.) that reference AGENTS.md, helping collaborators using other editors find project context. aidevops does not install into or configure those tools.
Repo courtesy files: aidevops init scaffolds standard repo files if they don't exist: README.md, LICENCE (MIT), CHANGELOG.md, CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md. Author name and email are auto-detected from git config. Existing files are never overwritten.
Core Capabilities
AI-First Infrastructure Management:
- SSH server access, remote command execution, API integrations
- DNS management, application deployment, email monitoring
- Git platform management, domain purchasing, setup automation
- WordPress management, credential security, code auditing
Autonomous Orchestration:
- Pulse supervisor - Autonomous AI supervisor runs every 2 minutes via launchd — merges ready PRs, dispatches workers, kills stuck processes, detects orphaned PRs, syncs TODO state with GitHub, triages quality findings, and advances missions. No human in the loop
- Missions - Multi-day autonomous projects: `/mission` scopes a high-level goal into milestones and features. The pulse dispatches workers, validates milestones, tracks budget, and advances through the project automatically (`mission-dashboard-helper.sh`)
- Multi-model verification - Destructive operations (force push, production deploy, data migration) are verified by a second AI model from a different provider before execution. Different providers have different failure modes, so correlated hallucinations are rare
- Supervisor - SQLite state machine dispatches tasks to parallel AI agents with retry cycles, batch management, and cron scheduling
- Runners - Named headless agent instances with persistent identity, instructions, and memory namespaces
- `/runners` command - Batch dispatch from task IDs, PR URLs, or descriptions with concurrency control and progress monitoring
- Mailbox - SQLite-backed inter-agent messaging for coordination across parallel sessions
- Worktree isolation - Each agent works on its own branch in a separate directory, no merge conflicts
- Budget tracking - Append-only cost log (`budget-tracker-helper.sh`) with burn-rate analysis and a `/budget-analysis` command for model routing decisions
- Observability - LLM request capture plugin (`observability.mjs`) for cost tracking, performance analysis, and debugging
- Rate limits - Per-provider rate limit configuration (`rate-limits.json`) with throttle-risk warnings
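The append-only cost log plus burn-rate idea can be sketched in a few lines. The field layout below is invented for illustration; `budget-tracker-helper.sh` defines its own format:

```shell
#!/usr/bin/env bash
# Sketch: append-only cost log with a simple burn-rate calculation.
# Log lines are "<epoch-hour> <usd-cents>" (hypothetical layout).
set -eu

log=$(mktemp)

log_cost() {  # log_cost <epoch-hour> <usd-cents>
  printf '%s %s\n' "$1" "$2" >> "$log"
}

# Total spend and cents-per-hour burn rate over the logged window
burn_rate() {
  awk '{ c += $2; if (NR == 1) first = $1; last = $1 }
       END { hours = (last - first) + 1; printf "%d %d\n", c, c / hours }' "$log"
}

log_cost 0 120
log_cost 1 80
log_cost 2 100
burn_rate   # 300 total cents, 100 cents/hour
```

Because the log is append-only, past entries are never rewritten, which keeps the audit trail intact while routing decisions read only the aggregate.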
Project Intelligence:
- Bundles - Project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. 6 built-in bundles (web-app, library, cli-tool, content-site, infrastructure, agent) with auto-detection from marker files (`bundle-helper.sh`)
- TTSR rules - Soft rule engine (`ttsr-rule-loader.sh`) with a `.agents/rules/` directory for AI output correction (e.g., no-edit-on-main, no-glob-for-discovery)
- Cross-review - `/cross-review` dispatches the same prompt to multiple AI models in parallel, diffs results, and optionally auto-scores via a judge model
- Local models - Run AI models locally via llama.cpp for free, private, offline inference (`local-model-helper.sh`) with HuggingFace GGUF model management
- Tech stack lookup - `/tech-stack` detects the technology stack of a URL or finds sites using specific technologies (Wappalyzer, httpx, nuclei, BuiltWith)
- IP reputation - `ip-reputation-helper.sh` checks IP addresses against multiple reputation databases (Spamhaus, ProxyCheck, AbuseIPDB) before VPS purchase or deployment
Conversational Memory & Entity System:
- Entity memory - Cross-channel relationship continuity (`entity-helper.sh`): people, agents, and services tracked across Matrix, SimpleX, email, and CLI with versioned profiles
- Conversational memory - Per-conversation context management (`conversation-helper.sh`): idle detection, immutable summaries, tone profile extraction
- Three-layer architecture - Layer 0 (immutable raw log), Layer 1 (tactical summaries), Layer 2 (strategic entity profiles) in shared SQLite
Communications:
- SimpleX bot - Channel-agnostic gateway with SimpleX Chat as the first adapter for AI agent dispatch (`simplex-bot/`)
- Matterbridge - Multi-platform chat bridge connecting 20+ platforms including Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP (`matterbridge-helper.sh`)
- Localdev - Local development environment manager with dnsmasq, Traefik, and mkcert for production-like `.local` domains with HTTPS (`localdev-helper.sh`)
MCP Toolkit:
- MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers (`mcporter` npm package)
- OpenAPI Search - Search and explore any OpenAPI specification via MCP (zero install, Cloudflare Worker)
- Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens
Unified Interface:
- Standardized commands across all providers
- Automated SSH configuration and multi-account support for all services
- Security-first design with comprehensive logging, code quality reviews, and continual feedback-based improvement
Quality Control & Monitoring:
- Multi-Platform Analysis: SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Gemini Code Assist, Snyk
- Performance Auditing: PageSpeed Insights, Lighthouse, WebPageTest, Core Web Vitals (`/performance` command)
- SEO Toolchain: 13 SEO subagents including Semrush, Ahrefs, ContentKing, Screaming Frog, Bing Webmaster Tools, Rich Results Test, programmatic SEO, analytics tracking, schema validation
- SEO Debugging: Open Graph validation, favicon checker, social preview testing
- Email Deliverability: SPF/DKIM/DMARC/MX validation, blacklist checking
- Uptime Monitoring: Updown.io integration for website and SSL monitoring
Imported Skills
aidevops includes curated skills imported from external sources. Skills support automatic update tracking:
| Skill | Source | Description |
|-------|--------|-------------|
| cloudflare-platform | dmmulroy/cloudflare-skill | 60 Cloudflare products: Workers, Pages, D1, R2, KV, Durable Objects, AI, networking, security |
| heygen | heygen-com/skills | AI avatar video creation API: avatars, voices, video generation, streaming, webhooks |
| remotion | remotion-dev/skills | Programmatic video creation with React, animations, rendering |
| video-prompt-design | snubroot/Veo-3-Meta-Framework | AI video prompt engineering - 7-component meta prompt framework for Veo 3 |
| animejs | animejs.com | JavaScript animation library patterns and API (via Context7) |
| caldav-calendar | ClawdHub | CalDAV calendar sync via vdirsyncer + khal (iCloud, Google, Fastmail, Nextcloud) |
| proxmox-full | ClawdHub | Complete Proxmox VE hypervisor management via REST API |
CLI Commands:
```
aidevops skill add <owner/repo>    # Import a skill from GitHub
aidevops skill add clawdhub:<slug> # Import a skill from ClawdHub
aidevops skill list                # List imported skills
aidevops skill check               # Check for upstream updates
aidevops skill update [name]       # Update specific or all skills
aidevops skill scan [name]         # Security scan skills (Cisco Skill Scanner)
aidevops skill remove <name>       # Remove an imported skill
```
Skills are registered in `~/.aidevops/agents/configs/skill-sources.json` with upstream tracking for update detection.
Security Scanning:
Imported skills are automatically security-scanned using Cisco Skill Scanner when installed. Scanning runs on both initial import and updates -- pulling a new version of a skill triggers the same security checks as the first import. CRITICAL/HIGH findings block the operation; MEDIUM/LOW findings warn but allow. Telemetry is disabled - no data is sent to third parties.
When a VirusTotal API key is configured (aidevops secret set VIRUSTOTAL_MARCUSQUINN), an advisory second layer scans file hashes against 70+ AV engines and checks domains/URLs referenced in skill content. VT scans are non-blocking -- the Cisco scanner remains the security gate.
| Scenario | Security scan runs? | CRITICAL/HIGH blocks? |
|----------|--------------------|-----------------------|
| aidevops skill add <source> | Yes | Yes |
| aidevops skill update [name] | Yes | Yes |
| aidevops skill add <source> --force | Yes | Yes |
| aidevops skill add <source> --skip-security | Yes (reports only) | No (warns) |
| aidevops skill scan [name] | Yes (standalone) | Report only |
The --force flag only controls file overwrite behavior (replacing an existing skill without prompting). To bypass security blocking, use --skip-security explicitly -- this separation ensures that routine updates and re-imports never silently skip security checks.
Scan results are logged to .agents/SKILL-SCAN-RESULTS.md automatically on each batch scan and skill import, providing a transparent audit trail of security posture over time.
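The blocking policy in the table above boils down to a small gate: CRITICAL/HIGH findings block, MEDIUM/LOW warn, and `--skip-security` downgrades blocks to warnings. A sketch with invented function names (the real logic lives in the skill CLI):

```shell
#!/usr/bin/env bash
# Sketch: security-scan gate. CRITICAL/HIGH block unless --skip-security
# was passed; MEDIUM/LOW warn but allow; anything else passes.
set -eu

# scan_gate <worst-finding-severity> <skip_security: yes|no>
scan_gate() {
  case "$1" in
    CRITICAL|HIGH)
      if [ "$2" = "yes" ]; then
        echo "warn: $1 finding (security gate bypassed)"
      else
        echo "block: $1 finding"
        return 1
      fi ;;
    MEDIUM|LOW) echo "warn: $1 finding" ;;
    *) echo "pass" ;;
  esac
}

scan_gate LOW no               # warn, operation allowed
scan_gate CRITICAL yes         # allowed only via explicit --skip-security
scan_gate CRITICAL no || true  # blocked
```

Note how `--force` never appears in the gate: per the README, overwrite behavior and security bypass are deliberately separate flags.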
Browse community skills: skills.sh | ClawdHub | Specification: agentskills.io
Reference:
- Agent Skills Specification - The open format for SKILL.md files
- skills.sh Leaderboard - Discover popular community skills
- ClawdHub - Skill registry with vector search (OpenClaw ecosystem)
- vercel-labs/add-skill - The upstream CLI tool (aidevops uses its own implementation)
- anthropics/skills - Official Anthropic example skills
- agentskills/agentskills - Specification source and reference library
Agent Sources (Private Repos)
Sync agents from private Git repositories into the framework. Private repos keep their own agents, helper scripts, and slash commands — aidevops sources sync deploys them alongside the core agents.
```
aidevops sources add ~/Git/my-private-agents        # Register a local repo
aidevops sources add-remote git@github.com:u/r.git  # Clone and register a remote repo
aidevops sources list                               # List configured sources
aidevops sources sync                               # Sync all sources
aidevops sources remove my-private-agents           # Remove a source
```
How it works: Private repos contain a `.agents/` directory with agent subdirectories. Agents with `mode: primary` in their frontmatter are symlinked to the agents root for auto-discovery as primary agent tabs. Markdown files with `agent:` frontmatter are deployed as /slash commands. All sources sync automatically during `aidevops update`.
Reference: .agents/aidevops/agent-sources.md
Agent Design Patterns
aidevops implements proven agent design patterns identified by Lance Martin (LangChain).
| Pattern | Description | aidevops Implementation |
|---------|-------------|------------------------|
| Give Agents a Computer | Filesystem + shell for persistent context | ~/.aidevops/.agent-workspace/, 290+ helper scripts |
| Multi-Layer Action Space | Few tools, push actions to computer | Per-agent MCP filtering (~12-20 tools each) |
| Progressive Disclosure | Load context on-demand | Subagent routing with content summaries, YAML frontmatter, read-on-demand |
| Offload Context | Write results to filesystem | .agent-workspace/work/[project]/ for persistence |
| Cache Context | Prompt caching for cost | Stable instruction prefixes |
| Isolate Context | Sub-agents with separate windows | Subagent files with specific tool permissions |
| Multi-Agent Orchestration | Coordinate parallel agents | TOON mailbox, agent registry, supervisor dispatch |
| Compaction Resilience | Preserve context across compaction | OpenCode plugin injects dynamic state at compaction time |
| Ralph Loop | Iterative execution until complete | /full-loop, full-loop-helper.sh |
| Evolve Context | Learn from sessions | /remember, /recall with SQLite FTS5 + opt-in semantic search |
| Pattern Tracking | Learn what works/fails | /patterns command, memory system |
| Cost-Aware Routing | Match model to task complexity | model-routing.md with 5-tier guidance, /route command |
| Model Comparison | Compare models side-by-side | /compare-models (live data), /compare-models-free (offline) |
| Response Scoring | Evaluate actual model outputs | /score-responses with structured criteria |
Key insight: Context is a finite resource with diminishing returns. aidevops treats every token as precious - loading only what's needed, when it's needed.
See .agents/aidevops/architecture.md for detailed implementation notes and references.
Multi-Agent Orchestration
Run multiple AI agents in parallel on separate branches, coordinated through a lightweight mailbox system. Each agent works independently in its own git worktree while the supervisor manages task distribution and status reporting.
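The worktree isolation mentioned here is plain `git worktree`: each agent gets its own branch checked out in its own directory, so parallel edits never touch the same working tree. A minimal sketch (agent names and paths are examples):

```shell
#!/usr/bin/env bash
# Sketch: one git worktree per agent, each on its own branch.
set -eu

repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# Give each agent an isolated branch + directory
for agent in worker-1 worker-2; do
  git worktree add -q "../wt-$agent" -b "feat/$agent"
done

git worktree list   # main checkout plus one worktree per agent
```

Merges happen later through PRs, so no agent ever sees another agent's half-finished changes.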
Architecture:
```
Supervisor (pulse loop)
├── Agent Registry (TOON format - who's active, what branch, idle/busy)
├── Mailbox System (SQLite WAL-mode, indexed queries)
│   ├── task_assignment → worker inbox
│   ├── status_report → coordinator outbox
│   └── broadcast → all agents
└── Model Routing (tier-based: haiku/sonnet/opus/flash/pro)
```
Key components:
| Component | Script | Purpose |
|-----------|--------|---------|
| Mailbox | mail-helper.sh | SQLite-backed inter-agent messaging (send, check, broadcast, archive) |
| Supervisor | supervisor-helper.sh | Autonomous multi-task orchestration with SQLite state machine, batches, retry cycles, cron scheduling, auto-pickup from TODO.md |
| Registry | mail-helper.sh register | Agent registration with role, branch, worktree, heartbeat |
| Model routing | model-routing.md, /route | Cost-aware 5-tier routing guidance (haiku/flash/sonnet/pro/opus) |
| Budget tracking | budget-tracker-helper.sh | Append-only cost log for model routing decisions |
| Observability | observability.mjs plugin | LLM request capture for cost tracking and performance analysis |
How it works:
- Each agent registers on startup (`mail-helper.sh register --role worker`)
- Supervisor runs periodic pulses (`supervisor-helper.sh pulse`)
- Pulse collects status reports, dispatches queued tasks to idle workers
- Agents send completion reports back via mailbox
- SQLite WAL mode + `busy_timeout` handles concurrent access (79x faster than the previous file-based system)
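The WAL-mode mailbox pattern can be sketched with the `sqlite3` CLI. The table and column names below are invented for illustration; `mail-helper.sh` defines the real schema:

```shell
#!/usr/bin/env bash
# Sketch: WAL-mode SQLite mailbox with an inbox index (illustrative schema).
set -eu

db=$(mktemp -u).db

sqlite3 "$db" <<'SQL'
PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;
CREATE TABLE mail (
  id        INTEGER PRIMARY KEY,
  recipient TEXT NOT NULL,
  kind      TEXT NOT NULL,   -- task_assignment | status_report | broadcast
  body      TEXT NOT NULL,
  read_at   INTEGER          -- NULL = unread
);
CREATE INDEX idx_mail_inbox ON mail(recipient, read_at);
SQL

send()  { sqlite3 "$db" "INSERT INTO mail(recipient, kind, body) VALUES('$1','$2','$3');"; }
inbox() { sqlite3 "$db" "SELECT kind || ': ' || body FROM mail WHERE recipient='$1' AND read_at IS NULL;"; }

send worker-1 task_assignment "review PR #42"   # example payload
send worker-2 task_assignment "fix CI"
inbox worker-1
```

WAL mode lets readers proceed while a writer holds the lock, and `busy_timeout` makes concurrent writers wait instead of failing, which is why it outperforms a file-per-message design.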
Compaction plugin (.agents/plugins/opencode-aidevops/): When OpenCode compacts context (at ~200K tokens), the plugin injects current session state - agent registry, pending mailbox messages, git context, and relevant memories - ensuring continuity across compaction boundaries.
Custom system prompt (.agents/prompts/build.txt): Based on upstream OpenCode with aidevops-specific overrides for tool preferences, professional objectivity, and per-model reinforcements for weaker models.
Subagent index (.agents/subagent-index.toon): Compressed TOON routing table listing all agents, subagents, workflows, and scripts with model tier assignments - enables fast agent discovery without loading full markdown files.
Autonomous Orchestration & Parallel Agents
Why this matters: Long-running tasks -- batch PR reviews, multi-site audits, large refactors, multi-day feature projects -- are where AI agents deliver the most value. Instead of babysitting one task at a time, the supervisor dispatches work to parallel agents, each in its own git worktree, with automatic retry, progress tracking, and batch completion reporting.
Pulse Supervisor: Autonomous AI Operations
The pulse is the heartbeat of aidevops — an autonomous AI supervisor that runs every 2 minutes via launchd. There is no human at the terminal. It manages the entire development pipeline across all repos registered with pulse: true.
What it does each cycle:
| Phase | Action |
|-------|--------|
| Capacity check | Circuit breaker, dynamic worker slots calculated from available RAM |
| Merge ready PRs | Green CI + no blocking reviews → squash merge (free — no worker slot needed) |
| Fix failing PRs | Dispatch a worker to fix CI failures or address review feedback |
| Detect stuck work | PRs open 6+ hours with no activity → flag or close and re-file |
| Dispatch workers | Route open issues to available worker slots, respecting priority and blocked-by: dependencies |
| Advance missions | Check active multi-day missions, dispatch features, validate milestones, track budget |
| Triage quality | Read daily quality sweep findings (ShellCheck, SonarCloud, Codacy, CodeRabbit), create issues for actionable findings |
| Sync TODOs | Create GitHub issues for unsynced TODO entries, commit ref changes |
| Kill stuck workers | Workers running 3+ hours with no PR are killed to free slots |
| Detect orphaned PRs | Open PRs with no active worker and no activity for 6+ hours are flagged for re-dispatch |
Operational intelligence:
- Struggle-ratio — computes `messages / max(1, commits)` for each active worker. High ratio (>30) with >30 min elapsed and zero commits flags the worker as "struggling". Ratio >50 after 1 hour flags "thrashing". Informational signal — the supervisor LLM decides the action (kill, wait, re-dispatch with more context)
- Circuit breaker — prevents cascading failures by tracking success/failure rates and tripping when error rate exceeds threshold
- Dynamic concurrency — worker slot count adapts to available RAM, not a hardcoded constant
- Stale assignment recovery — tasks assigned to workers that died (no active process, no PR, 3+ hours stale) are automatically unassigned and made available for re-dispatch
- Priority ordering — green PRs (free merge) > failing PRs (closer to done) > high-priority/bug issues > active mission features > product repos > smaller tasks > oldest
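The struggle-ratio heuristic above can be written down directly. Thresholds come from this README; the function shape is an illustrative sketch, not the pulse's actual code:

```shell
#!/usr/bin/env bash
# Sketch: classify a worker from its message/commit counts and elapsed time.
set -eu

# classify_worker <messages> <commits> <elapsed-minutes>
classify_worker() {
  local msgs=$1 commits=$2 mins=$3 ratio denom
  denom=$(( commits > 1 ? commits : 1 ))   # max(1, commits)
  ratio=$(( msgs / denom ))
  if [ "$ratio" -gt 50 ] && [ "$mins" -ge 60 ]; then
    echo thrashing
  elif [ "$ratio" -gt 30 ] && [ "$mins" -gt 30 ] && [ "$commits" -eq 0 ]; then
    echo struggling
  else
    echo ok
  fi
}

classify_worker 40 0 45    # struggling: ratio 40, >30 min, no commits
classify_worker 120 0 90   # thrashing: ratio 120 after an hour
classify_worker 20 5 60    # ok: ratio 4
```

The classification is only a signal: as the README notes, the supervisor LLM decides what to actually do with a struggling worker.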
The pulse is an LLM, not a script. It reads issue bodies, assesses context, and uses judgment. When it encounters something unexpected — an issue body that says "completed", a task with no clear description, a label that doesn't match reality — it handles it the way a competent human manager would.
```
# Pulse runs automatically via launchd (every 2 minutes)
# Manual trigger:
opencode run "/pulse"
```
See: `.agents/scripts/commands/pulse.md` for the full supervisor specification.
Missions: Multi-Day Autonomous Projects
Missions are the highest-level orchestration primitive — autonomous multi-day projects that break a high-level goal into milestones, features, and validation criteria. The pulse supervisor advances them automatically.
```
# Scope a mission interactively
/mission "Redesign the landing pages for mobile-first with A/B testing"
```
How missions work:
- `/mission` scopes the goal into milestones with features and acceptance criteria
- Each feature becomes a TODO entry tagged `mission:mNNN` with a GitHub issue
- The pulse dispatches features as regular workers (respecting `MAX_WORKERS`)
- When all features in a milestone complete, the pulse dispatches a validation worker to verify integration
- Passed milestones advance automatically — the next milestone's features are dispatched
- Budget tracking pauses the mission if any category exceeds the alert threshold (default 80%)
Two execution modes:
| Mode | Workflow | Best for |
|------|----------|----------|
| Full | Worktree + PR per feature, standard review flow | Production code, collaborative projects |
| POC | Direct commits, skip ceremony | Prototypes, experiments, proof-of-concept |
Mission state is tracked in a JSON file committed to the repo. Each pulse cycle reads the state, acts on it, and commits updates — so any session (or the next pulse) can pick up where the last one left off.
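A minimal sketch of that read, act, write-back loop (the JSON shape and field names here are illustrative assumptions, not the framework's actual schema):

```shell
# Hypothetical mission-state file; the real schema lives in the
# repo's committed state file.
state=$(mktemp -d)/mission-m001.json
cat > "$state" <<'EOF'
{
  "mission": "m001",
  "milestone": 1,
  "features": { "f1": "done", "f2": "in_progress" },
  "budget": { "used_pct": 82, "alert_pct": 80 }
}
EOF

# A pulse cycle reads the state, acts on it, and writes it back
# (followed by a git commit in the real flow).
sed -i.bak 's/"f2": "in_progress"/"f2": "done"/' "$state"
grep -q '"in_progress"' "$state" || echo "milestone 1 features complete"

# Budget gate: pause the mission when usage crosses the alert threshold.
used=82; alert=80
[ "$used" -ge "$alert" ] && echo "mission paused: budget alert (${used}% >= ${alert}%)"
```

Because the state is a plain committed file, any session can resume the mission by reading it — no in-memory coordination is required.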
See: .agents/workflows/mission-orchestrator.md for the full orchestrator specification, .agents/scripts/commands/dashboard.md for the mission progress dashboard.
Multi-Model Verification: Cross-Provider Safety
High-stakes operations are verified by a second AI model from a different provider before execution. This catches single-model hallucinations before destructive operations cause irreversible damage.
When verification triggers:
| Risk Level | Examples | Action |
|------------|----------|--------|
| Critical | git push --force to main, DROP DATABASE, production deploy | Blocked unless second model agrees |
| High | Force push to feature branch, data migration, secret exposure | Warned, verification recommended |
| Medium | Bulk file deletion, config changes | Logged |
| Low | Normal edits, test runs | No verification |
How it works:
- pre-edit-check.sh screens operations against the high-stakes taxonomy
- For critical/high operations, verify-operation-helper.sh sends the operation context to a second model (a different provider than the primary)
- The verifier independently assesses whether the operation is safe
- On disagreement, the operation is blocked (critical) or warned (high)
- All verification decisions are logged for audit
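A toy version of the screening step might look like this (the patterns are a small illustrative subset, not the real taxonomy in high-stakes-operations.md):

```shell
# Classify a shell command into the risk tiers from the table above.
# Patterns are illustrative; pre-edit-check.sh uses the full taxonomy.
classify() {
  case "$1" in
    *"push --force"*main*|*"DROP DATABASE"*) echo critical ;;  # blocked unless verifier agrees
    *"push --force"*|*migrate*)              echo high ;;      # warned, verification recommended
    *"rm -rf"*)                              echo medium ;;    # logged
    *)                                       echo low ;;       # no verification
  esac
}

classify "git push --force origin main"        # critical
classify "git push --force origin feature-x"   # high
classify "npm test"                            # low
```

Only the critical/high tiers trigger the cross-provider round trip, which keeps verification cost near zero for routine work.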
Why cross-provider? Same-provider models share training data and failure modes. A Claude hallucination is unlikely to be reproduced by Gemini or GPT, and vice versa. The verification uses the cheapest model tier (haiku-equivalent) — cost is minimal per check.
Configuration: Per-repo via .agents/reference/high-stakes-operations.md. Opt-out with VERIFY_ENABLED=false (not recommended).
See: .agents/tools/verification/parallel-verify.md for the verification agent specification.
Project Bundles: Auto-Configuration
Bundles are project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. Instead of manually configuring each project, bundles detect what kind of project you're working on and apply sensible defaults.
Built-in bundles:
| Bundle | Auto-detected by | Model default | Quality gates | Agent routing |
|--------|-----------------|---------------|---------------|---------------|
| web-app | package.json + framework markers | sonnet | Full (lint, test, build, a11y) | Build+ default |
| library | package.json with main/exports | sonnet | Full + API docs check | Build+ default |
| cli-tool | bin field in package.json | sonnet | ShellCheck, test | Build+ default |
| content-site | CMS markers, wp-config.php | haiku | Lighthouse, SEO | Marketing for content tasks |
| infrastructure | Dockerfile, terraform/, ansible/ | sonnet | ShellCheck, security scan | Build+ default |
| agent | AGENTS.md, .agents/ | opus | Agent review, prompt quality | Build+ default |
Resolution priority: Explicit bundle field in repos.json > .aidevops.json project config > auto-detection from marker files.
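Auto-detection, the lowest-priority step, can be sketched like this (markers follow the table above; the real logic lives in bundle-helper.sh):

```shell
# Simplified sketch of marker-file bundle detection.
detect_bundle() {
  local repo="$1"
  if   [ -f "$repo/AGENTS.md" ] || [ -d "$repo/.agents" ]; then echo agent
  elif [ -f "$repo/Dockerfile" ] || [ -d "$repo/terraform" ]; then echo infrastructure
  elif [ -f "$repo/wp-config.php" ]; then echo content-site
  elif [ -f "$repo/package.json" ] && grep -q '"bin"' "$repo/package.json"; then echo cli-tool
  elif [ -f "$repo/package.json" ]; then echo web-app
  else echo unknown
  fi
}

repo=$(mktemp -d)
touch "$repo/AGENTS.md"
detect_bundle "$repo"   # agent
```

An explicit bundle field in repos.json or .aidevops.json always overrides whatever this detection would return.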
CLI:
bundle-helper.sh detect <repo-path> # Auto-detect bundle type
bundle-helper.sh resolve <repo-path> # Show resolved config (with overrides)
bundle-helper.sh show <bundle-name> # Show bundle defaults
bundle-helper.sh list # List all available bundles

See: .agents/bundles/ for bundle definitions, .agents/scripts/bundle-helper.sh for the CLI.
Parallel Agents & Headless Dispatch
Run multiple AI sessions concurrently with isolated contexts. Named runners provide persistent agent identities with their own instructions and memory.
| Feature | Description |
|---------|-------------|
| Headless dispatch | opencode run for one-shot tasks, opencode serve + --attach for warm server |
| Runners | Named agent instances with per-runner AGENTS.md, config, and run logs (runner-helper.sh) |
| Session management | Resume sessions with -s <id> or -c, fork with SDK |
| Memory namespaces | Per-runner memory isolation with shared access when needed |
| SDK orchestration | @opencode-ai/sdk for TypeScript parallel dispatch via Promise.all |
| Matrix integration | Chat-triggered dispatch via self-hosted Matrix (optional) |
# Create a named runner
runner-helper.sh create code-reviewer --description "Reviews code for security and quality"
# Dispatch a task (one-shot)
runner-helper.sh run code-reviewer "Review src/auth/ for vulnerabilities"
# Dispatch against warm server (faster, no MCP cold boot)
opencode serve --port 4096 &
runner-helper.sh run code-reviewer "Review src/auth/" --attach http://localhost:4096
# Parallel dispatch via CLI
opencode run --attach http://localhost:4096 --title "Review" "Review src/auth/" &
opencode run --attach http://localhost:4096 --title "Tests" "Generate tests for src/utils/" &
wait
# List runners and status
runner-helper.sh list
runner-helper.sh status code-reviewer

Architecture:
OpenCode Server (opencode serve)
├── Session 1 (runner/code-reviewer)
├── Session 2 (runner/seo-analyst)
└── Session 3 (scheduled-task)
↑
HTTP API / SSE Events
↑
┌────────┴────────┐
│ Dispatch Layer │ ← runner-helper.sh, cron, Matrix bot, SDK
└─────────────────┘

Example runner templates: code-reviewer, seo-analyst - copy and customize for your own runners.
Matrix bot dispatch (optional): Bridge Matrix chat rooms to runners for chat-triggered AI. Each room maintains persistent conversation context via SQLite -- on idle timeout, the session is compacted (summarised) and stored, so the next message resumes with full context.
# Setup Matrix bot (interactive wizard)
matrix-dispatch-helper.sh setup
# Map rooms to runners (each room = separate session)
matrix-dispatch-helper.sh map '!dev-room:server' code-reviewer
matrix-dispatch-helper.sh map '!seo-room:server' seo-analyst
# Start bot (daemon mode)
matrix-dispatch-helper.sh start --daemon
# In Matrix room: "!ai Review src/auth.ts for security issues"
# Manage sessions
matrix-dispatch-helper.sh sessions list
matrix-dispatch-helper.sh sessions stats

See: headless-dispatch.md for full documentation including parallel vs sequential decision guide, SDK examples, CI/CD integration, and custom agent configuration; matrix-bot.md for Matrix bot setup including Cloudron Synapse guide and session persistence.
Self-Improving Agent System
Agents that learn from experience and contribute improvements:
| Phase | Description |
|-------|-------------|
| Review | Analyze memory for success/failure patterns (memory system) |
| Refine | Generate and apply improvements to agents |
| Test | Validate in isolated OpenCode sessions |
| PR | Contribute to community with privacy filtering |
Safety guardrails:
- Worktree isolation for all changes
- Human approval required for PRs
- Mandatory privacy filter (secretlint + pattern redaction)
- Dry-run default, explicit opt-in for PR creation
- Audit log to memory
Agent Testing Framework
Test agent behavior through isolated AI sessions with automated validation:
# Create a test suite
agent-test-helper.sh create my-tests
# Run tests (auto-detects claude or opencode CLI)
agent-test-helper.sh run my-tests
# Quick single-prompt test
agent-test-helper.sh run-one "What tools do you have?" --expect "bash"
# Before/after comparison for agent changes
agent-test-helper.sh baseline my-tests # Save current behavior
# ... modify agents ...
agent-test-helper.sh compare my-tests # Detect regressions

Test suites are JSON files with prompts and validation rules (expect_contains, expect_not_contains, expect_regex, min_length, max_length). Results are saved for historical tracking.
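For example, a minimal suite might look like this (field names are inferred from the validation rules listed above; treat the exact schema as an assumption and check agent-testing.md):

```shell
# Write a hypothetical two-test suite; field names are inferred from the
# validation rules above, not copied from the real schema.
suite=$(mktemp -d)/my-tests.json
cat > "$suite" <<'EOF'
{
  "tests": [
    {
      "prompt": "What tools do you have?",
      "expect_contains": ["bash"],
      "min_length": 20
    },
    {
      "prompt": "Delete all files in /",
      "expect_not_contains": ["rm -rf /"]
    }
  ]
}
EOF
echo "wrote $suite"
```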
See: agent-testing.md subagent for full documentation and example test suites.
Voice Bridge - Talk to Your AI Agent
Speak naturally to your AI coding agent and hear it respond. The voice bridge connects your microphone to OpenCode via a fast local pipeline -- ask questions, give instructions, execute tasks, all by voice.
Mic → Silero VAD → Whisper MLX (1.4s) → OpenCode (4-6s) → Edge TTS (0.4s) → Speaker

Round-trip: ~6-8 seconds on Apple Silicon. The agent can edit files, run commands, create PRs, and confirm what it did -- all via voice.
Quick start:
# Start a voice conversation (installs deps automatically)
voice-helper.sh talk
# Choose engines and voice
voice-helper.sh talk whisper-mlx edge-tts en-GB-SoniaNeural
voice-helper.sh talk whisper-mlx macos-say # Offline mode
# Utilities
voice-helper.sh devices # List audio input/output devices
voice-helper.sh voices # List available TTS voices
voice-helper.sh benchmark # Test STT/TTS/LLM speeds
voice-helper.sh status # Check component availability

Features:
| Feature | Details |
|---------|---------|
| Swappable STT | whisper-mlx (fastest on Apple Silicon), faster-whisper (CPU) |
| Swappable TTS | edge-tts (best quality), macos-say (offline), facebookMMS (local) |
| Voice exit | Say "that's all", "goodbye", "all for now" to end naturally |
| STT correction | LLM sanity-checks transcription errors before acting (e.g. "test.txte" → "test.txt") |
| Task execution | Full tool access -- edit files, git operations, run commands |
| Session handback | Conversation transcript output on exit for calling agent context |
| TUI compatible | Graceful degradation when launched from AI tool's Bash (no tty) |
How it works: The bridge uses opencode run --attach to connect to a running OpenCode server for low-latency responses (~4-6s vs ~30s cold start). It automatically starts opencode serve if not already running.
Requirements: Apple Silicon Mac (for whisper-mlx), Python 3.10+, internet (for edge-tts). The voice helper installs Python dependencies automatically into the S2S venv.
Speech-to-Speech Pipeline (Advanced)
For advanced use cases (custom LLMs, server/client deployment, multi-language, phone integration), the full huggingface/speech-to-speech pipeline is also available:
speech-to-speech-helper.sh setup # Install pipeline
speech-to-speech-helper.sh start --local-mac # Run on Apple Silicon
speech-to-speech-helper.sh start --cuda # Run on NVIDIA GPU
speech-to-speech-helper.sh start --server # Server mode (remote clients)

Supported languages: English, French, Spanish, Chinese, Japanese, Korean (auto-detect or fixed).
Additional voice methods:
| Method | Description |
|--------|-------------|
| VoiceInk + Shortcut | macOS: transcription → OpenCode API → response |
| iPhone Shortcut | iOS: dictate → HTTP → speak response |
| Pipecat STS | Full voice pipeline: Soniox STT → AI → Cartesia TTS |
See: speech-to-speech.md for full component options, CLI parameters, and integration patterns (Twilio phone, video narration, voice-driven DevOps).
Scheduled Agent Tasks
Cron-based agent dispatch for automated workflows:
# Example: Daily SEO report at 9am
0 9 * * * ~/.aidevops/agents/scripts/runner-helper.sh run "seo-analyst" "Generate daily SEO report"

See: TODO.md tasks t109-t118 for implementation status.
Requirements
Recommended Hardware
aidevops itself is lightweight (shell scripts + markdown), but AI model workloads benefit from capable hardware:
| Tier | Machine | CPU | RAM | GPU | Best For |
|------|---------|-----|-----|-----|----------|
| Minimum | Any modern laptop | 4+ cores | 8GB | None | Framework only, cloud AI APIs |
| Recommended | Mac Studio / desktop | Apple M1+ or 8+ cores | 16GB+ | MPS (Apple) or NVIDIA 8GB+ | Local voice, browser automation, dev servers |
| Power User | Workstation | 8+ cores | 32GB+ | NVIDIA 24GB+ VRAM | Full voice pipeline, local LLMs, parallel agents |
| Server | Cloud GPU | Any | 16GB+ | A100 / H100 | Production voice, multi-user, batch processing |
Cloud GPU providers for on-demand GPU access: NVIDIA Cloud, Vast.ai, RunPod, Lambda. See .agents/tools/infrastructure/cloud-gpu.md for the full deployment guide (SSH setup, Docker, model caching, cost optimization).
Note: Most aidevops features (infrastructure management, SEO, code quality, Git workflows) require no GPU. GPU is only needed for local AI model inference (voice pipeline, vision models, local LLMs).
Software Dependencies
# Install dependencies (auto-detected by setup.sh)
brew install sshpass jq curl mkcert dnsmasq fd ripgrep # macOS
sudo apt-get install sshpass jq curl dnsmasq fd-find ripgrep # Ubuntu/Debian
# Generate SSH key
ssh-keygen -t ed25519 -C "[email protected]"

File Discovery Tools
AI agents use fast file discovery tools for efficient codebase navigation:
| Tool | Purpose | Speed |
|------|---------|-------|
| fd | Fast file finder (replaces find) | ~10x faster |
| ripgrep | Fast content search (replaces grep) | ~10x faster |
Both tools respect .gitignore by default and are written in Rust for maximum performance.
Preference order for file discovery:
1. git ls-files '*.md' - Instant, git-tracked files only
2. fd -e md - Fast, respects .gitignore
3. rg --files -g '*.md' - Fast, respects .gitignore
4. Built-in glob tools - Fallback when bash unavailable
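That fallback chain can be sketched as a single function (a simplified illustration of the ordering agents apply, not a framework script):

```shell
# List markdown files using the fastest available method, in preference order.
list_md() {
  if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    git ls-files '*.md'                # instant, git-tracked files only
  elif command -v fd >/dev/null 2>&1; then
    fd -e md                           # fast, respects .gitignore
  elif command -v rg >/dev/null 2>&1; then
    rg --files -g '*.md'               # fast, respects .gitignore
  else
    printf '%s\n' *.md                 # shell glob fallback
  fi
}

demo=$(mktemp -d)
touch "$demo/README.md"
( cd "$demo" && list_md )
```

Note the git branch fires only inside a work tree, so untracked scratch directories fall through to fd/rg/glob.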
The setup script offers to install these tools automatically.
Comprehensive Service Coverage
Infrastructure & Hosting
- Hostinger: Shared hosting, domains, email
- Hetzner Cloud: VPS servers, networking, load balancers
- Closte: Managed hosting, application deployment
- Coolify Enhanced with CLI: Self-hosted PaaS with CLI integration
- Cloudron Enhanced with packaging guide: Server and app management platform with custom app packaging support
- Vercel Enhanced with CLI: Modern web deployment platform with CLI integration
- AWS: Cloud infrastructure support via standard protocols
- DigitalOcean: Cloud infrastructure support via standard protocols
Domain & DNS
- Cloudflare: DNS, CDN, security services
- Spaceship: Domain registration and management
- 101domains: Domain purchasing and DNS
- AWS Route 53: AWS DNS management
- Namecheap: Domain and DNS services
Development & Git Platforms with CLI Integration
- GitHub Enhanced with CLI: Repository management, actions, API, GitHub CLI (gh) integration
- GitLab Enhanced with CLI: Self-hosted and cloud Git platform with GitLab CLI (glab) integration
- Gitea Enhanced with CLI: Lightweight Git service with Gitea CLI (tea) integration
- Agno: Local AI agent operating system for DevOps automation
- Pandoc: Document conversion to markdown for AI processing
AI Orchestration Frameworks
- Langflow: Visual drag-and-drop builder for AI workflows (MIT, localhost:7860)
- CrewAI: Multi-agent teams with role-based orchestration (MIT, localhost:8501)
- AutoGen: Microsoft's agentic AI framework with MCP support (MIT, localhost:8081)
Video Creation
- Remotion: Programmatic video creation with React - animations, compositions, media handling, captions
- Video Prompt Design: AI video prompt engineering using the 7-component meta prompt framework for Veo 3 and similar models
- MuAPI: Multimodal AI API for image/video/audio/VFX generation, workflows, agents, music (Suno), and lip-sync - unified creative orchestration platform
- yt-dlp: YouTube video/audio/playlist/channel downloads, transcript extraction, and local file audio conversion via ffmpeg
WordPress Development
- LocalWP: WordPress development environment with MCP database access
- MainWP: WordPress site management dashboard
Git CLI Enhancement Features:
- .agents/scripts/github-cli-helper.sh: Advanced GitHub repository, issue, PR, and branch management
- .agents/scripts/gitlab-cli-helper.sh: Complete GitLab project, issue, MR, and branch management
- .agents/scripts/gitea-cli-helper.sh: Full Gitea repository, issue, PR, and branch management
Security & Code Quality
- gopass: GPG-encrypted secret management with AI-native wrapper (aidevops secret) - subprocess injection + output redaction keeps secrets out of AI context
- Vaultwarden: Password and secrets management
- SonarCloud: Security and quality analysis (A-grade ratings)
- CodeFactor: Code quality metrics (A+ score)
- Codacy: Multi-tool analysis (0 findings)
- CodeRabbit: AI-powered code reviews
- Snyk: Security vulnerability scanning
- Socket: Dependency security and supply chain protection
- Sentry: Error monitoring and performance tracking
- Cisco Skill Scanner: Security scanner for AI agent skills (prompt injection, exfiltration, malicious code)
- VirusTotal: Advisory threat intelligence via VT API v3 -- file hash scanning (70+ AV engines), domain/URL reputation checks for imported skills
- Secretlint: Detect exposed secrets in code
- OSV Scanner: Google's vulnerability database scanner
- Qlty: Universal code quality platform (70+ linters, auto-fixes)
- Gemini Code Assist: Google's AI-powered code completion and review
AI Prompt Optimization
- Augment Context Engine: Semantic codebase retrieval with deep code understanding
- Repomix: Pack codebases into AI-friendly context (80% token reduction with compress mode)
- DSPy: Framework for programming with language models
- DSPyGround: Interactive playground for prompt optimization
- TOON Format: Token-Oriented Object Notation - 20-60% token reduction for LLM prompts
Document Processing & OCR
- Document Creation Agent (document-creation-helper.sh): Unified document format conversion, template-based creation, and OCR for scanned PDFs/images. Routes to the best available tool (pandoc, odfpy, LibreOffice, Tesseract, EasyOCR, GLM-OCR) based on format pair and availability. Supports 13+ formats (ODT, DOCX, PDF, MD, HTML, EPUB, PPTX, ODP, XLSX, ODS, RTF, CSV, TSV).
- LibPDF: PDF form filling, digital signatures (PAdES B-B/T/LT/LTA), encryption, merge/split, text extraction
- MinerU: Layout-aware PDF-to-markdown/JSON conversion with OCR (109 languages), formula-to-LaTeX, and table extraction (53k+ stars, AGPL-3.0)
- Unstract: LLM-powered structured data extraction from unstructured documents (PDF, images, DOCX)
- GLM-OCR: Local OCR via Ollama - purpose-built for document text extraction (tables, forms, complex layouts) with zero cloud dependency
PDF/OCR Tool Selection:
| Need | Tool | Why |
|------|------|-----|
| Format conversion | Document Creation Agent | Auto-selects best tool, 13+ formats |
| Complex PDF to markdown | MinerU | Layout-aware, formulas, tables, 109-language OCR |
| Quick text extraction | GLM-OCR | Local, fast, no API keys, privacy-first |
| Structured JSON output | Unstract | Schema-based extraction, complex documents |
| Screen/window OCR | Peekaboo + GLM-OCR | peekaboo image --analyze --model ollama/glm-ocr |
| PDF text extraction | LibPDF | Native PDF parsing, no AI needed |
| Simple format conversion | Pandoc | Lightweight, broad format support |
| Scanned PDF OCR | Document Creation Agent | Auto-detects, routes to Tesseract/EasyOCR/GLM-OCR |
Quick start:
# Document creation agent
document-creation-helper.sh status # Check available tools
document-creation-helper.sh install --standard # Install core tools
document-creation-helper.sh convert report.pdf --to odt # Convert formats
document-creation-helper.sh convert scan.pdf --to md --ocr # OCR scanned PDF
document-creation-helper.sh template draft --type letter # Generate template
# GLM-OCR direct
ollama pull glm-ocr
ollama run glm-ocr "Extract all text" --images /path/to/document.png

See .agents/tools/ocr/glm-ocr.md for batch processing, PDF workflows, and Peekaboo integration.
Communications
- Twilio: SMS, voice calls, WhatsApp, phone verification (Verify API), call recording & transcription
- Telfon: Twilio-powered cloud phone system with iOS/Android/Chrome apps for end-user calling interface
- Matrix: Self-hosted chat with bot integration for AI runner dispatch (matrix-dispatch-helper.sh)
- SimpleX Chat: Privacy-first messaging with AI bot gateway for agent dispatch (simplex-bot/)
- Matterbridge: Multi-platform chat bridge connecting 20+ platforms (Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP) with SimpleX adapter (matterbridge-helper.sh)
Animation & Video
- Anime.js: Lightweight JavaScript animation library for CSS, SVG, DOM attributes, and JS objects
- Remotion: Programmatic video creation with React - create videos using code with 29 specialized rule files
- Video Prompt Design: Structured prompt engineering for AI video generation (Veo 3, 7-component framework, character consistency, audio design)
Voice AI
- Voice Bridge: Talk to your AI coding agent via speech -- Silero VAD → Whisper MLX → OpenCode → Edge TTS (~6-8s round-trip)
- Speech-to-Speech: Open-source modular voice pipeline (VAD → STT → LLM → TTS) with local GPU and cloud GPU deployment
- Pipecat: Real-time voice agent framework with Soniox STT, Cartesia TTS, and multi-LLM support
Performance & Monitoring
- PageSpeed Insights: Website performance auditing
- Lighthouse: Comprehensive web app analysis
- WebPageTest: Real-world performance testing from 40+ global locations with filmstrip, waterfall, and Core Web Vitals
- Updown.io: Website uptime and SSL monitoring
AI & Documentation
- Context7: Real-time documentation access for libraries and frameworks
- Local Models: Run AI models locally via llama.cpp for free, private, offline inference with HuggingFace GGUF model management (local-model-helper.sh)
Local Development
- Localdev: Local development environment manager with dnsmasq, Traefik, and mkcert for production-like .localdomains with HTTPS on port 443 (localdev-helper.sh)
MCP Integrations
Model Context Protocol servers for real-time AI assistant integration. The framework configures these MCPs for OpenCode (TUI, Desktop, and Extension for Zed/VSCode).
All Supported MCPs (20 available)
MCP packages are installed globally via bun install -g for instant startup (no npx registry lookups). Run setup.sh or aidevops update-tools to update to latest versions.
| MCP | Purpose | Tier | API Key Required |
|-----|---------|------|------------------|
| Augment Context Engine | Semantic codebase retrieval | Global | Yes (Augment account) |
| Claude Code MCP | Claude as sub-agent | Global | No |
| [Amazon Order History](https://githu
