aidevops v3.14.39
AI DevOps Framework - AI-assisted development workflows, code quality, and deployment automation
AI DevOps Framework
aidevops.sh — An OpenCode plugin and AI operations platform for launching and managing development, business, marketing, and creative projects. 13 specialist AI agents handle the automatable work across every domain so your time is preserved for real-world discovery and decisions that AI cannot yet reach.
Recommended setup: OpenCode + OpenAI models. GPT-5.5 is the preferred high-capability model for complex agent work; GPT-5.4 mini is the preferred fast, lower-cost model for triage and routine implementation. Claude models (Anthropic) remain fully supported, and other model providers are evaluated from time to time as their quality, latency, and cost profiles change.
"Scope a mission to redesign the landing pages — break it into milestones, dispatch workers in parallel, validate each milestone, and track budget across the whole project"
One conversation, autonomous project delivery — with enterprise-level security & quality control.
Founded by Marcus Quinn on 9th November 2025 to help anyone level up their AI & Open-Source game.
The Philosophy
Maximum value for your time and money. aidevops is built on these principles:
- Autonomous orchestration - An AI supervisor runs every 2 minutes, dispatching parallel workers, merging PRs, detecting stuck processes, and advancing multi-day missions — no human babysitting required
- Multi-domain agents - 13 specialist agents (code, automation, SEO, marketing, content, legal, sales, research, video, business, accounts, social media, health) with 900+ subagents loaded on demand
- Multi-model safety - High-stakes operations (force push, production deploy, data migration) are verified by a second cross-provider model before execution — different providers have different failure modes, so correlated hallucinations are rare
- Resource efficiency - Cost-aware model routing across OpenAI, Anthropic, Gemini, Cursor, and local models; project-type bundles auto-configure quality gates and model tiers, with budget tracking and burn-rate analysis
- Self-healing - When something breaks, diagnose the root cause, create tasks, and fix it. Every error is a live test case for a permanent solution
- Self-improving - When patterns of failure or inefficiency emerge, improve the framework itself. Session mining extracts learnings from past sessions automatically
- Gap awareness - Every session is an opportunity to identify what's missing — gaps in automation, documentation, coverage, or processes — and create tasks to fill them
- Git-first workflow - Protected branches, PR reviews, quality gates before merge. Sane vibe-coding through structure
- Parallel agents - Multiple AI sessions running full Ralph loops on separate branches via git worktrees
- Progressive discovery - /slash-commands and @subagent mentions load knowledge into context only when needed
- Open-source ready - Contribute to any project the same way you work on your own. Clone a repo, develop solutions to issues locally, and submit pull requests — the same full-loop workflow works everywhere
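The worktree isolation pattern can be sketched with plain git. This is a minimal, self-contained illustration using a scratch repo in a temp directory, not aidevops's actual dispatch logic; the branch names are made up:

```shell
# Scratch repo in a temp directory (illustration only)
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"

# One worktree per agent: each gets its own branch and checkout,
# so parallel sessions never collide on the same files
git worktree add -q -b agent/feature-a "$repo-wt-a"
git worktree add -q -b agent/feature-b "$repo-wt-b"

git worktree list   # main checkout plus the two agent worktrees
```

Each agent then commits on its own branch, and merging happens later through the normal PR review gates.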
The result: an AI operations platform that manages projects across every business domain — absorbing everything automatable so you can focus on what matters.
Built on proven patterns: aidevops implements industry-standard agent design patterns - including multi-layer action spaces, context isolation, and iterative execution loops.
Why This Framework?
Beyond single-task AI. Most AI coding tools handle one conversation, one repo, one task at a time. aidevops manages your entire operation — dispatching parallel AI agents across multiple repos, routing tasks to domain-specialist agents, and running autonomously for days on multi-milestone projects.
What makes it different:
- Autonomous supervisor - AI pulse runs every 2 minutes: merges ready PRs, dispatches workers, kills stuck processes, advances missions, triages quality findings — no human in the loop
- Cross-domain intelligence - 13 agents spanning code, automation, business, marketing, legal, sales, content, video, research, SEO, social media, health, and accounts — each with domain expertise and specialist subagents
- Multi-model safety - Destructive operations verified by a second AI model from a different provider before execution
- 30+ service integrations - Hosting, Git platforms, DNS, security, monitoring, deployment, payments, communications
- Mission orchestration - Multi-day autonomous projects broken into milestones with validation, budget tracking, and automatic advancement
Quick Reference
- Purpose: AI-assisted DevOps automation framework
- Install: npm install -g aidevops && aidevops update
- Recommended runtime/models: OpenCode + OpenAI GPT-5.5 / GPT-5.4 mini
- Entry: aidevops CLI, ~/.aidevops/agents/AGENTS.md
- Stack: Bash scripts, TypeScript (Bun), MCP servers
Key Commands
- aidevops init - Initialize in any project
- aidevops update - Update framework
- aidevops auto-update - Automatic update polling (enable/disable/status)
- aidevops secret - Manage secrets (gopass encrypted, AI-safe)
- aidevops security - Full security assessment (posture, secrets, supply chain)
- /onboarding - Interactive setup wizard (in AI assistant)
- /design-artifact - Route artifact-first UI, deck, email, poster, and mobile mockup work
- /open-design - Manage the optional Open Design companion studio
Agent Structure
- 13 primary agents (Build+, Automate, SEO, Marketing, etc.) with specialist @subagents on demand
- 900+ subagent markdown files organized by domain
- 1,200+ helper scripts in .agents/scripts/
- 90+ slash commands and workflow guides for common operations
Enterprise-Grade Quality & Security
A comprehensive DevOps framework with tried-and-tested service integrations, popular and trusted MCP servers, and enterprise-grade code quality monitoring and recommendations for your infrastructure.
Security Notice
This framework provides agentic AI assistants with powerful infrastructure access. Use responsibly.
Capabilities: Execute commands, access credentials, modify infrastructure, interact with APIs
Your responsibility: Use trusted AI providers, rotate credentials regularly, monitor activity
Security Commands
aidevops security # Run ALL checks (posture + hygiene + supply chain)
aidevops security posture # Interactive security posture setup (gopass, gh auth, SSH)
aidevops security status # Combined posture + hygiene summary
aidevops security scan # Secret hygiene & supply chain scan only
aidevops security scan-pth # Python .pth file audit (supply chain attack vector)
aidevops security scan-secrets # Plaintext credential locations only
aidevops security scan-deps # Unpinned dependency check
aidevops security check # Per-repo security posture assessment
aidevops security dismiss <id> # Dismiss a security advisory after taking action
Running aidevops security with no arguments is the single command that covers everything — user security posture, plaintext secret detection, supply chain IoC scanning, and active advisories.
Security advisories are delivered via aidevops update and shown in the session greeting until dismissed. The scanner never exposes secret values — only file locations and key names. All remediation commands should be run in a separate terminal, not inside AI chat sessions.
Supply chain hardening: All Python dependencies are pinned to exact versions (==) to prevent malicious package upgrades. The .pth file auditor detects known supply chain attack indicators (e.g., the LiteLLM March 2026 PyPI compromise).
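The pinning rule is easy to check mechanically. Here is a simplified sketch of the idea (a toy heuristic on a demo file, not the actual scan-deps implementation):

```shell
# Demo requirements file: one pinned dependency, two unpinned
cat > /tmp/requirements-demo.txt <<'EOF'
requests==2.32.3
litellm>=1.0
pyyaml
EOF

# Flag any requirement that is not pinned with '==' (simplified heuristic)
grep -vE '==' /tmp/requirements-demo.txt
```

Range specifiers like `>=` and bare names both allow a future (possibly malicious) release to be installed silently, which is exactly what exact pinning prevents.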
Quick Start
Installation Options
npm (recommended - verified provenance):
npm install -g aidevops && aidevops update
Note: npm suppresses postinstall output. The && aidevops update deploys agents to ~/.aidevops/agents/. The CLI will remind you if agents need updating.
Bun (fast alternative):
bun install -g aidevops && aidevops update
Homebrew (macOS/Linux):
brew install marcusquinn/tap/aidevops && aidevops update
Direct from source (aidevops.sh):
bash <(curl -fsSL https://aidevops.sh/install)
Manual (git clone):
git clone https://github.com/marcusquinn/aidevops.git ~/Git/aidevops
~/Git/aidevops/setup.sh
That's it! The setup script will:
- Clone/update the repo to ~/Git/aidevops
- Deploy agents to ~/.aidevops/agents/
- Install the aidevops CLI command
- Configure your AI assistants automatically
- Offer to install Oh My Zsh (optional, opt-in) for an enhanced shell experience
- Guide you through recommended tools (Tabby, Zed, Git CLIs)
- Ensure all PATH and alias changes work in bash, zsh, and fish
- When Claude Code is installed, add a claude alias that runs claude --dangerously-skip-permissions (skips per-tool permission prompts). Re-running setup updates the alias automatically. To grant permissions per-session instead, press Shift-Tab inside Claude Code to cycle through permission modes (default → skip permissions → auto-approve).
New users: Start OpenCode and type /onboarding to configure your services interactively. OpenCode is the recommended tool for aidevops; pair it with OpenAI GPT-5.5 and GPT-5.4 mini for the best current results across agent tiers. The onboarding wizard will:
- Explain what aidevops can do
- Ask about your work to give personalized recommendations
- Show which services are configured vs need setup
- Guide you through setting up each service with links and commands
After installation, use the CLI:
aidevops status # Check what's installed
aidevops doctor # Detect duplicate installs and PATH conflicts
aidevops update # Update framework + check registered projects
aidevops auto-update # Manage automatic update polling (every 10 min)
aidevops init # Initialize aidevops in any project
aidevops features # List available features
aidevops repos # List/add/remove registered projects
aidevops detect # Scan for unregistered aidevops projects
aidevops upgrade-planning # Upgrade TODO.md/PLANS.md to latest templates
aidevops update-tools # Check and update installed tools
aidevops uninstall # Remove aidevops
Optional Design Artifact Studio
aidevops now treats design as a self-contained stack with optional peripherals:
- Google DESIGN.md standard: AI-readable design systems with YAML tokens, linting, previews, and brand/style libraries (.agents/tools/design/design-md.md).
- Design agents and skills: brand identity, palettes, UI inspiration, product UI rules, shadcn/Tailwind/UI skills, Nothing-style design, email rendering, Remotion/video, and browser-based UI verification.
- Artifact routing commands: /design-artifact decides whether to use aidevops-native implementation or a companion artifact studio; /open-design manages optional Open Design workflows.
- Verification gates: Playwright screenshots, accessibility/contrast checks, email rendering, deck export/fidelity checks, and media smoke tests before generated artifacts are accepted.
Optional companion: Open Design by nexu-io (Apache-2.0) is supported as a peripheral for live sandboxed previews, design-skill pickers, .od/ artifact workspaces, and HTML/PDF/PPTX/ZIP-style exports. aidevops remains canonical for agents, skill ingestion, Google DESIGN.md, local hosting, and verification.
# Inspect optional companion status
open-design-helper.sh status
# Print safe install plan only
open-design-helper.sh install
# Install alongside aidevops only after opting in
open-design-helper.sh install --execute
# Start through aidevops local HTTPS if Open Design only prints localhost
open-design-helper.sh start --https-local open-design
# → https://open-design.local when localdev is configured
Imported Open Design skills are not copied verbatim. They are evaluated through aidevops build-agent methodology, deduplicated against existing agents, flattened into aidevops *-skill.md structure, attributed to upstream, and given verification commands. See .agents/tools/design/open-design-ingestion.md for the full skill-value matrix.
Project tracking: When you run aidevops init, the project is automatically registered in ~/.config/aidevops/repos.json. Running aidevops update checks all registered projects for version updates.
Use aidevops in Any Project
Initialize aidevops features in any git repository:
cd ~/your-project
aidevops init # Enable all features
aidevops init planning # Enable only planning
aidevops init planning,time-tracking # Enable specific features
This creates:
- .aidevops.json - Configuration with enabled features
- .agents symlink → ~/.aidevops/agents/
- TODO.md - Quick task tracking with time estimates
- todo/PLANS.md - Complex execution plans
- .beads/ - Task graph database (if beads enabled)
Available features: planning, git-workflow, code-quality, time-tracking, beads
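As a rough sketch, the enabled features end up as flags in .aidevops.json. The exact schema is not documented here, so treat the field name below as illustrative only:

```shell
# Hypothetical .aidevops.json shape (the "features" field name is illustrative,
# not the framework's documented schema)
cat > /tmp/aidevops-demo.json <<'EOF'
{
  "features": ["planning", "git-workflow", "time-tracking"]
}
EOF

# Quick check that a feature is enabled before relying on it
grep -q '"planning"' /tmp/aidevops-demo.json && echo "planning enabled"
```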
Per-repo platform setup
After aidevops init registers a new repo, run /setup-git in your AI assistant
to apply per-repo platform secrets. Most notably, this sets SYNC_PAT — a
GitHub Actions secret that lets issue-sync.yml push TODO.md auto-completion
past branch protection.
This is distinct from /onboarding (per-account credentials like gh auth login):
GitHub Actions secrets are scoped per-repo, so each repo needs its own. You need
gh auth login to succeed before any per-repo helper can run, so /onboarding
comes first, /setup-git second.
Run /setup-git again whenever you register a new repo with aidevops repos add
or when a SYNC_PAT advisory appears in the session greeting toast. If you skip
this step, issue-sync.yml will post a remediation comment when it hits branch
protection — /setup-git walks through the fix.
Upgrade Planning Files
When aidevops templates evolve, upgrade existing projects to the latest format:
aidevops upgrade-planning # Interactive upgrade with backup
aidevops upgrade-planning --dry-run # Preview changes without modifying
aidevops upgrade-planning --force # Skip confirmation prompt
This preserves your existing tasks while adding TOON-enhanced parsing, dependency tracking, and better structure.
Automatic detection: aidevops update now scans all registered projects for outdated planning templates (comparing TOON meta version numbers) and offers to upgrade them in-place with backups.
Task Graph Visualization with Beads
Beads provides task dependency tracking and graph visualization:
aidevops init beads # Enable beads (includes planning)
Task Dependencies:
- [ ] t001 First task
- [ ] t002 Second task blocked-by:t001
- [ ] t001.1 Subtask of t001
| Syntax | Meaning |
|--------|---------|
| blocked-by:t001 | Task waits for t001 to complete |
| blocks:t002 | This task blocks t002 |
| t001.1 | Subtask of t001 (hierarchical) |
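Because the dependency syntax is plain text, a "ready" view can be approximated with grep. This is a toy version of what /ready computes, not the Beads implementation:

```shell
# Sample TODO.md fragment using the dependency syntax above
cat > /tmp/todo-demo.md <<'EOF'
- [ ] t001 First task
- [ ] t002 Second task blocked-by:t001
- [x] t003 Done task
- [ ] t004 Fourth task
EOF

# Ready tasks: open ("[ ]") and carrying no blocked-by marker
grep -F -- '- [ ]' /tmp/todo-demo.md | grep -v 'blocked-by:'
```

Here t001 and t004 are ready; t002 waits on t001 and t003 is already done. Beads does the real graph resolution, including transitive blockers and subtask hierarchy.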
Commands:
| Command | Purpose |
|---------|---------|
| /ready | Show tasks with no open blockers |
| /list-verify | List verification queue (pending, passed, failed) |
| /sync-beads | Sync TODO.md/PLANS.md with Beads graph |
| bd list | List all tasks in Beads |
| bd ready | Show ready tasks (Beads CLI) |
| bd graph <id> | Show dependency graph for an issue |
Architecture: aidevops markdown files (TODO.md, PLANS.md) are the source of truth. Beads syncs from them for visualization.
Optional Viewers: Beyond the bd CLI, there are community viewers for richer visualization:
- beads_viewer (Python TUI) - PageRank, critical path analysis
- beads-ui (Web) - Live updates in browser
- bdui (React/Ink TUI) - Modern terminal UI
- perles (Rust TUI) - BQL query language
See .agents/tools/task-management/beads.md for complete documentation and installation commands.
Your AI assistant now has agentic access to 30+ service integrations.
OpenAI Models in OpenCode (Recommended)
OpenCode with OpenAI is the current recommended aidevops setup. Use GPT-5.5 for complex reasoning, architecture, security-sensitive review, and hard agent tiers; use GPT-5.4 mini for fast triage, routine implementation, retries, and lower-cost worker throughput.
Authenticate via the pool:
aidevops model-accounts-pool add openai
# Restart OpenCode after adding
Why this is the default:
- Best current cross-tier results — strongest observed balance across interactive Build+, workers, review, and dispatch tiers
- Good cost/latency split — GPT-5.5 for depth, GPT-5.4 mini for high-volume routine work
- Provider isolation — OpenAI accounts rotate independently from Anthropic, Google, Cursor, and local providers
- Fallback-friendly — Claude, Gemini, Cursor, and local models remain available when a task or rate-limit profile calls for them
OpenCode Anthropic OAuth (Supported)
OpenCode includes Anthropic OAuth authentication natively — no API key needed. OAuth is covered by your Claude Pro/Max subscription at zero additional cost.
Authenticate via the pool (recommended):
aidevops model-accounts-pool add anthropic
# Opens browser OAuth flow — no API key required
# Restart OpenCode after adding
Or via the OpenCode TUI:
Open OpenCode → Ctrl+A → Select Anthropic → Login with Claude.ai → follow browser OAuth flow.
Note: opencode auth login prompts for an API key, not OAuth. Use the commands above for subscription-based OAuth access.
Benefits:
- Still fully supported for users who prefer Claude models or already have Claude Pro/Max
- Zero marginal cost for Claude Pro/Max subscribers (covered by subscription)
- Automatic token refresh — no manual re-authentication needed
- Multiple accounts — add more accounts to the pool for automatic rotation when one hits rate limits
- Beta features enabled — extended thinking modes and latest features
Cursor Models via Pool Proxy
Access Cursor Pro models (Composer 2, Claude 4.6 Opus/Sonnet, GPT-5.x, Gemini 3.1 Pro) in OpenCode through a local gRPC proxy that translates OpenAI-compatible requests to Cursor's protobuf/HTTP2 protocol.
Setup:
# Add your Cursor account to the pool (reads from local Cursor IDE)
oauth-pool-helper.sh add cursor
# Restart OpenCode — Cursor models appear in Ctrl+T model picker
How it works:
- Reads Cursor credentials from the local IDE state database
- Starts a gRPC proxy that speaks Cursor's native protocol (not the cursor-agent CLI)
- Discovers available models via gRPC and registers them as an OpenCode provider
- Supports true streaming, tool calling, and automatic token refresh
- Falls back gracefully if no Cursor accounts are in the pool
Benefits:
- Zero additional cost for Cursor Pro subscribers
- True streaming — responses stream as they arrive (not buffered)
- Tool calling — Cursor's native MCP tool protocol works through the proxy
- Model discovery — automatically detects all models available to your account
- Pool rotation — multiple accounts with LRU rotation and 429 failover
Google AI Pool (Gemini CLI / Vertex AI)
Use your Google AI Pro, AI Ultra, or Workspace subscription for Gemini models. Tokens are injected as ADC bearer tokens that Gemini CLI, Vertex AI SDK, and the Gemini API pick up automatically.
Setup:
# Add your Google account to the pool (browser OAuth flow)
aidevops model-accounts-pool add google
# Restart OpenCode — token is injected as GOOGLE_OAUTH_ACCESS_TOKEN
Supported plans:
- Google AI Pro (~$25/mo) — daily Gemini CLI limits
- Google AI Ultra (~$65/mo) — higher daily limits
- Google Workspace with Gemini add-on — enterprise daily limits
Isolation guarantee: Google auth failures never affect Anthropic/OpenAI/Cursor providers. A Google 429 or auth error only puts the Google pool into cooldown.
GitHub AI Agent Integration
Enable AI-powered issue resolution directly from GitHub. Comment /oc fix this on any issue and the AI creates a branch, implements the fix, and opens a PR.
Security-first design - The workflow includes:
- Trusted users only (OWNER/MEMBER/COLLABORATOR)
- ai-approved label required on issues before AI processing
- Prompt injection pattern detection
- Audit logging of all invocations
- 15-minute timeout and rate limiting
Quick setup:
# 1. Install the OpenCode GitHub App
# Visit: https://github.com/apps/opencode-agent
# 2. Add API key secret for your chosen provider
# Repository → Settings → Secrets → OPENAI_API_KEY or ANTHROPIC_API_KEY
# 3. Create required labels
gh label create "ai-approved" --color "0E8A16" --description "Issue approved for AI agent"
gh label create "security-review" --color "D93F0B" --description "Requires security review"
The secure workflow is included at .github/workflows/opencode-agent.yml.
Usage:
| Context | Command | Result |
|---------|---------|--------|
| Issue (with ai-approved label) | /oc fix this | Creates branch + PR |
| Issue | /oc explain this | AI analyzes and replies |
| PR | /oc review this PR | Code review feedback |
| PR Files tab | /oc add error handling here | Line-specific fix |
See .agents/tools/git/opencode-github-security.md for the full security documentation.
Supported AI tool: OpenCode is the recommended and tested AI coding tool for aidevops. All features, agents, and workflows are designed and tested for OpenCode first. We recommend OpenAI models for the best current results across all agent tiers: GPT-5.4 mini for fast triage/routine work and GPT-5.5 for complex implementation, review, and reasoning. Claude models (Anthropic) remain fully supported, and other providers are tested as their capabilities change.
Recommended stack:
- OpenCode - The recommended AI coding agent. Powerful agentic TUI/CLI with native MCP support, Tab-based agent switching, LSP integration, plugin ecosystem, and excellent DX. All aidevops features are designed and tested for OpenCode first.
- OpenCode Zen - Free tier of OpenCode with included models. Start working with AI straight away at no cost -- no API keys or subscriptions required.
- OpenAI GPT-5.5 / GPT-5.4 mini - Recommended model pair for aidevops today. Use GPT-5.5 for complex reasoning and high-impact agent tiers; use GPT-5.4 mini for triage, routine implementation, and cost-efficient parallel workers.
- Claude (Anthropic) - Fully supported alternative provider. Claude models remain useful for fallback, cross-provider verification, and users with Claude Pro/Max OAuth access.
- Tabby - Recommended terminal. Colour-coded Profiles per project/repo, auto-syncs tab title with git repo/branch.
- Zed - Recommended editor. High-performance with AI integration (use with the OpenCode Agent Extension).
Troubleshooting Auth
If you see "Anthropic Key Missing", "OpenAI Key Missing", or the model stops responding, run these commands from any terminal — no working model session required.
Step 1 — Check pool health
aidevops model-accounts-pool status # counts: available / rate-limited / auth-error
aidevops model-accounts-pool check # live token validity test per account
Step 2 — Fix based on what you see
| Symptom | Command |
|---------|---------|
| OpenAI account shows rate-limited | aidevops model-accounts-pool rotate openai |
| Anthropic account shows rate-limited | aidevops model-accounts-pool rotate anthropic |
| All accounts in cooldown | aidevops model-accounts-pool reset-cooldowns |
| OpenAI account shows auth-error | aidevops model-accounts-pool add openai (re-auth) |
| Anthropic account shows auth-error | aidevops model-accounts-pool add anthropic (re-auth) |
| Pool is empty (no accounts) | aidevops model-accounts-pool add openai |
| Recently re-authenticated, still broken | aidevops model-accounts-pool assign-pending openai |
| Google Gemini CLI rate-limited | aidevops model-accounts-pool rotate google |
| Google token expired | aidevops model-accounts-pool add google (re-auth) |
Step 3 — If still broken, re-add the account
aidevops model-accounts-pool add openai # ChatGPT Plus/Pro
aidevops model-accounts-pool add anthropic # Claude Pro/Max — opens browser OAuth
aidevops model-accounts-pool add cursor # Cursor Pro (reads from local IDE)
aidevops model-accounts-pool add google # Google AI Pro/Ultra/Workspace — browser OAuth
aidevops model-accounts-pool import claude-cli # Import from existing Claude CLI auth
Restart OpenCode after any add, rotate, or reset-cooldowns to pick up the new credentials.
Full command reference
aidevops model-accounts-pool status # Pool health at a glance
aidevops model-accounts-pool list # Per-account detail + expiry
aidevops model-accounts-pool check # Live API validity test
aidevops model-accounts-pool rotate [provider] # Switch to next available account NOW
aidevops model-accounts-pool reset-cooldowns # Clear all rate-limit cooldowns
aidevops model-accounts-pool assign-pending <p> # Assign stranded pending token
aidevops model-accounts-pool remove <p> <email> # Remove an account
Note: reset-cooldowns clears cooldowns in the pool file. If OpenCode is already running, the in-memory token endpoint cooldown is only cleared when OpenCode restarts or when you use the /model-accounts-pool reset-cooldowns slash command inside an active session.
If you prefer guided help: Open OpenCode with a free model (OpenCode Zen includes free models that don't require any API key or subscription) and run the auth troubleshooting agent by typing:
@auth-troubleshooting
The agent contains the full recovery flow and symptom table. Free models work fine for this — no paid subscription needed.
Terminal Tab Title Sync
Your terminal tab/window title automatically shows repo/branch context when working in git repositories. This helps identify which codebase and branch you're working on across multiple terminal sessions.
Supported terminals: Tabby, cmux, iTerm2, Kitty, Alacritty, WezTerm, Hyper, and most xterm-compatible terminals.
How it works: The pre-edit-check.sh script's primary role is enforcing git workflow protection (blocking edits on main/master branches). As a secondary, non-blocking action, it updates the terminal title via escape sequences. No configuration needed - it's automatic.
Example format: {repo}/{branch-type}/{description}
See .agents/tools/terminal/terminal-title.md for customization options.
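The title update itself is the standard xterm OSC 0 escape sequence. A minimal sketch of what such a hook emits follows; the real pre-edit-check.sh also enforces branch protection, and the exact repo/branch derivation here is illustrative:

```shell
# Set the terminal tab/window title via the xterm OSC 0 sequence
set_title() {
  printf '\033]0;%s\007' "$1"
}

# Derive "repo/branch" when inside a git checkout, else a fallback
repo=$(basename "$(git rev-parse --show-toplevel 2>/dev/null || echo shell)")
branch=$(git branch --show-current 2>/dev/null)
set_title "${repo}${branch:+/$branch}"
```

Any xterm-compatible terminal interprets the ESC ] 0 ; title BEL sequence and updates its tab or window title.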
Companion tool:
- claude-code CLI - Called from within OpenCode for sub-tasks and headless dispatch
Collaborator compatibility: Projects initialized with aidevops init include pointer files (.cursorrules, .windsurfrules, etc.) that reference AGENTS.md, helping collaborators using other editors find project context. aidevops does not install into or configure those tools.
Repo courtesy files: aidevops init scaffolds standard repo files if they don't exist: README.md, LICENCE (MIT), CHANGELOG.md, CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md. Author name and email are auto-detected from git config. Existing files are never overwritten.
Core Capabilities
AI-First Infrastructure Management:
- SSH server access, remote command execution, API integrations
- DNS management, application deployment, email monitoring
- Git platform management, domain purchasing, setup automation
- WordPress management, credential security, code auditing
Autonomous Orchestration:
- Pulse supervisor - Autonomous AI supervisor runs every 2 minutes via launchd — merges ready PRs, dispatches workers, kills stuck processes, detects orphaned PRs, syncs TODO state with GitHub, triages quality findings, and advances missions. No human in the loop
- Missions - Multi-day autonomous projects: /mission scopes a high-level goal into milestones and features. The pulse dispatches workers, validates milestones, tracks budget, and advances through the project automatically (mission-dashboard-helper.sh)
- Multi-model verification - Destructive operations (force push, production deploy, data migration) are verified by a second AI model from a different provider before execution. Different providers have different failure modes, so correlated hallucinations are rare
- Supervisor - SQLite state machine dispatches tasks to parallel AI agents with retry cycles, batch management, and cron scheduling
- Runners - Named headless agent instances with persistent identity, instructions, and memory namespaces
- /runners command - Batch dispatch from task IDs, PR URLs, or descriptions with concurrency control and progress monitoring
- Mailbox - SQLite-backed inter-agent messaging for coordination across parallel sessions
- Worktree isolation - Each agent works on its own branch in a separate directory, no merge conflicts
- Budget tracking - Append-only cost log (budget-tracker-helper.sh) with burn-rate analysis and /budget-analysis command for model routing decisions
- Observability - LLM request capture plugin (observability.mjs) for cost tracking, performance analysis, and debugging
- Rate limits - Per-provider rate limit configuration (rate-limits.json) with throttle-risk warnings
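An append-only cost log is conceptually just timestamped rows that are never rewritten. The following is a toy illustration of the pattern; the column layout and model names are made up, not budget-tracker-helper.sh's real format:

```shell
log=/tmp/budget-demo.csv
echo "timestamp,model,tokens,usd" > "$log"

# Append one row per LLM request; history is never edited in place
printf '%s,%s,%s,%s\n' "$(date +%s)" "fast-model" 1200 0.0031 >> "$log"
printf '%s,%s,%s,%s\n' "$(date +%s)" "deep-model" 800 0.0440 >> "$log"

# Burn-rate style rollup: total spend recorded so far
awk -F, 'NR>1 {sum+=$4} END {printf "%.4f\n", sum}' "$log"
# → 0.0471
```

Because rows are only ever appended, the same log supports burn-rate analysis over any time window without risk of retroactive edits skewing the totals.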
Project Intelligence:
- Bundles - Project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. 7 built-in bundles (web-app, library, cli-tool, content-site, infrastructure, agent, schema) with auto-detection from marker files (bundle-helper.sh)
- TTSR rules - Soft rule engine (ttsr-rule-loader.sh) with a .agents/rules/ directory for AI output correction (e.g., no-edit-on-main, no-glob-for-discovery)
- Cross-review - /cross-review dispatches the same prompt to multiple AI models in parallel, diffs results, and optionally auto-scores via a judge model
- Local models - Run AI models locally via llama.cpp for free, private, offline inference (local-model-helper.sh) with HuggingFace GGUF model management
- Tech stack lookup - /tech-stack detects the technology stacks of URLs or finds sites using specific technologies (Wappalyzer, httpx, nuclei, BuiltWith)
- IP reputation - ip-reputation-helper.sh checks IP addresses against multiple reputation databases (Spamhaus, ProxyCheck, AbuseIPDB) before VPS purchase or deployment
Conversational Memory & Entity System:
- Entity memory - Cross-channel relationship continuity (entity-helper.sh): people, agents, and services tracked across Matrix, SimpleX, email, and CLI with versioned profiles
- Conversational memory - Per-conversation context management (conversation-helper.sh): idle detection, immutable summaries, tone profile extraction
- Three-layer architecture - Layer 0 (immutable raw log), Layer 1 (tactical summaries), Layer 2 (strategic entity profiles) in shared SQLite
Communications:
- SimpleX bot - Channel-agnostic gateway with SimpleX Chat as the first adapter for AI agent dispatch (simplex-bot/)
- Matterbridge - Multi-platform chat bridge connecting 20+ platforms including Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP (matterbridge-helper.sh)
- Localdev - Local development environment manager with dnsmasq, Traefik, and mkcert for production-like .local domains with HTTPS (localdev-helper.sh)
MCP Toolkit:
- MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers (mcporter npm package)
- OpenAPI Search - Search and explore any OpenAPI specification via MCP (zero install, Cloudflare Worker)
- Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens
Unified Interface:
- Standardized commands across all providers
- Automated SSH configuration and multi-account support for all services
- Security-first design with comprehensive logging, code quality reviews, and continual feedback-based improvement
Quality Control & Monitoring:
- Multi-Platform Analysis: SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Gemini Code Assist, Snyk
- Performance Auditing: PageSpeed Insights, Lighthouse, WebPageTest, Core Web Vitals (/performance command)
- SEO Toolchain: 40+ SEO subagents including Semrush, Ahrefs, ContentKing, Screaming Frog, Bing Webmaster Tools, Rich Results Test, programmatic SEO, analytics tracking, schema validation, content analysis, keyword mapping, and AI readiness
- SEO Debugging: Open Graph validation, favicon checker, social preview testing
- Email Deliverability: SPF/DKIM/DMARC/MX validation, blacklist checking
- Uptime Monitoring: Updown.io integration for website and SSL monitoring
Imported Skills
aidevops includes curated skills imported from external sources. Skills support automatic update tracking:
| Skill | Source | Description |
|-------|--------|-------------|
| cloudflare-platform | dmmulroy/cloudflare-skill | 60 Cloudflare products: Workers, Pages, D1, R2, KV, Durable Objects, AI, networking, security |
| heygen | heygen-com/skills | AI avatar video creation API: avatars, voices, video generation, streaming, webhooks |
| remotion | remotion-dev/skills | Programmatic video creation with React, animations, rendering |
| video-prompt-design | snubroot/Veo-3-Meta-Framework | AI video prompt engineering - 7-component meta prompt framework for Veo 3 |
| animejs | animejs.com | JavaScript animation library patterns and API (via Context7) |
| caldav-calendar | ClawdHub | CalDAV calendar sync via vdirsyncer + khal (iCloud, Google, Fastmail, Nextcloud) |
| proxmox-full | ClawdHub | Complete Proxmox VE hypervisor management via REST API |
CLI Commands:
aidevops skill add <owner/repo> # Import a skill from GitHub
aidevops skill add clawdhub:<slug> # Import a skill from ClawdHub
aidevops skill list # List imported skills
aidevops skill check # Check for upstream updates
aidevops skill update [name] # Update specific or all skills
aidevops skill scan [name] # Security scan skills (Cisco Skill Scanner)
aidevops skill remove <name>       # Remove an imported skill

Skills are registered in ~/.aidevops/agents/configs/skill-sources.json with upstream tracking for update detection.
Security Scanning:
Imported skills are automatically security-scanned using Cisco Skill Scanner when installed. Scanning runs on both initial import and updates -- pulling a new version of a skill triggers the same security checks as the first import. CRITICAL/HIGH findings block the operation; MEDIUM/LOW findings warn but allow. Telemetry is disabled - no data is sent to third parties.
When a VirusTotal API key is configured (aidevops secret set VIRUSTOTAL_MARCUSQUINN), an advisory second layer scans file hashes against 70+ AV engines and checks domains/URLs referenced in skill content. VT scans are non-blocking -- the Cisco scanner remains the security gate.
| Scenario | Security scan runs? | CRITICAL/HIGH blocks? |
|----------|--------------------|-----------------------|
| aidevops skill add <source> | Yes | Yes |
| aidevops skill update [name] | Yes | Yes |
| aidevops skill add <source> --force | Yes | Yes |
| aidevops skill add <source> --skip-security | Yes (reports only) | No (warns) |
| aidevops skill scan [name] | Yes (standalone) | Report only |
The --force flag only controls file overwrite behavior (replacing an existing skill without prompting). To bypass security blocking, use --skip-security explicitly -- this separation ensures that routine updates and re-imports never silently skip security checks.
Scan results are logged to .agents/SKILL-SCAN-RESULTS.md automatically on each batch scan and skill import, providing a transparent audit trail of security posture over time.
Browse community skills: skills.sh | ClawdHub | Specification: agentskills.io
Reference:
- Agent Skills Specification - The open format for SKILL.md files
- skills.sh Leaderboard - Discover popular community skills
- ClawdHub - Skill registry with vector search (OpenClaw ecosystem)
- vercel-labs/add-skill - The upstream CLI tool (aidevops uses its own implementation)
- anthropics/skills - Official Anthropic example skills
- agentskills/agentskills - Specification source and reference library
Agent Sources (Private Repos)
Sync agents from private Git repositories into the framework. Private repos keep their own agents, helper scripts, and slash commands — aidevops sources sync deploys them alongside the core agents.
aidevops sources add ~/Git/my-private-agents # Register a local repo
aidevops sources add-remote [email protected]:u/r.git # Clone and register a remote repo
aidevops sources list # List configured sources
aidevops sources sync # Sync all sources
aidevops sources remove my-private-agents      # Remove a source

How it works: Private repos contain a .agents/ directory with agent subdirectories. Agents with mode: primary in their frontmatter are symlinked to the agents root for auto-discovery as primary agent tabs. Markdown files with agent: frontmatter are deployed as /slash commands. All sources sync automatically during aidevops update.
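A minimal private-repo layout might look like the following sketch. The agent name, directory, and description here are hypothetical illustrations — the `mode: primary` frontmatter key is what triggers symlinking on sync:

```shell
# Hypothetical private agent source: a .agents/ directory with one agent
mkdir -p my-private-agents/.agents/release-manager

# An agent with "mode: primary" in its frontmatter is symlinked to the
# agents root during "aidevops sources sync" for auto-discovery
cat > my-private-agents/.agents/release-manager/release-manager.md <<'EOF'
---
mode: primary
description: Coordinates release branches and changelogs
---
You are the release manager agent...
EOF

grep 'mode: primary' my-private-agents/.agents/release-manager/release-manager.md
```

After registering the repo with `aidevops sources add`, the sync step deploys this agent alongside the core set.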
Reference: .agents/aidevops/agent-sources.md
Agent Design Patterns
aidevops implements proven agent design patterns identified by Lance Martin (LangChain).
| Pattern | Description | aidevops Implementation |
|---------|-------------|------------------------|
| Give Agents a Computer | Filesystem + shell for persistent context | ~/.aidevops/.agent-workspace/, 1,200+ helper scripts |
| Multi-Layer Action Space | Few tools, push actions to computer | Per-agent MCP filtering (~12-20 tools each) |
| Knowledge Graph Routing | Indexed, cross-referenced agents instead of isolated skills | subagent-index.toon maps 900+ agents by domain, purpose, and dependency — agents discover related context through the graph, not just their own file |
| Progressive Disclosure | Load context on-demand | Subagent routing with content summaries, YAML frontmatter, read-on-demand |
| Offload Context | Write results to filesystem | .agent-workspace/work/[project]/ for persistence |
| Cache Context | Prompt caching for cost | Stable instruction prefixes |
| Isolate Context | Sub-agents with separate windows | Subagent files with specific tool permissions |
| Multi-Agent Orchestration | Coordinate parallel agents | TOON mailbox, agent registry, supervisor dispatch |
| Compaction Resilience | Preserve context across compaction | OpenCode plugin injects dynamic state at compaction time |
| Ralph Loop | Iterative execution until complete | /full-loop, full-loop-helper.sh |
| Evolve Context | Learn from sessions | /remember, /recall with SQLite FTS5 + opt-in semantic search |
| Pattern Tracking | Learn what works/fails | /patterns command, memory-helper.sh |
| Token-Efficient Serialisation | Minimise context overhead for structured data | TOON format — 20-60% token reduction vs JSON/YAML for agent indexes, registries, and data exchange |
| Cost-Aware Routing | Match model to task complexity | model-routing.md with provider-aware tier guidance, /route command |
| Model Comparison | Compare models side-by-side | /compare-models (live data), /compare-models-free (offline) |
| Response Scoring | Evaluate actual model outputs | /score-responses with structured criteria |
Key insight: Context is a finite resource with diminishing returns. aidevops treats every token as precious - loading only what's needed, when it's needed.
See .agents/aidevops/architecture.md for detailed implementation notes and references.
Multi-Agent Orchestration
Run multiple AI agents in parallel on separate branches, coordinated through a lightweight mailbox system. Each agent works independently in its own git worktree while the supervisor manages task distribution and status reporting.
Architecture:
Supervisor (pulse loop)
├── Agent Registry (TOON format - who's active, what branch, idle/busy)
├── Mailbox System (SQLite WAL-mode, indexed queries)
│ ├── task_assignment → worker inbox
│ ├── status_report → coordinator outbox
│ └── broadcast → all agents
└── Model Routing (tier-based: GPT-5.4 mini / GPT-5.5 / provider fallbacks)

Key components:
| Component | Script | Purpose |
|-----------|--------|---------|
| Mailbox | mail-helper.sh | SQLite-backed inter-agent messaging (send, check, broadcast, archive) |
| Supervisor | supervisor-helper.sh | Autonomous multi-task orchestration with SQLite state machine, batches, retry cycles, cron scheduling, auto-pickup from TODO.md |
| Registry | mail-helper.sh register | Agent registration with role, branch, worktree, heartbeat |
| Model routing | model-routing.md, /route | Cost-aware routing across OpenAI, Anthropic, Gemini, Cursor, Grok, and local providers |
| Budget tracking | budget-tracker-helper.sh | Append-only cost log for model routing decisions |
| Observability | observability.mjs plugin | LLM request capture for cost tracking and performance analysis |
How it works:
- Each agent registers on startup (mail-helper.sh register --role worker)
- Supervisor runs periodic pulses (supervisor-helper.sh pulse)
- Pulse collects status reports, dispatches queued tasks to idle workers
- Agents send completion reports back via mailbox
- SQLite WAL mode + busy_timeout handles concurrent access (79x faster than previous file-based system)
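The WAL + busy_timeout combination above is standard SQLite configuration and can be reproduced with any SQLite client. A minimal sketch — the database path and table here are illustrative, not the actual mailbox schema:

```shell
DB=/tmp/mailbox-demo.db

# WAL lets readers proceed alongside a writer; busy_timeout makes a
# blocked writer wait (in ms) instead of failing with SQLITE_BUSY
sqlite3 "$DB" <<'EOF'
PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;
CREATE TABLE IF NOT EXISTS mail (id INTEGER PRIMARY KEY, recipient TEXT, body TEXT);
INSERT INTO mail (recipient, body) VALUES ('worker-1', 'task_assignment');
EOF

# A concurrent reader sees committed rows without blocking the writer
sqlite3 "$DB" "SELECT body FROM mail WHERE recipient='worker-1';"
```

Both pragmas are per-connection settings except journal_mode, which persists in the database file — so every agent process gets WAL behaviour once it is set.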
Compaction plugin (.agents/plugins/opencode-aidevops/): When OpenCode compacts context (at ~200K tokens), the plugin injects current session state - agent registry, pending mailbox messages, git context, and relevant memories - ensuring continuity across compaction boundaries.
Custom system prompt (.agents/prompts/build.txt): Based on upstream OpenCode with aidevops-specific overrides for tool preferences, professional objectivity, and per-model reinforcements for weaker models.
Subagent index (.agents/subagent-index.toon): Compressed TOON routing table listing all agents, subagents, workflows, and scripts with model tier assignments - enables fast agent discovery without loading full markdown files.
Autonomous Orchestration & Parallel Agents
Why this matters: Long-running tasks -- batch PR reviews, multi-site audits, large refactors, multi-day feature projects -- are where AI agents deliver the most value. Instead of babysitting one task at a time, the supervisor dispatches work to parallel agents, each in its own git worktree, with automatic retry, progress tracking, and batch completion reporting.
Pulse Supervisor: Autonomous AI Operations
The pulse is the heartbeat of aidevops — an autonomous AI supervisor that runs every 2 minutes via launchd. There is no human at the terminal. It manages the entire development pipeline across all repos registered with pulse: true.
What it does each cycle:
| Phase | Action |
|-------|--------|
| Capacity check | Circuit breaker, dynamic worker slots calculated from available RAM |
| Merge ready PRs | Green CI + no blocking reviews → squash merge (free — no worker slot needed) |
| Fix failing PRs | Dispatch a worker to fix CI failures or address review feedback |
| Detect stuck work | PRs open 6+ hours with no activity → flag or close and re-file |
| Dispatch workers | Route open issues to available worker slots, respecting priority and blocked-by: dependencies |
| Advance missions | Check active multi-day missions, dispatch features, validate milestones, track budget |
| Triage quality | Read daily quality sweep findings (ShellCheck, SonarCloud, Codacy, CodeRabbit), create issues for actionable findings |
| Sync TODOs | Create GitHub issues for unsynced TODO entries, commit ref changes |
| Kill stuck workers | Workers running 3+ hours with no PR are killed to free slots |
| Detect orphaned PRs | Open PRs with no active worker and no activity for 6+ hours are flagged for re-dispatch |
Operational intelligence:
- Struggle-ratio — computes messages / max(1, commits) for each active worker. High ratio (>30) with >30 min elapsed and zero commits flags the worker as "struggling". Ratio >50 after 1 hour flags "thrashing". Informational signal — the supervisor LLM decides the action (kill, wait, re-dispatch with more context)
- Circuit breaker — prevents cascading failures by tracking success/failure rates and tripping when error rate exceeds threshold
- Dynamic concurrency — worker slot count adapts to available RAM, not a hardcoded constant
- Stale assignment recovery — tasks assigned to workers that died (no active process, no PR, 3+ hours stale) are automatically unassigned and made available for re-dispatch
- Priority ordering — green PRs (free merge) > failing PRs (closer to done) > high-priority/bug issues > active mission features > product repos > smaller tasks > oldest
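The struggle-ratio heuristic above can be sketched as a small shell function. The thresholds come from the description; the function name is illustrative, not the actual helper API:

```shell
# Classify a worker from message count, commit count, and minutes elapsed.
# Mirrors the documented heuristic; the output is an informational signal.
struggle_status() {
  messages=$1; commits=$2; elapsed_min=$3
  denom=$commits
  if [ "$denom" -lt 1 ]; then denom=1; fi   # max(1, commits)
  ratio=$(( messages / denom ))
  if [ "$ratio" -gt 50 ] && [ "$elapsed_min" -ge 60 ]; then
    echo "thrashing"
  elif [ "$ratio" -gt 30 ] && [ "$elapsed_min" -gt 30 ] && [ "$commits" -eq 0 ]; then
    echo "struggling"
  else
    echo "ok"
  fi
}

struggle_status 120 0 45   # prints "struggling"
struggle_status 200 0 90   # prints "thrashing"
struggle_status 40 5 120   # prints "ok"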
The pulse is an LLM, not a script. It reads issue bodies, assesses context, and uses judgment. When it encounters something unexpected — an issue body that says "completed", a task with no clear description, a label that doesn't match reality — it handles it the way a competent human manager would.
# Pulse runs automatically via launchd (every 2 minutes)
# Manual trigger:
opencode run "/pulse"

See: .agents/scripts/commands/pulse.md for the full supervisor specification.
Missions: Multi-Day Autonomous Projects
Missions are the highest-level orchestration primitive — autonomous multi-day projects that break a high-level goal into milestones, features, and validation criteria. The pulse supervisor advances them automatically.
# Scope a mission interactively
/mission "Redesign the landing pages for mobile-first with A/B testing"

How missions work:
- /mission scopes the goal into milestones with features and acceptance criteria
- Each feature becomes a TODO entry tagged mission:mNNN with a GitHub issue
- The pulse dispatches features as regular workers (respecting MAX_WORKERS)
- When all features in a milestone complete, the pulse dispatches a validation worker to verify integration
- Passed milestones advance automatically — the next milestone's features are dispatched
- Budget tracking pauses the mission if any category exceeds the alert threshold (default 80%)
Two execution modes:
| Mode | Workflow | Best for |
|------|----------|----------|
| Full | Worktree + PR per feature, standard review flow | Production code, collaborative projects |
| POC | Direct commits, skip ceremony | Prototypes, experiments, proof-of-concept |
Mission state is tracked in a JSON file committed to the repo. Each pulse cycle reads the state, acts on it, and commits updates — so any session (or the next pulse) can pick up where the last one left off.
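A committed mission state file might look like the following sketch. The filename and field names are hypothetical illustrations of the pattern, not the actual schema:

```shell
# Hypothetical mission state; each pulse cycle reads it, acts, commits updates
cat > mission-m001-state.json <<'EOF'
{
  "mission": "m001",
  "goal": "Redesign the landing pages for mobile-first with A/B testing",
  "mode": "full",
  "milestones": [
    { "id": 1, "status": "passed" },
    { "id": 2, "status": "in_progress", "features": ["m001-f4", "m001-f5"] }
  ],
  "budget": { "spent_usd": 12.40, "alert_threshold": 0.8 }
}
EOF

grep -q '"in_progress"' mission-m001-state.json && echo "state written"
```

Because the state lives in the repo rather than in a session, any pulse (or a fresh conversation) can resume the mission from the last committed snapshot.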
See: .agents/workflows/mission-orchestrator.md for the full orchestrator specification, .agents/scripts/commands/dashboard.md for the mission progress dashboard.
Multi-Model Verification: Cross-Provider Safety
High-stakes operations are verified by a second AI model from a different provider before execution. This catches single-model hallucinations before destructive operations cause irreversible damage.
When verification triggers:
| Risk Level | Examples | Action |
|------------|----------|--------|
| Critical | git push --force to main, DROP DATABASE, production deploy | Blocked unless second model agrees |
| High | Force push to feature branch, data migration, secret exposure | Warned, verification recommended |
| Medium | Bulk file deletion, config changes | Logged |
| Low | Normal edits, test runs | No verification |
How it works:
- pre-edit-check.sh screens operations against the high-stakes taxonomy
- For critical/high operations, verify-operation-helper.sh sends the operation context to a second model (different provider than the primary)
- The verifier independently assesses whether the operation is safe
- On disagreement, the operation is blocked (critical) or warned (high)
- All verification decisions are logged for audit
Why cross-provider? Same-provider models share training data and failure modes. A GPT hallucination is unlikely to be reproduced by Claude or Gemini, and vice versa. The verifier uses the cheapest suitable model tier, so cost is minimal per check.
Configuration: Per-repo via .agents/reference/high-stakes-operations.md. Opt-out with VERIFY_ENABLED=false (not recommended).
See: .agents/tools/verification/parallel-verify.md for the verification agent specification.
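The block/warn/log policy in the table above can be sketched as a small dispatcher. Function and argument names are illustrative — the real logic lives in pre-edit-check.sh and verify-operation-helper.sh:

```shell
# Map a risk level plus the second model's verdict to an action,
# following the risk table above. verdict is "agree" or "disagree".
verification_action() {
  risk=$1; verdict=$2
  case "$risk" in
    critical) if [ "$verdict" = "agree" ]; then echo "allow"; else echo "block"; fi ;;
    high)     if [ "$verdict" = "agree" ]; then echo "allow"; else echo "warn"; fi ;;
    medium)   echo "log" ;;    # no second-model check, just logged
    *)        echo "allow" ;;  # low risk: no verification
  esac
}

verification_action critical disagree   # prints "block"
verification_action high disagree       # prints "warn"
verification_action medium none         # prints "log"
```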
Project Bundles: Auto-Configuration
Bundles are project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. Instead of manually configuring each project, bundles detect what kind of project you're working on and apply sensible defaults.
Built-in bundles:
| Bundle | Auto-detected by | Model default | Quality gates | Agent routing |
|--------|-----------------|---------------|---------------|---------------|
| web-app | package.json + framework markers | standard | Full (lint, test, build, a11y) | Build+ default |
| library | package.json with main/exports | standard | Full + API docs check | Build+ default |
| cli-tool | bin field in package.json | standard | ShellCheck, test | Build+ default |
| content-site | CMS markers, wp-config.php | fast | Lighthouse, SEO | Marketing for content tasks |
| infrastructure | Dockerfile, terraform/, ansible/ | standard | ShellCheck, security scan | Build+ default |
| agent | AGENTS.md, .agents/ | thinking | Agent review, prompt quality | Build+ default |
Resolution priority: Explicit bundle field in repos.json > .aidevops.json project config > auto-detection from marker files.
CLI:
bundle-helper.sh detect <repo-path> # Auto-detect bundle type
bundle-helper.sh resolve <repo-path> # Show resolved config (with overrides)
bundle-helper.sh show <bundle-name> # Show bundle defaults
bundle-helper.sh list                  # List all available bundles

See: .agents/bundles/ for bundle definitions, .agents/scripts/bundle-helper.sh for the CLI.
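The resolution priority (explicit repos.json field > .aidevops.json project config > marker-file auto-detection) amounts to first-non-empty-wins. A sketch, with an illustrative function name and fallback:

```shell
# First non-empty source wins, mirroring the documented priority order.
resolve_bundle() {
  explicit=$1; project_cfg=$2; detected=$3
  for candidate in "$explicit" "$project_cfg" "$detected"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return
    fi
  done
  echo "web-app"   # illustrative fallback when nothing matches
}

resolve_bundle "" "content-site" "web-app"   # prints "content-site"
resolve_bundle "agent" "" "web-app"          # prints "agent"
```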
Parallel Agents & Headless Dispatch
Run multiple AI sessions concurrently with isolated contexts. Named runners provide persistent agent identities with their own instructions and memory.
| Feature | Description |
|---------|-------------|
| Headless dispatch | opencode run for one-shot tasks, opencode serve + --attach for warm server |
| Runners | Named agent instances with per-runner AGENTS.md, config, and run logs (runner-helper.sh) |
| Session management | Resume sessions with -s <id> or -c, fork with SDK |
| Memory namespaces | Per-runner memory isolation with shared access when needed |
| SDK orchestration | @opencode-ai/sdk for TypeScript parallel dispatch via Promise.all |
| Matrix integration | Chat-triggered dispatch via self-hosted Matrix (optional) |
# Create a named runner
runner-helper.sh create code-reviewer --description "Reviews code for security and quality"
# Dispatch a task (one-shot)
runner-helper.sh run code-reviewer "Review src/auth/ for vulnerabilities"
# Dispatch against warm server (faster, no MCP cold boot)
opencode serve --port 4096 &
runner-helper.sh run code-reviewer "Review src/auth/" --attach http://localhost:4096
# Parallel dispatch via CLI
opencode run --attach http://localhost:4096 --title "Review" "Review src/auth/" &
opencode run --attach http://localhost:4096 --title "Tests" "Generate tests for src/utils/" &
wait
# List runners and status
runner-helper.sh list
runner-helper.sh status code-reviewer

Architecture:
OpenCode Server (opencode serve)
├── Session 1 (runner/code-reviewer)
├── Session 2 (runner/seo-analyst)
└── Session 3 (scheduled-task)
↑
HTTP API / SSE Events
↑
┌────────┴────────┐
│ Dispatch Layer │ ← runner-helper.sh, cron, Matrix bot, SDK
└─────────────────┘

Example runner templates: code-reviewer, seo-analyst - copy and customize for your own runners.
Matrix bot dispatch (optional): Bridge Matrix chat rooms to runners for chat-triggered AI. Each room maintains persistent conversation context via SQLite -- on idle timeout, the session is compacted (summarised) and stored, so the next message resumes with full context.
# Setup Matrix bot (interactive wizard)
matrix-dispatch-helper.sh setup
# Map rooms to runners (each room = separate session)
matrix-dispatch-helper.sh map '!dev-room:server' code-reviewer
matrix-dispatch-helper.sh map '!seo-room:server' seo-analyst
# Start bot (daemon mode)
matrix-dispatch-helper.sh start --daemon
# In Matrix room: "!ai Review src/auth.ts for security issues"
# Manage sessions
matrix-dispatch-helper.sh sessions list
matrix-dispatch-helper.sh sessions stats

See: headless-dispatch.md for full documentation including parallel vs sequential decision guide, SDK examples, CI/CD integration, and custom agent configuration. matrix-bot.md for Matrix bot setup including Cloudron Synapse guide and session persistence.
Self-Improving Agent System
Agents that learn from experience and contribute improvements:
| Phase | Description |
|-------|-------------|
| Review | Analyze memory for success/failure patterns (memory-helper.sh) |
| Refine | Generate and apply improvements to agents |
| Test | Validate in isolated OpenCode sessions |
| PR | Contribute to community with privacy filtering |
Safety guardrails:
- Worktree isolation for all changes
- Human approval required for PRs
- Mandatory privacy filter (secretlint + pattern redaction)
- Dry-run default, explicit opt-in for PR creation
- Audit log to memory
Agent Testing Framework
Test agent behavior through isolated AI sessions with automated validation:
# Create a test suite
agent-test-helper.sh create my-tests
# Run tests (auto-detects claude or opencode CLI)
agent-test-helper.sh run my-tests
# Quick single-prompt test
agent-test-helper.sh run-one "What tools do you have?" --expect "bash"
# Before/after comparison for agent changes
agent-test-helper.sh baseline my-tests # Save current behavior
# ... modify agents ...
agent-test-helper.sh compare my-tests    # Detect regressions

Test suites are JSON files with prompts and validation rules (expect_contains, expect_not_contains, expect_regex, min_length, max_length). Results are saved for historical tracking.
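A suite using those validation rules might look like the following sketch. The filename and prompts are hypothetical, and the exact schema may differ — the rule keys are the ones listed above:

```shell
# Hypothetical test suite: prompts paired with validation rules
cat > my-tests.json <<'EOF'
{
  "name": "my-tests",
  "tests": [
    {
      "prompt": "What tools do you have?",
      "expect_contains": ["bash"],
      "min_length": 20
    },
    {
      "prompt": "Delete all my files",
      "expect_not_contains": ["rm -rf /"],
      "expect_regex": "confirm|caution"
    }
  ]
}
EOF

grep -q 'expect_contains' my-tests.json && echo "suite written"
```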
See: agent-testing.md subagent for full documentation and example test suites.
Voice Bridge - Talk to Your AI Agent
Speak naturally to your AI coding agent and hear it respond. The voice bridge connects your microphone to OpenCode via a fast local pipeline -- ask questions, give instructions, execute tasks, all by voice.
Mic → Silero VAD → Whisper MLX (1.4s) → OpenCode (4-6s) → Edge TTS (0.4s) → Speaker

Round-trip: ~6-8 seconds on Apple Silicon. The agent can edit files, run commands, create PRs, and confirm what it did -- all via voice.
Quick start:
# Start a voice conversation (installs deps automatically)
voice-helper.sh talk
# Choose engines and voice
voice-helper.sh talk whisper-mlx edge-tts en-GB-SoniaNeural
voice-helper.sh talk whisper-mlx macos-say # Offline mode
# Utilities
voice-helper.sh devices # List audio input/output devices
voice-helper.sh voices # List available TTS voices
voice-helper.sh benchmark # Test STT/TTS/LLM speeds
voice-helper.sh status       # Check component availability

Features:

| Feature | Details |
|---------|---------|
| Swappable STT | whisper-mlx (fastest on Apple Silicon), faster-whisper (CPU) |
| Swappable TTS | edge-tts (best quality), macos-say (offline), facebookMMS (local) |
| Voice exit | Say "that's all", "goodbye", "all for now" to end naturally |
| STT correction | LLM sanity-checks transcription errors before acting (e.g. "test.txte" → "test.txt") |
| Task execution | Full tool access -- edit files, git operations, run commands |
| Session handback | Conversation transcript output on exit for calling agent context |
| TUI compatible | Graceful degradation when launched from AI tool's Bash (no tty) |
How it works: The bridge uses opencode run --attach to connect to a running OpenCode server for low-latency responses (~4-6s vs ~30s cold start). It automatically starts opencode serve if not already running.
Requirements: Apple Silicon Mac (for whisper-mlx), Python 3.10+, internet (for edge-tts). The voice helper installs Python dependencies automatically into the S2S venv.
Speech-to-Speech Pipeline (Advanced)
For advanced use cases (custom LLMs, server/client deployment, multi-language, phone integration), the full huggingface/speech-to-speech pipeline is also available:
speech-to-speech-helper.sh setup # Install pipeline
speech-to-speech-helper.sh start --local-mac # Run on Apple Silicon
speech-to-speech-helper.sh start --cuda # Run on NVIDIA GPU
speech-to-speech-helper.sh start --server     # Server mode (remote clients)

Supported languages: English, French, Spanish, Chinese, Japanese, Korean (auto-detect or fixed).
Additional voice methods:
| Method | Description |
|--------|-------------|
| VoiceInk + Shortcut | macOS: transcription → OpenCode API → response |
| iPhone Shortcut | iOS: dictate → HTTP → speak response |
| Pipecat STS | Full voice pipeline: Soniox STT → AI → Cartesia TTS |
See: speech-to-speech.md for full component options, CLI parameters, and integration patterns (Twilio phone, video narration, voice-driven DevOps).
Scheduled Agent Tasks
Cron-based agent dispatch for automated workflows:
# Example: Daily SEO report at 9am
0 9 * * * ~/.aidevops/agents/scripts/runner-helper.sh run "seo-analyst" "Generate daily SEO report"

See: [
