hari-seldon
v3.0.1
Hari Seldon
Multi-Agent AI Orchestration with Git Worktree Isolation
Hari Seldon extends Claude Code with multi-provider AI orchestration and git worktree isolation. Each coding agent receives its own isolated worktree - a separate checkout on a dedicated branch - enabling true parallel development without file conflicts. Get second opinions from external AI models without leaving Claude Code.
Features
- Zero-Config Setup - Auto-registers with Claude Code on npm install
- 13+ AI Providers - From budget (DeepSeek, Groq) to premium (OpenAI, Anthropic)
- Multi-Provider AI Orchestration - Route tasks to Anthropic, OpenAI, Google Gemini, DeepSeek, Z.AI (GLM-4), Kimi, or local Ollama models
- Git Worktree Isolation - Each coding agent works in an isolated directory on its own branch
- Pipeline Execution - Multi-step DAG workflows with dependency ordering and parallel execution
- Merge Conflict Detection - Automatic detection with 4 resolution strategies (ours, theirs, auto, manual)
- Resource Management - Configurable limits on worktree count, disk usage, and stale detection
- Worktree Pooling - Pre-warmed worktrees for zero-latency allocation
- State Persistence - Survives server restarts with automatic state recovery
- Crash Recovery - Orphan detection, stuck worktree handling, and cleanup commands
- Intelligent Failover - Automatic provider switching with health tracking, circuit breakers, and cost-aware routing
- Context Manager - Token tracking per provider/model with 40+ model limits, auto-compact at 90%, blocking at 98%
- Hooks System - 9 event types for customizing agent behavior with variable substitution and hot-reload
- Background Task Runner - Queue-based async task execution with AbortController cancellation and progress tracking
- MCP Auto Mode - Intelligent tool selection under context pressure with priority-based deferral
- Skills Hot-Reload - Custom skill definitions with auto-discovery from user and project directories
Quick Start
Option 1: Claude Code Plugin (Recommended)
Install as a Claude Code plugin for slash commands and auto-invoked skills:
# Add the marketplace
claude plugin marketplace add github:sashabogi/hari-seldon
# Install the plugin
claude plugin install hari-seldon
Restart Claude Code, then use:
/hari-seldon:invoke critic "Review my authentication approach"
/hari-seldon:review
/hari-seldon:status
Option 2: MCP Server Only
Install as an MCP server (no slash commands, just MCP tools):
npm install -g hari-seldon
Hari Seldon automatically registers with Claude Code during installation. Restart Claude Code to activate.
To verify: Run /mcp in Claude Code - you should see hari-seldon · ✓ connected
Initial Setup
hari-seldon setup
This interactive wizard will:
- Select which providers you have API keys for
- Configure each provider and test connections
- Set up agent roles
- Create your configuration file
Plugin Commands
When installed as a Claude Code plugin, these slash commands are available:
| Command | Description | Example |
|---------|-------------|---------|
| /hari-seldon:invoke | Invoke an agent by role | /hari-seldon:invoke critic "Review this plan" |
| /hari-seldon:review | Get external code review | /hari-seldon:review src/auth.ts |
| /hari-seldon:critique | Get plan/architecture critique | /hari-seldon:critique |
| /hari-seldon:compare | Compare multiple agents | /hari-seldon:compare coder,reviewer "Implement auth" |
| /hari-seldon:status | Show provider health | /hari-seldon:status |
| /hari-seldon:setup | Configure providers | /hari-seldon:setup |
| /hari-seldon:pipeline | Run multi-agent pipeline | /hari-seldon:pipeline |
| /hari-seldon:worktree | Manage git worktrees | /hari-seldon:worktree list |
Auto-Invoked Skills
The plugin also includes context-aware skills that Claude invokes automatically:
| Skill | Triggers When |
|-------|---------------|
| second-opinion | Making architectural decisions or choosing approaches |
| code-review-suggest | After significant code changes (50+ lines) |
| failover-aware | Provider errors occur (explains what happened) |
| multi-agent-pipeline | Complex multi-step tasks are described |
| session-status | Session starts (shows provider health) |
How It Works
Your Request
|
v
+----------------------------------------------------------+
| Claude Code (MCP Client) |
+----------------------------------------------------------+
|
stdio / MCP
|
v
+----------------------------------------------------------+
| Hari Seldon MCP Server |
| +----------------------------------------------------+ |
| | State Coordinator | |
| | (Task state machine, persistence, recovery) | |
| +----------------------------------------------------+ |
| | | |
| v v |
| +-------------+ +------------------+ |
| | Task | | Worktree | |
| | Coordinator | | Manager | |
| +-------------+ +------------------+ |
| | | |
| v v |
| +-------------+ +------------------+ |
| | Pipeline | | Branch | |
| | Manager | | Manager | |
| +-------------+ +------------------+ |
+----------------------------------------------------------+
| |
v v
+------------------+ .hari-seldon/worktrees/
| External AI APIs | +--------+ +--------+ +--------+
| (GPT-4o, Gemini, | | task-1 | | task-2 | | task-3 |
| DeepSeek, etc.) | | branch | | branch | | branch |
+------------------+  +--------+ +--------+ +--------+
Worktree Flow
- Task received - Claude Code sends a coding task via MCP
- Worktree allocated - Hari Seldon creates an isolated git worktree with a dedicated branch
- Agent works - The AI agent makes changes in isolation without affecting other agents
- Merge back - Changes are merged via PR or direct merge, with conflict detection
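Under the hood, the allocation step amounts to plain `git worktree` commands. The sketch below is illustrative only (the helper names are not the actual API); it reuses the `baseDir` (`.hari-seldon/worktrees`) and `branchPattern` (`task/{taskId}`) defaults from the configuration section:

```typescript
import { execSync } from "node:child_process";
import * as path from "node:path";

// Defaults mirrored from the worktrees config section.
const BASE_DIR = ".hari-seldon/worktrees";
const BRANCH_PATTERN = "task/{taskId}";

// Derive the dedicated branch name for a task, e.g. "task/task-1".
export function branchFor(taskId: string): string {
  return BRANCH_PATTERN.replace("{taskId}", taskId);
}

// Derive the isolated checkout directory for a task.
export function worktreePathFor(taskId: string): string {
  return path.join(BASE_DIR, taskId);
}

// Allocate: create a worktree on its own branch off the base branch.
export function allocateWorktree(taskId: string, baseBranch = "main"): void {
  execSync(
    `git worktree add ${worktreePathFor(taskId)} -b ${branchFor(taskId)} ${baseBranch}`,
    { stdio: "inherit" }
  );
}
```

Because each task gets its own checkout directory and branch, agents can edit the same files concurrently without stepping on each other; conflicts surface only at merge time.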
Intelligent Failover System
Hari Seldon includes automatic provider failover with health tracking:
┌─────────────────────────────────────────────────────────────┐
│ Request to Role │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────┐
│ Primary Provider │
│ (e.g. DeepSeek)│
└────────┬────────┘
│
┌──────────────┴──────────────┐
│ │
✓ Success ✗ Error (429, 500, etc.)
│ │
▼ ▼
Return Result ┌─────────────────┐
│ Health Tracker │
│ marks unhealthy │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Circuit Breaker │
│ checks state │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Fallback Chain │
│ (OpenAI → Anthropic)│
└────────┬────────┘
│
▼
Next Provider
Key Features
- Automatic Retry - Exponential backoff on transient errors
- Health Tracking - Monitors provider success/failure rates
- Cooldown Periods - Unhealthy providers are temporarily skipped
- Circuit Breakers - Prevents cascading failures
- Cost-Aware Routing - Optionally prefer cheaper providers
- Configurable Error Codes - Define which errors trigger failover
- Rate Limit Warnings - Proactive warnings at 70% capacity with header parsing for OpenAI/Anthropic/generic formats
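The retry settings shown later in the configuration (`initial_delay_ms: 1000`, `max_delay_ms: 30000`, `strategy: exponential`, `on_errors: [429, 500, 502, 503, 504]`) imply a delay schedule like the following. This is a hedged sketch of the idea, not the actual implementation:

```typescript
// Compute the backoff delay for a retry attempt (0-based), mirroring the
// retry block from the configuration section: exponential growth from
// initial_delay_ms, capped at max_delay_ms.
export function backoffDelayMs(
  attempt: number,
  initialDelayMs = 1000,
  maxDelayMs = 30000
): number {
  // Delays: 1000, 2000, 4000, 8000, ... capped at maxDelayMs.
  return Math.min(initialDelayMs * 2 ** attempt, maxDelayMs);
}

// Decide whether an HTTP status should trigger failover,
// per the on_errors list in the fallback_chain config.
export function shouldFailover(
  status: number,
  onErrors: number[] = [429, 500, 502, 503, 504]
): boolean {
  return onErrors.includes(status);
}
```

For example, a 429 on the primary provider would wait 1s, retry, wait 2s, then hand off to the fallback chain; a 404 would surface immediately as a normal error.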
Session Mode (Claude Code Users)
When running inside Claude Code, Hari Seldon can delegate Anthropic-based roles back to the current session—no subprocess, no API key, no additional cost.
Quick Setup
# ~/.config/hari-seldon/config.yaml
providers:
  anthropic:
    access_mode: session  # or 'auto' for flexibility
How It Works
Instead of spawning a subprocess or making API calls, Hari Seldon returns a delegation response that the current Claude session handles directly:
- orchestrator, reviewer, researcher roles → Handled by current session
- critic, coder, designer roles → External API calls (OpenAI, Gemini, etc.)
Benefits
| Without Session Mode | With Session Mode |
|---------------------|-------------------|
| 2-5s subprocess startup | ~0ms direct execution |
| Message structure lost | Full context preserved |
| Uses subscription quota | No additional cost |
See Session Delegation Architecture for complete documentation.
Hooks System
Hari Seldon provides a flexible hooks system that lets you customize agent behavior by running commands at key lifecycle events.
Supported Events
| Event | Description |
|-------|-------------|
| PreToolUse | Before a tool is invoked |
| PostToolUse | After a tool completes |
| Setup | During agent initialization |
| ToolError | When a tool encounters an error |
| TaskStart | When a task begins execution |
| TaskComplete | When a task finishes successfully |
| TaskFail | When a task fails |
| ProviderError | When an AI provider returns an error |
| Failover | When switching to a fallback provider |
Configuration
# In ~/.config/hari-seldon/config.yaml
hooks:
  enabled: true
  configPath: ~/.hari-seldon/hooks.yaml
  events:
    PreToolUse:
      - command: "echo 'Tool: ${TOOL_NAME}'"
        timeout_ms: 5000
        filter:
          tools: ["execute_task", "invoke_agent"]
    TaskComplete:
      - command: "./notify.sh ${TASK_ID}"
        timeout_ms: 10000
    ProviderError:
      - command: "logger -t hari-seldon 'Provider ${PROVIDER} failed: ${ERROR}'"
Variable Substitution
Hooks support variable substitution with context-specific values:
- ${TOOL_NAME} - Name of the tool being invoked
- ${TASK_ID} - Current task identifier
- ${PROVIDER} - AI provider name
- ${ERROR} - Error message (for error events)
- ${RESULT} - Result data (for post-completion events)
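The substitution behaves like ordinary `${VAR}` template expansion. A minimal illustration of the mechanism described above (the real implementation may differ in details such as escaping):

```typescript
// Substitute ${VAR} placeholders in a hook command with context values.
// Unknown variables are left untouched rather than replaced with "".
export function substitute(
  command: string,
  vars: Record<string, string>
): string {
  return command.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}
```

For example, `substitute("echo 'Tool: ${TOOL_NAME}'", { TOOL_NAME: "invoke_agent" })` yields `echo 'Tool: invoke_agent'`.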
Features
- Filtering - Run hooks only for specific tools, providers, or roles
- Timeouts - Configurable per-hook timeouts prevent blocking
- Hot-Reload - Changes to hooks.yaml are picked up automatically
Skills Hot-Reload
Define custom skills that extend agent capabilities with automatic discovery and hot-reload.
Skill Locations
Skills are loaded from:
- ~/.hari-seldon/skills/ - User-level skills (shared across projects)
- .hari-seldon/skills/ - Project-level skills (project-specific)
Skill Definition
# ~/.hari-seldon/skills/code-review.yaml
name: code-review
description: Comprehensive code review skill
context_mode: inherit  # inherit | fork | isolated
prompts:
  system: |
    You are an expert code reviewer focusing on:
    - Security vulnerabilities
    - Performance issues
    - Best practices
    - Code clarity
triggers:
  - pattern: "review this code"
  - pattern: "code review"
actions:
  - type: invoke_agent
    role: reviewer
Context Modes
| Mode | Description |
|------|-------------|
| inherit | Skill runs in the parent agent's context |
| fork | Skill gets a copy of the parent context |
| isolated | Skill runs with a fresh, empty context |
Features
- Auto-Discovery - Skills are automatically loaded on startup
- Hot-Reload - Changes to skill files are picked up without restart
- Priority Ordering - Project skills override user skills with the same name
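The priority ordering amounts to a simple name-keyed merge where project entries win. A sketch of that idea (the `Skill` shape here is simplified for illustration):

```typescript
interface Skill {
  name: string;
  description: string;
}

// Merge user-level and project-level skills. Loading project skills
// second means a project skill with the same name overrides the user
// skill, matching the priority ordering described above.
export function mergeSkills(
  user: Skill[],
  project: Skill[]
): Map<string, Skill> {
  const merged = new Map<string, Skill>();
  for (const s of user) merged.set(s.name, s);    // user skills first
  for (const s of project) merged.set(s.name, s); // project skills win
  return merged;
}
```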
Context Manager
Hari Seldon includes intelligent context management to prevent token limit issues and optimize AI provider usage.
Features
- Token Tracking - Tracks token usage per provider/model with 40+ pre-configured model limits
- Auto-Compact - Automatically compacts context when reaching 90% capacity
- Blocking Threshold - Blocks new requests at 98% to prevent errors
- Visualization - Context usage visible in debug logs and status commands
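The two thresholds partition context usage into three states. A minimal sketch of the check, assuming the 90%/98% defaults from the configuration below (function name is illustrative):

```typescript
type ContextAction = "ok" | "compact" | "block";

// Classify context usage against the thresholds described above:
// auto-compact at 90% of the model limit, block new requests at 98%.
export function classifyContext(
  usedTokens: number,
  modelLimit: number,
  compactThreshold = 0.9,
  blockThreshold = 0.98
): ContextAction {
  const ratio = usedTokens / modelLimit;
  if (ratio >= blockThreshold) return "block";
  if (ratio >= compactThreshold) return "compact";
  return "ok";
}
```

For a 128K-token model such as GPT-4o, compaction would trigger at 115,200 tokens and blocking at 125,440.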
Configuration
# In ~/.config/hari-seldon/config.yaml
context:
  autoCompact: true
  compactThreshold: 0.9  # Trigger compaction at 90%
  blockThreshold: 0.98   # Block new requests at 98%
Supported Model Limits
Pre-configured limits for 40+ models including:
- OpenAI: GPT-4o (128K), GPT-4 Turbo (128K), o1 (200K)
- Anthropic: Claude 3.5/4 (200K)
- Google: Gemini 2.5 Pro (1M), Gemini 2.5 Flash (1M)
- DeepSeek: DeepSeek R1 (64K), DeepSeek Chat (128K)
- And many more...
MCP Auto Mode
When context pressure builds, MCP Auto Mode intelligently manages tool availability.
How It Works
- Context Budget - Tool descriptions are deferred when exceeding 10% of context budget
- Priority-Based Selection - High-priority tools remain available; lower-priority tools are deferred
- Automatic Restoration - Deferred tools become available again when context decreases
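One way to picture the deferral: keep tools in priority order until their combined description size exceeds the budget, and defer the rest. This is a sketch of the idea, not the actual algorithm:

```typescript
type Priority = "critical" | "high" | "medium" | "low";

interface Tool {
  name: string;
  priority: Priority;
  descriptionTokens: number; // context cost of exposing this tool
}

const RANK: Record<Priority, number> = { critical: 0, high: 1, medium: 2, low: 3 };

// Greedily admit tools by priority until the description budget is
// spent; remaining tools are deferred until context pressure eases.
export function selectTools(
  tools: Tool[],
  budgetTokens: number
): { active: string[]; deferred: string[] } {
  const sorted = [...tools].sort((a, b) => RANK[a.priority] - RANK[b.priority]);
  const active: string[] = [];
  const deferred: string[] = [];
  let used = 0;
  for (const t of sorted) {
    if (used + t.descriptionTokens <= budgetTokens) {
      used += t.descriptionTokens;
      active.push(t.name);
    } else {
      deferred.push(t.name);
    }
  }
  return { active, deferred };
}
```

Under pressure, critical tools like invoke_agent stay available while low-priority ones like cleanup_worktrees drop out first, per the priority table below.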
Tool Priorities
| Priority | Tools |
|----------|-------|
| Critical | invoke_agent, execute_task |
| High | list_agents, get_task_status |
| Medium | execute_pipeline, compare_agents |
| Low | cleanup_worktrees, list_worktrees |
MCP Tools
| Tool | Description |
|------|-------------|
| invoke_agent | Invoke a specialized agent by role with optional context |
| compare_agents | Run the same task through multiple agents and compare responses |
| critique_plan | Get critical feedback on plans/PRDs from a skeptical architect |
| review_code | Get code review feedback on code snippets or files |
| design_feedback | Get UI/UX design feedback on components, layouts, or flows |
| list_agents | List all available agent roles and their configurations |
| execute_task | Execute a coding task in an isolated worktree |
| execute_pipeline | Run multi-step DAG workflows with dependencies |
| get_pipeline_status | Query pipeline execution status and step results |
| claim_next_task | Claim available tasks from the queue (for worker agents) |
| list_worktrees | List all active worktrees with status and metadata |
| cleanup_worktrees | Clean up stale or orphaned worktrees |
| resolve_conflicts | Resolve merge conflicts using various strategies |
| get_worktree_status | Get detailed status of a specific worktree |
| delete_task | Delete a task from the queue |
Configuration
Location: ~/.config/hari-seldon/config.yaml
version: "1.0"
defaults:
  temperature: 0.7
  max_tokens: 4096
  timeout_ms: 60000
providers:
  openai:
    api_key: ${OPENAI_API_KEY}
    default_model: gpt-4o
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    default_model: claude-sonnet-4-20250514
  deepseek:
    api_key: ${DEEPSEEK_API_KEY}
    default_model: deepseek-reasoner
  google:
    api_key: ${GOOGLE_API_KEY}
    default_model: gemini-2.5-pro
  groq:
    api_key: ${GROQ_API_KEY}
    default_model: llama-3.3-70b-versatile
  openrouter:
    api_key: ${OPENROUTER_API_KEY}
    default_model: anthropic/claude-3.5-sonnet
  ollama:
    base_url: http://localhost:11434
    default_model: llama3.3:70b
roles:
  coder:
    provider: deepseek
    model: deepseek-reasoner
    needs_worktree: true
    system_prompt: |
      You are an expert software engineer...
    # Failover chain: automatically try next provider on failure
    fallback_chain:
      providers:
        - provider: openai
          model: gpt-4o
        - provider: anthropic
          model: claude-sonnet-4-20250514
      on_errors: [429, 500, 502, 503, 504]
      retry:
        max_attempts: 2
        initial_delay_ms: 1000
        max_delay_ms: 30000
        strategy: exponential
  critic:
    provider: deepseek
    model: deepseek-reasoner
    temperature: 0.3
    system_prompt: |
      You are a skeptical senior architect...
  reviewer:
    provider: openai
    model: gpt-4o
    system_prompt: |
      You are a code review expert...
  designer:
    provider: google
    model: gemini-2.5-pro
    system_prompt: |
      You are a UI/UX specialist...
tasks:
  enabled: true
  queue:
    maxSize: 100
    priorityLevels: 5
  execution:
    maxConcurrent: 5
    defaultTimeout: 300000
worktrees:
  enabled: true
  baseDir: .hari-seldon/worktrees
  limits:
    maxWorktrees: 10
    maxPerTask: 3
    maxDiskUsageMB: 5000
  cleanup:
    onSuccess: true
    onFailure: false
    staleAfterHours: 24
    autoCleanup: true
  git:
    defaultBaseBranch: main
    branchPattern: task/{taskId}
    autoMerge: false
    createPR: true
  pool:
    enabled: true
    minAvailable: 2
    maxSize: 5
# Hooks configuration
hooks:
  enabled: true
  configPath: ~/.hari-seldon/hooks.yaml
  events:
    PreToolUse:
      - command: "echo 'Tool: ${TOOL_NAME}'"
        timeout_ms: 5000
    TaskComplete:
      - command: "./notify.sh ${TASK_ID}"
# Skills configuration
skills:
  enabled: true
  directories:
    - ~/.hari-seldon/skills
    - .hari-seldon/skills
  hotReload: true
# Context management
context:
  autoCompact: true
  compactThreshold: 0.9  # Auto-compact at 90% capacity
  blockThreshold: 0.98   # Block new requests at 98% capacity
Environment Variables
# Premium Providers
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."
# Budget-Friendly Providers
export DEEPSEEK_API_KEY="sk-..."
export ZAI_API_KEY="..."
export KIMI_API_KEY="..."
export KIMI_CODE_API_KEY="..." # Kimi Code subscription
# Router/Aggregator
export OPENROUTER_API_KEY="sk-or-..."
# Fast Inference
export GROQ_API_KEY="gsk_..."
export TOGETHER_API_KEY="..."
export FIREWORKS_API_KEY="..."
# Web Search
export PERPLEXITY_API_KEY="pplx-..."
# Debug/Observability
export HARI_SELDON_DEBUG=true # Enable debug logging
export HARI_SELDON_DEBUG_FILE=/path/to/debug.log # Write debug logs to file
export HARI_SELDON_DEBUG_VERBOSE=true  # Include verbose details
CLI Commands
# Interactive setup wizard (recommended for first run)
hari-seldon setup
# Start the MCP server
hari-seldon start
# Check current state and recovery status
hari-seldon status
# Run full recovery process
hari-seldon recover
hari-seldon recover --dry-run # Preview changes
hari-seldon recover --auto-cleanup # Auto-clean orphans
# Manage orphaned resources
hari-seldon orphans list
hari-seldon orphans cleanup
hari-seldon orphans cleanup --force
# Export/import persisted state
hari-seldon state export backup.json
hari-seldon state import backup.json
# Provider management
hari-seldon provider add openai
hari-seldon provider test # Test all providers
hari-seldon provider test deepseek # Test specific provider
hari-seldon provider list
# Project-level overrides (stored in .hari-seldon.yaml)
hari-seldon project show # Show current overrides
hari-seldon project set-coder deepseek:deepseek-reasoner # Set coder provider
hari-seldon project set-fallback coder openai:gpt-4o,anthropic:claude-sonnet-4-20250514
hari-seldon project reset --force # Remove all overrides
# Other commands
hari-seldon init # Create default config file
hari-seldon list-roles # List available agent roles
hari-seldon validate # Validate configuration
hari-seldon version # Show version
hari-seldon help      # Show help
Supported Providers
| Provider | Models | API Key Env Var | Cost |
|----------|--------|-----------------|------|
| Anthropic | Claude Opus 4, Claude Sonnet 4 | ANTHROPIC_API_KEY or session mode | $$$ (free in session mode) |
| OpenAI | GPT-4o, GPT-4 Turbo, o1, o1-mini, o3-mini | OPENAI_API_KEY | $$$ |
| Google Gemini | Gemini 2.5 Pro, Gemini 2.5 Flash | GOOGLE_API_KEY | $$ |
| DeepSeek | DeepSeek R1, DeepSeek Chat, DeepSeek Coder | DEEPSEEK_API_KEY | $ |
| OpenRouter | Access 100+ models via single API | OPENROUTER_API_KEY | Varies |
| Perplexity | pplx-70b-online, pplx-7b-online (web search) | PERPLEXITY_API_KEY | $$ |
| Groq | Llama 3.3 70B, Mixtral (ultra-fast inference) | GROQ_API_KEY | $ |
| Together AI | Llama, Mistral, CodeLlama, Qwen | TOGETHER_API_KEY | $ |
| Fireworks AI | Llama, Mixtral (optimized inference) | FIREWORKS_API_KEY | $ |
| Z.AI | GLM-4, GLM-4.7 | ZAI_API_KEY | $ |
| Kimi (Moonshot) | moonshot-v1-128k | KIMI_API_KEY | $ |
| Kimi Code | Kimi coding assistant (subscription) | KIMI_CODE_API_KEY | Subscription |
| Ollama | Llama 3.3, Mistral, CodeLlama, Qwen, etc. | N/A (local) | Free |
Cost Legend: $ = Budget, $$ = Moderate, $$$ = Premium
Agent Roles
| Role | Purpose | Recommended Provider |
|------|---------|---------------------|
| orchestrator | Task synthesis, planning, document improvement | Claude Opus 4 |
| coder | Code generation and implementation (uses worktrees) | GPT-4o, Claude Sonnet |
| critic | Challenge plans, find flaws, identify risks | DeepSeek R1, o1 |
| reviewer | Code review - bugs, security, performance, best practices | GPT-4o |
| designer | UI/UX feedback, accessibility, user flows | Gemini 2.5 Pro |
| researcher | Fact-finding, research, information gathering | Gemini 2.5 Pro |
Usage Examples
Simple: Invoke an Agent
invoke_agent({
  role: "reviewer",
  task: "Review this authentication implementation for security issues",
  context: "Using JWT with refresh tokens"
})
Worktree: Isolated Coding Task
execute_task({
  role: "coder",
  task: "Implement user authentication with JWT",
  useWorktree: true,
  baseBranch: "main"
})
Pipeline: Multi-Step Workflow
execute_pipeline({
  name: "feature-development",
  steps: [
    { name: "design", role: "designer", subject: "Design auth flow UI" },
    { name: "implement", role: "coder", subject: "Implement auth flow", dependsOn: ["design"] },
    { name: "review", role: "reviewer", subject: "Review implementation", dependsOn: ["implement"] },
    { name: "critique", role: "critic", subject: "Security review", dependsOn: ["implement"] }
  ]
})
Parallel: Compare Multiple Agents
compare_agents({
  roles: ["critic", "reviewer", "designer"],
  task: "Review this architectural decision for a microservices migration..."
})
Architecture
See docs/ARCHITECTURE.md for detailed system architecture, data flow diagrams, component descriptions, and phase implementation details.
hari-seldon/
├── src/
│ ├── index.ts # Entry point
│ ├── server.ts # MCP server setup
│ ├── cli.ts # CLI entry point
│ ├── startup.ts # Recovery and initialization
│ ├── types.ts # TypeScript type definitions
│ ├── cli/ # CLI command implementations
│ ├── mcp/
│ │ ├── tools/ # MCP tool implementations
│ │ ├── transport/ # stdio transport
│ │ └── auto-mode.ts # MCP Auto Mode - intelligent tool selection
│ ├── worktrees/
│ │ ├── manager.ts # Worktree lifecycle
│ │ ├── branch.ts # Branch operations
│ │ └── isolation.ts # Environment isolation
│ ├── tasks/
│ │ ├── coordinator.ts # Task dispatch
│ │ ├── state.ts # State machine
│ │ └── pipeline.ts # Pipeline execution
│ ├── providers/
│ │ ├── base.ts # Provider interface
│ │ ├── anthropic.ts # Anthropic Claude
│ │ ├── openai.ts # OpenAI GPT-4o
│ │ ├── gemini.ts # Google Gemini
│ │ ├── deepseek.ts # DeepSeek R1
│ │ ├── ollama.ts # Local Ollama
│ │ ├── zai.ts # Z.AI GLM
│ │ ├── kimi.ts # Moonshot Kimi
│ │ ├── openrouter.ts # OpenRouter (100+ models)
│ │ ├── perplexity.ts # Perplexity (web search)
│ │ ├── groq.ts # Groq (fast inference)
│ │ ├── together.ts # Together AI
│ │ └── fireworks.ts # Fireworks AI
│ ├── failover/
│ │ ├── orchestrator.ts # Retry and failover logic
│ │ ├── health-tracker.ts # Provider health monitoring
│ │ └── pricing.ts # Cost-aware routing
│ ├── background/ # Background task execution
│ │ └── task-runner.ts # Queue-based async task runner
│ ├── hooks/ # Hook system
│ │ ├── hook-executor.ts # Hook command execution
│ │ └── hook-manager.ts # Hook registration and dispatch
│ ├── skills/ # Skills hot-reload
│ │ ├── skill-loader.ts # Skill file discovery and parsing
│ │ ├── skill-executor.ts # Skill action execution
│ │ └── hot-reloader.ts # File watcher for hot-reload
│ ├── observability/ # Logging and debugging
│ │ ├── logger.ts # Structured logging
│ │ └── debug-logger.ts # Debug output with environment control
│ ├── persistence/ # State persistence
│ ├── config/ # Configuration management
│ └── router/ # Routing engine
├── tests/ # Test suites
├── docs/ # Documentation
└── config/              # Config schemas
Development
# Install dependencies
pnpm install
# Build TypeScript
pnpm build
# Development with watch mode
pnpm dev
# Run tests
pnpm test # Watch mode
pnpm test:run # Single run
# Lint code
pnpm lint
pnpm lint:fix
# Type check
pnpm typecheck
Debug Logging
Hari Seldon provides enhanced debug logging controlled via environment variables:
# Enable debug logging
export HARI_SELDON_DEBUG=true
# Write debug logs to a file
export HARI_SELDON_DEBUG_FILE=/path/to/debug.log
# Include verbose details (API payloads, full stack traces)
export HARI_SELDON_DEBUG_VERBOSE=true
Debug logging includes:
- Provider API calls and responses
- Failover decisions and health tracking
- Hook execution and timing
- Skill loading and hot-reload events
- Context manager token tracking
Security
- API keys masked - Keys are never logged or exposed in responses
- Environment variable interpolation - Use ${VAR_NAME} syntax in config files to keep keys out of the file itself
- Local Ollama option - Run models locally for maximum privacy
- Worktree isolation - Each agent works in a separate directory, preventing cross-contamination
- State persistence - Sensitive data excluded from persisted state
Documentation
- Architecture - System architecture and design decisions
- CLAUDE.md - AI assistant instructions for working with this codebase
Contributing
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Anthropic for Claude and the MCP protocol
- Model Context Protocol for the MCP SDK
- All the AI providers powering the multi-agent capabilities
Built with care by Sasha Bogojevic
