agent-link-mcp


English | 한국어

MCP server for bidirectional AI agent collaboration. Spawn and communicate with any AI coding agent CLI — Claude Code, Codex, Gemini, Aider, and more.

When to Use

  • Stuck on a bug? — Your agent tried twice and failed. Let it ask another agent for a fresh perspective.
  • Need a second opinion? — Get code review or architectural advice from a different AI model.
  • Cross-model strengths — Use Claude for planning, Codex for execution, Gemini for research.
  • Parallel work — Spawn multiple agents to tackle independent subtasks simultaneously.
  • Rubber duck debugging — Have one agent explain the problem to another and get back a solution.

Use Cases

Get Help When Stuck

Your primary agent keeps failing on the same issue? Ask another agent:

# Claude Code is stuck on a TypeScript error it can't resolve.
# It spawns Codex for a second opinion:

spawn_agent("codex", "This TypeScript error keeps appearing. How do I fix it?", {
  error: "Type 'string' is not assignable to type 'number'",
  files: ["src/utils.ts"]
})

Cross-Agent Code Review

Have another model review your agent's code changes:

spawn_agent("claude", "Review these changes for bugs and edge cases", {
  files: ["src/api.ts", "src/handler.ts"],
  intent: "Code review before merge"
})

Multi-Agent Pipeline

Build a pipeline where agents handle different stages:

# Agent 1: Research
spawn_agent("gemini", "Find the best approach for WebSocket reconnection")

# Agent 2: Implementation (using Agent 1's advice)
spawn_agent("codex", "Implement WebSocket reconnection with exponential backoff", {
  files: ["src/ws-client.ts"]
})

# Agent 3: Review
spawn_agent("claude", "Review this implementation for production readiness", {
  files: ["src/ws-client.ts"]
})

Bidirectional Collaboration

Agents can ask questions back. The host answers, and work continues:

Host: spawn_agent("codex", "Add caching to the API layer")
Codex: [QUESTION] Should I use Redis or in-memory cache?
Host: reply("codex-a1b2c3", "Use Redis, we have it in our docker-compose")
Codex: [RESULT] Added Redis caching with 5-minute TTL...

Why

AI coding agents get stuck sometimes. Instead of waiting for you, they can ask another agent for help. agent-link-mcp lets any MCP-compatible agent spawn other agent CLIs as collaborators, exchange questions, and get results back — all through standard MCP tools.

  • One-side install — only the host agent needs this MCP server. Spawned agents are just CLI subprocesses.
  • Bidirectional — the host can ask questions to the spawned agent, and the spawned agent can ask questions back.
  • Any agent — works with any CLI that accepts a prompt and returns text. Built-in profiles for Claude, Codex, Gemini, and Aider.
  • Multi-agent — spawn multiple agents simultaneously for parallel collaboration.

Prerequisites

agent-link-mcp spawns other AI agents as CLI subprocesses. You need to install and authenticate the agent CLIs you want to collaborate with:

| Agent | Install | Auth |
|-------|---------|------|
| Claude Code | npm install -g @anthropic-ai/claude-code | claude login |
| Codex | npm install -g @openai/codex | codex login |
| Gemini CLI | npm install -g @google/gemini-cli | gemini login |
| Aider | pip install aider-chat | Set OPENAI_API_KEY or ANTHROPIC_API_KEY |

You only need the ones you plan to use. agent-link-mcp auto-detects which CLIs are installed.

Install

# Claude Code
claude mcp add agent-link npx agent-link-mcp

# Codex
codex mcp add agent-link npx agent-link-mcp

# Any MCP client
npx agent-link-mcp

Note: Only the agent you're working in needs this MCP server installed. The other agents are spawned as subprocesses — they don't need agent-link-mcp.
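For MCP clients that are configured through a JSON file rather than a CLI command, an entry along these lines should work (field names follow the common mcpServers convention; the exact file location and schema depend on your client, so check its docs):

```json
{
  "mcpServers": {
    "agent-link": {
      "command": "npx",
      "args": ["agent-link-mcp"]
    }
  }
}
```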

Tools

spawn_agent

Spawn an agent and send it a task.

{
  "agent": "codex",
  "task": "Refactor this function for better performance",
  "context": {
    "files": ["src/utils.ts"],
    "error": "TypeError: Cannot read property 'x' of undefined",
    "intent": "Performance improvement"
  },
  "model": "o3",
  "timeoutMs": 7200000
}

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| agent | string | required | Agent name ("claude", "codex", "gemini", "aider") |
| task | string | required | Task description |
| context | object | — | Optional { files, error, intent, diff }. diff: true includes git diff output; diff: "staged" for staged changes only. |
| cwd | string | cwd | Working directory for the agent process |
| model | string | — | Model to use (e.g. "o3", "gpt-5.4", "claude-sonnet-4", "gemini-2.5-pro"). Passed via --model flag. |
| thinking | string | — | Thinking/reasoning depth ("low", "medium", "high", "max"). Claude: --effort, Codex: -c reasoning_effort, Aider: --reasoning-effort. |
| retry | boolean | false | Auto-retry on failure (up to 3 attempts) |
| escalate | boolean | false | On retry, automatically increase the thinking level. Requires retry: true. |
| timeoutMs | number | 3600000 | Timeout in ms. Default: 1 hour. |

Returns one of:

  • { status: "done", agentId: "codex-a1b2c3", result: "..." } — task completed
  • { status: "waiting_for_reply", agentId: "codex-a1b2c3", question: "..." } — agent needs clarification
  • { error: "...", agentId: "codex-a1b2c3" } — something went wrong

spawn_agents

Run multiple agents in parallel. Returns all results together.

{
  "agents": [
    { "agent": "codex", "task": "Review for bugs", "context": { "diff": true } },
    { "agent": "claude", "task": "Review for security", "context": { "diff": true } }
  ],
  "cwd": "/path/to/project"
}

Returns { summary: { total, succeeded, failed, waiting }, results: [...] }.
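For the two-reviewer call above, the combined result might look roughly like this (illustrative shape, not verbatim output):

```json
{
  "summary": { "total": 2, "succeeded": 1, "failed": 0, "waiting": 1 },
  "results": [
    { "status": "done", "agentId": "codex-a1b2c3", "result": "Found one off-by-one bug in..." },
    { "status": "waiting_for_reply", "agentId": "claude-d4e5f6", "question": "Which auth flows are in scope?" }
  ]
}
```

Agents that return waiting_for_reply can then be answered individually with reply.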

reply

Answer a spawned agent's question and continue the conversation.

{
  "agentId": "codex-a1b2c3",
  "message": "Yes, you can remove the side effects"
}

kill_agent

Abort a running agent session.

{
  "agentId": "codex-a1b2c3"
}

list_agents

List available agent CLIs.

{
  "agents": [
    { "name": "claude", "command": "claude", "source": "auto", "available": true },
    { "name": "codex", "command": "codex", "source": "auto", "available": true },
    { "name": "gemini", "command": "gemini", "source": "auto", "available": false }
  ]
}

get_status

Get active agent sessions.

{
  "sessions": [
    { "agentId": "codex-a1b2c3", "agent": "codex", "status": "waiting_for_reply", "startedAt": "..." }
  ]
}

How It Works

You (using Claude Code)
  ↓
"Ask Codex to help with this refactoring"
  ↓
Claude Code → spawn_agent("codex", task, context)
  ↓
agent-link-mcp server → spawns `codex` CLI as subprocess
  ↓
Codex processes the task...
  ↓
Codex: "[QUESTION] Should I remove the side effects?"
  ↓
agent-link-mcp → parses response → returns to Claude Code
  ↓
Claude Code → reply("codex-a1b2c3", "Yes, remove them")
  ↓
agent-link-mcp → re-invokes Codex with accumulated context
  ↓
Codex: "[RESULT] Refactoring complete. Here's what I changed..."
  ↓
Claude Code receives the result and continues working

Configuration

Auto-detection

agent-link-mcp automatically detects installed agent CLIs:

| Agent | CLI Command |
|-------|-------------|
| Claude Code | claude |
| Codex | codex |
| Gemini | gemini |
| Aider | aider |
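In spirit, detection amounts to checking whether each CLI command resolves on the PATH. Here is a minimal sketch of that idea (a hypothetical helper, not the package's actual implementation):

```typescript
import { spawnSync } from "node:child_process";

// An agent counts as "available" if its command resolves on PATH.
// Uses `where` on Windows and `which` elsewhere.
function isCliAvailable(command: string): boolean {
  const probe = process.platform === "win32" ? "where" : "which";
  return spawnSync(probe, [command]).status === 0;
}

// e.g. build the kind of list that list_agents returns
const agents = ["claude", "codex", "gemini", "aider"].map((name) => ({
  name,
  available: isCliAvailable(name),
}));
```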

Custom agents

Add custom agents via config file at ~/.agent-link/config.json:

{
  "agents": {
    "codex": {
      "command": "/usr/local/bin/codex",
      "args": ["--full-auto"],
      "promptFlag": null,
      "outputFormat": "text"
    },
    "my-local-llm": {
      "command": "ollama",
      "args": ["run", "codellama"],
      "promptFlag": null,
      "outputFormat": "text"
    }
  }
}

Override config path with AGENT_LINK_CONFIG environment variable.

Model Selection

You can specify which model the spawned agent should use via the model parameter:

# Use a specific model for Codex
spawn_agent("codex", "Debug this issue", { model: "o3" })

# Use a specific model for Claude
spawn_agent("claude", "Review this code", { model: "claude-sonnet-4" })

The model name is passed to the agent CLI via its --model flag. If omitted, the agent uses its default model.

Thinking / Reasoning Depth

Control how deeply the agent reasons with the thinking parameter:

# High reasoning for complex debugging
spawn_agent("codex", "Debug this race condition", { thinking: "high" })

# Max effort for Claude
spawn_agent("claude", "Architect a new auth system", { thinking: "max" })

| Agent | Flag | Values |
|-------|------|--------|
| Claude | --effort | low, medium, high, max |
| Codex | -c reasoning_effort | low, medium, high |
| Aider | --reasoning-effort | low, medium, high |

If omitted, the agent uses its default reasoning level.

Timeout

Default timeout is 1 hour (3,600,000 ms). You can override it per call:

# 2 hour timeout for complex tasks
spawn_agent("codex", "Refactor the entire auth system", { timeoutMs: 7200000 })

Conversation Protocol

Spawned agents receive instructions to format their responses:

  • [QUESTION] ... — needs clarification from the host agent
  • [RESULT] ... — task completed

If the agent doesn't follow the format, the entire output is treated as a result.
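The host-side parsing this implies can be sketched as follows (a hypothetical helper illustrating the rules above, not the package's actual code):

```typescript
type ParsedOutput =
  | { kind: "question"; text: string }
  | { kind: "result"; text: string };

// Parse a spawned agent's raw output according to the marker protocol:
// "[QUESTION] ..." needs a reply; "[RESULT] ..." is the final answer;
// anything unmarked falls back to being treated as a result.
function parseAgentOutput(raw: string): ParsedOutput {
  const trimmed = raw.trim();
  if (trimmed.startsWith("[QUESTION]")) {
    return { kind: "question", text: trimmed.slice("[QUESTION]".length).trim() };
  }
  if (trimmed.startsWith("[RESULT]")) {
    return { kind: "result", text: trimmed.slice("[RESULT]".length).trim() };
  }
  return { kind: "result", text: trimmed };
}
```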

License

MIT