
ted-crew

v1.2.1

AI mesh network MCP server — Claude, Gemini, Codex as crew

Ted Crew MCP Server

Connect Claude Code, Gemini CLI, and Codex CLI as a single MCP server to form an AI mesh network. Each AI can call the others as crew members — self-calls are automatically blocked to prevent infinite loops.

         ┌── ask_codex ──→ Codex ──┐
Claude ──┤                         ├── ask_claude ──→ Claude
         └── ask_gemini ─→ Gemini ─┘
                             │
                             └── ask_codex ──→ Codex

한국어 README (Korean-language README)

Prerequisites

Node.js (for npx), plus whichever of the Claude Code, Gemini CLI, and Codex CLI tools you plan to delegate to, installed and available on your PATH.

Installation

Claude Code (~/.claude/settings.json)

{
  "mcpServers": {
    "ted-crew": {
      "command": "npx",
      "args": ["-y", "ted-crew"]
    }
  }
}

Gemini CLI (~/.gemini/settings.json)

{
  "mcpServers": {
    "ted-crew": {
      "command": "npx",
      "args": ["-y", "ted-crew"],
      "env": { "TED_CREW_PROVIDER": "gemini" },
      "trust": true
    }
  }
}

Codex CLI (~/.codex/config.toml)

[mcp_servers.ted-crew]
command = "npx"
args = ["-y", "ted-crew"]
env = { "TED_CREW_PROVIDER" = "codex" }

Cursor (~/.cursor/mcp.json)

{
  "mcpServers": {
    "ted-crew": {
      "command": "npx",
      "args": ["-y", "ted-crew"]
    }
  }
}

Windsurf (~/.codeium/windsurf/mcp_config.json)

{
  "mcpServers": {
    "ted-crew": {
      "command": "npx",
      "args": ["-y", "ted-crew"],
      "disabled": false
    }
  }
}

Cline / VS Code Copilot

{
  "mcpServers": {
    "ted-crew": {
      "command": "npx",
      "args": ["-y", "ted-crew"]
    }
  }
}

Provider Configuration

Set TED_CREW_PROVIDER to identify the caller. The server automatically hides the tool matching the caller to prevent infinite loops.

| Provider | Hidden | Exposed Tools |
|----------|--------|---------------|
| claude (default) | ask_claude | ask_gemini, ask_codex, jobs |
| gemini | ask_gemini | ask_codex, ask_claude, jobs |
| codex | ask_codex | ask_gemini, ask_claude, jobs |

IDE-based clients (Cursor, Windsurf, Cline, etc.) can omit TED_CREW_PROVIDER — they'll default to claude.
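
The self-call blocking described above can be sketched in TypeScript like so. This is an illustrative sketch, not the actual ted-crew source; the function and constant names are invented for the example, while the tool and provider names come from the table above.

```typescript
// Hypothetical sketch of provider-based tool filtering.
type Provider = "claude" | "gemini" | "codex";

const ALL_TOOLS = ["ask_claude", "ask_gemini", "ask_codex", "jobs"] as const;

// Hide the one tool that would call back into the caller itself,
// so a provider can never trigger an infinite self-call loop.
function exposedTools(provider: Provider): string[] {
  const hidden = `ask_${provider}`;
  return ALL_TOOLS.filter((tool) => tool !== hidden);
}

// exposedTools("gemini") leaves ask_claude, ask_codex, and jobs visible.
```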

Use Cases

1. Parallel Code Review

Two AIs review simultaneously from different angles:

ask_gemini(prompt: "Review from an architecture perspective", files: [...], background: true)
ask_codex(prompt: "Review for code quality and bugs", model: "gpt-5.3-codex", reasoning_effort: "high", background: true)
→ wait_job for both → synthesize results

2. Large Codebase Analysis → Implementation

Gemini's 1M token context maps the codebase, then Codex implements:

ask_gemini(prompt: "What files need to change for feature X?", directories: ["./src"], model: "gemini-2.5-pro")
→ feed results to
ask_codex(prompt: "Make the changes based on the analysis", model: "gpt-5.3-codex", reasoning_effort: "high", writable: true)

3. Second Opinion

Get a different perspective on the same problem:

ask_codex(prompt: "Why is this error happening? [error details]", model: "gpt-5.3-codex", reasoning_effort: "xhigh")

4. Documentation Pipeline

Draft with writing-focused Gemini → Claude refines with project context:

ask_gemini(prompt: "Read this code and write a README draft", files: ["./src/index.ts"])
→ Claude edits the draft to fit the project

5. Research → Implementation

Outsource research to preserve Claude's context window:

ask_gemini(prompt: "Summarize React Query v5 staleTime vs gcTime differences", model: "gemini-2.5-pro")
→ Claude applies the findings to actual code

AI Strengths at a Glance

| AI | Strengths |
|----|-----------|
| Gemini | Large context analysis (1M tokens), writing, research |
| Codex | Code execution/editing, build/test, tunable reasoning_effort |
| Claude | Orchestrator, conversation context, final judgment |


Tools

ask_gemini

Delegates tasks to Gemini CLI. Best for 1M token context, writing, and research.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | Prompt text |
| model | string | | Model override |
| files | string[] | | File paths to inject as context |
| directories | string[] | | Directories for Gemini to scan directly |
| output_file | string | | File path to save the response |
| working_directory | string | | Working directory |
| background | boolean | | Run in background |
| approval_mode | string | | yolo / auto_edit / plan |

ask_codex

Delegates tasks to Codex CLI. Control its behavior directly via model and reasoning_effort.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | Prompt text |
| model | string | | Model override (default: Codex CLI default) |
| reasoning_effort | string | | minimal / low / medium / high / xhigh |
| files | string[] | | Context file paths |
| output_file | string | | File path to save the response |
| working_directory | string | | Working directory |
| background | boolean | | Run in background |
| writable | boolean | | Allow file modifications |
| timeout_ms | number | | Foreground timeout in ms (default: 300,000) |

ask_claude

Delegates tasks to Claude Code CLI. Best for code generation, debugging, and refactoring.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| prompt | string | Yes | Prompt text |
| model | string | | claude-opus-4-6 / claude-sonnet-4-6 / claude-haiku-4-5 |
| files | string[] | | Context file paths |
| output_file | string | | File path to save the response |
| working_directory | string | | Working directory |
| background | boolean | | Run in background |
| allowed_tools | string[] | | Allowed tools (Read, Write, Edit, Bash, etc.) |

Job Management

Manage background jobs.

| Tool | Description |
|------|-------------|
| wait_job | Wait for completion, return full stdout/stderr |
| check_job | Non-blocking status check, 500-char stdout preview |
| kill_job | Force terminate |
| list_jobs | List jobs (filter: active/completed/failed/all) |
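
A minimal sketch of the job registry that tools like check_job and list_jobs operate on might look as follows. All names here are illustrative assumptions, not the actual ted-crew implementation; only the 500-char preview and the status filter values are taken from the table above.

```typescript
// Hypothetical in-memory registry for background jobs.
type JobStatus = "active" | "completed" | "failed";

interface Job {
  id: string;
  status: JobStatus;
  stdout: string;
}

const jobs = new Map<string, Job>();

// check_job-style lookup: non-blocking, returns only a 500-char stdout preview.
function checkJob(id: string): { status: JobStatus; preview: string } | undefined {
  const job = jobs.get(id);
  if (!job) return undefined;
  return { status: job.status, preview: job.stdout.slice(0, 500) };
}

// list_jobs-style filter: "all" or a single status.
function listJobs(filter: JobStatus | "all"): Job[] {
  const all = [...jobs.values()];
  return filter === "all" ? all : all.filter((job) => job.status === filter);
}
```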

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| TED_CREW_PROVIDER | claude | Caller identity (claude / gemini / codex) |
| TED_CREW_MAX_STDOUT | 10485760 | stdout collection limit (10MB) |
| TED_CREW_TIMEOUT | 300000 | Default foreground timeout (5 min) |
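
Reading these variables with their documented defaults could look like the sketch below. The variable names and defaults come from the table above; the helper function is an assumption for illustration, not the ted-crew source.

```typescript
// Parse a numeric environment variable, falling back to a default
// when the variable is unset or not a finite number.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

const provider = process.env.TED_CREW_PROVIDER ?? "claude";
const maxStdout = envNumber("TED_CREW_MAX_STDOUT", 10 * 1024 * 1024); // 10 MB
const timeoutMs = envNumber("TED_CREW_TIMEOUT", 300_000); // 5 minutes
```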

Auto-Save Responses

Responses longer than 500 characters are automatically saved to .aidocs/ted-crew/{provider}-{date}-{time}.md and only a summary is returned. Specify working_directory to save under that project's directory.
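A helper producing the documented path shape could be sketched as below. The `.aidocs/ted-crew/{provider}-{date}-{time}.md` pattern comes from the text above; the exact timestamp format and the function name are assumptions, not taken from the ted-crew source.

```typescript
// Build a save path of the form .aidocs/ted-crew/{provider}-{date}-{time}.md.
// Assumed formats: date as YYYY-MM-DD, time as HHMMSS (UTC).
function savePath(provider: string, now: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const date = `${now.getUTCFullYear()}-${pad(now.getUTCMonth() + 1)}-${pad(now.getUTCDate())}`;
  const time = `${pad(now.getUTCHours())}${pad(now.getUTCMinutes())}${pad(now.getUTCSeconds())}`;
  return `.aidocs/ted-crew/${provider}-${date}-${time}.md`;
}
```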

Install Skills

Install routing guide skills for each AI client:

npx ted-crew install-skills

Or from source:

npm run install-skills

Install locations:

| AI | Path |
|----|------|
| Claude Code | ~/.claude/skills/ted-crew/ |
| Gemini CLI | ~/.gemini/skills/ted-crew/ |
| Codex CLI | ~/.codex/skills/ted-crew/ |

Project Structure

src/
├── index.ts                 # MCP server entry, tool filtering
├── lib/
│   ├── constants.ts         # Shared constants
│   ├── exchange.ts          # Response post-processing (file save)
│   ├── parser.ts            # CLI output parsing, error detection
│   ├── prompt-builder.ts    # Prompt construction
│   ├── spawner.ts           # CLI process spawning
│   └── types.ts             # TypeScript type definitions
└── tools/
    ├── ask-claude.ts        # Claude Code CLI caller
    ├── ask-codex.ts         # Codex CLI caller
    ├── ask-gemini.ts        # Gemini CLI caller
    └── jobs.ts              # Background job manager

Build

npm install
npm run build    # esbuild → dist/server.cjs
npm run dev      # watch mode

License

MIT