
@versedhand/research-swarm

v0.2.1


Parallel research using Claude CLI workers. MCP server that orchestrates multiple claude -p processes for deep research.


research-swarm

MCP server that launches parallel claude -p workers to research a topic. Each worker independently searches different aspects, validates output quality, retries failures, and optionally synthesizes results.

Workers run on your Claude Max subscription at $0 marginal cost.

Requirements

  • Node.js 18+
  • Claude Code CLI installed and authenticated
  • Claude Max subscription (workers use claude -p)

Install

cd research-swarm
npm install
npm run build

Register as MCP server

Add to your project's .mcp.json:

{
  "mcpServers": {
    "research-swarm": {
      "command": "node",
      "args": ["/path/to/research-swarm/dist/index.js"],
      "env": {
        "RESEARCH_SWARM_OUTPUT": "/path/to/output"
      }
    }
  }
}

Restart Claude Code. Four tools will be available.

Tools

| Tool | Description |
|------|-------------|
| research | Start a research job. Returns job_id immediately. Workers spawn in background. |
| research_status | Non-blocking status check. Reads from disk. Always instant. |
| research_results | Get completed output (synthesis or concatenated worker results). |
| research_cancel | Kill all worker processes for a job. |

Use

From any Claude Code session:

> research swarm on [topic]

Or invoke tools directly:

  • research(topic, questions, depth?) — Start a job. Depth: quick (3 workers), standard (5 + synthesis), thorough (8 + synthesis).
  • research_status(job_id?) — Check progress. Shows phase, worker status, quality scores.
  • research_results(job_id?) — Get the report when complete.
  • research_cancel(job_id?) — Kill workers and stop the job.

Expect 5-8 minutes for quick depth, 8-12 for standard.
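
The depth-to-worker mapping above can be sketched as a small helper. The name planForDepth and the DepthPlan shape are illustrative assumptions, not the package's actual API:

```typescript
// Maps a research depth to its worker plan, per the tool description:
// quick = 3 workers, standard = 5 + synthesis, thorough = 8 + synthesis.
type Depth = "quick" | "standard" | "thorough";

interface DepthPlan {
  workers: number;    // parallel claude -p workers to spawn
  synthesis: boolean; // whether a final synthesis worker runs
}

function planForDepth(depth: Depth): DepthPlan {
  switch (depth) {
    case "quick":
      return { workers: 3, synthesis: false };
    case "standard":
      return { workers: 5, synthesis: true };
    case "thorough":
      return { workers: 8, synthesis: true };
  }
}
```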

Configuration

| Environment variable | Default | Description |
|---------------------|---------|-------------|
| RESEARCH_SWARM_OUTPUT | ~/.research-swarm/jobs/ | Job output directory |

How it works

Claude Code session
  └─ MCP server (TypeScript, stdio transport)
       └─ launch-worker.sh (clean env, detached process)
            └─ claude -p worker (--allowedTools, --strict-mcp-config)

  1. Plan — Partitions the research space into non-overlapping worker domains
  2. Research — Launches workers with 10s stagger. Each independently searches and writes structured findings
  3. Validate — Scores each output on a 0-100 rubric
  4. Retry — Workers scoring below 40/100 get one retry
  5. Synthesize — A final worker cross-references and deduplicates (standard+ depth)
  6. Store — Results saved with full metadata
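
The six phases above form a linear pipeline in which synthesis is skipped at quick depth. A minimal sketch of that transition logic (phase names mirror the steps; the real state machine in orchestrator.ts may differ):

```typescript
// Ordered job phases, matching the six-step pipeline above.
const PHASES = ["plan", "research", "validate", "retry", "synthesize", "store"] as const;
type Phase = (typeof PHASES)[number];

// Advance to the next phase; quick-depth jobs (no synthesis worker)
// jump straight from retry to store.
function nextPhase(current: Phase, synthesis: boolean): Phase | "done" {
  const i = PHASES.indexOf(current);
  if (i === PHASES.length - 1) return "done";
  const next = PHASES[i + 1];
  return next === "synthesize" && !synthesis ? "store" : next;
}
```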

Non-blocking design

The Python predecessor froze because orchestration blocked the MCP event loop. This version:

  • Spawns workers as detached child processes (detached: true + proc.unref())
  • Monitors via setInterval polling (non-blocking)
  • Status reads state.json from disk — always instant
  • No blocking awaits in any tool handler
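
The disk-backed status read can be sketched as follows. The state.json layout and field names here are assumptions for illustration; only the pattern (read a snapshot from disk, never await a worker) reflects the design above:

```typescript
import { readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

// Assumed shape of a job's on-disk state snapshot.
interface JobState {
  phase: string;
  workers: { id: string; status: string; score?: number }[];
}

// Tool handlers call this instead of awaiting worker processes: it only
// reads the latest snapshot the monitor loop wrote to disk, so it returns
// instantly regardless of what the workers are doing.
function readStatus(jobDir: string): JobState | null {
  const path = join(jobDir, "state.json");
  if (!existsSync(path)) return null; // job not started or already cleaned up
  return JSON.parse(readFileSync(path, "utf8")) as JobState;
}
```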

Worker architecture

Workers are claude -p processes launched with:

  • --allowedTools — Read, Write, Edit, Glob, Grep, WebSearch, WebFetch
  • --strict-mcp-config — Empty config prevents loading project MCP servers
  • Clean environment — CLAUDECODE stripped to defeat the nesting guard
  • Process isolation — detached: true creates a new process group per worker
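
Assembling a worker's argv and environment from the flags above could look like this. buildWorkerSpawn is a hypothetical helper (the real launch logic lives in launch-worker.sh), but the flags are the ones listed:

```typescript
// Builds the spawn spec for one worker: claude -p with a restricted tool
// set, a strict empty MCP config, and CLAUDECODE removed from the env.
function buildWorkerSpawn(
  promptFile: string,
  env: Record<string, string | undefined>,
) {
  const args = [
    "-p", // non-interactive "print" mode
    "--allowedTools", "Read,Write,Edit,Glob,Grep,WebSearch,WebFetch",
    "--strict-mcp-config",
    "--mcp-config", "empty-mcp.json", // empty config: skip project MCP servers
  ];
  // Strip CLAUDECODE so the nested claude -p does not trip the nesting guard.
  const { CLAUDECODE, ...cleanEnv } = env;
  return { command: "claude", args, promptFile, env: cleanEnv };
}
```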

Quality scoring rubric

| Check | Points |
|-------|--------|
| File exists and >500 bytes | 20 |
| Has >= 3 Finding sections | 20 |
| Has source citations (URLs) | 20 |
| Has confidence ratings | 10 |
| Has YAML frontmatter | 10 |
| Addresses assigned questions | 20 |

Retry threshold: <40. Pass threshold: >=60.
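
Applied to a worker's markdown output, the rubric could be scored roughly like this. The exact regexes and section names are assumptions; only the point values and thresholds come from the table above:

```typescript
// Scores one worker output file (as a string) against the 0-100 rubric.
function scoreOutput(text: string): number {
  let score = 0;
  if (text.length > 500) score += 20;                 // exists and >500 bytes
  const findings = (text.match(/^#+.*Finding/gim) ?? []).length;
  if (findings >= 3) score += 20;                     // >= 3 Finding sections
  if (/https?:\/\//.test(text)) score += 20;          // source citations (URLs)
  if (/confidence/i.test(text)) score += 10;          // confidence ratings
  if (/^---\n[\s\S]*?\n---/.test(text)) score += 10;  // YAML frontmatter
  if (/question/i.test(text)) score += 20;            // addresses assigned questions
  return score;
}

const RETRY_BELOW = 40; // scores under this get one retry
const PASS_AT = 60;     // scores at or above this pass
```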

Worker domains

Quick (3 workers)

| Worker | Domain | Strategy |
|--------|--------|----------|
| 01-web-broad | General web | 5-8 diverse search queries |
| 02-web-deep | Technical/forums | Reddit, HN, Stack Exchange, blogs |
| 03-data | Structured data | Reports, .gov, .edu, industry data |

Standard (5 workers + synthesis)

Quick plus:

| Worker | Domain |
|--------|--------|
| 04-academic | Research papers, peer-reviewed studies |
| 05-contrarian | Dissenting opinions, failure cases |

Thorough (8 workers + synthesis)

Standard plus:

| Worker | Domain |
|--------|--------|
| 06-recent | Last 6-12 months only |
| 07-practitioners | Case studies, real-world examples |
| 08-adjacent | Transferable insights from neighboring fields |

Project structure

research-swarm/
├── src/
│   ├── index.ts          # MCP server entry point (4 tools)
│   ├── orchestrator.ts   # Job lifecycle, worker spawning
│   ├── workers.ts        # Process management, monitoring
│   ├── quality.ts        # Output quality scoring
│   ├── prompts.ts        # Domain-based prompt generation
│   └── types.ts          # TypeScript interfaces
├── scripts/
│   ├── launch-worker.sh  # Clean-env wrapper for claude -p
│   ├── CLAUDE.md         # Worker behavioral instructions
│   └── empty-mcp.json    # Empty MCP config for fast startup
├── dist/                 # Compiled output (npm run build)
├── _python_archive/      # Previous Python implementation
├── package.json
└── tsconfig.json

License

MIT