anycrawl-mcp-server

v1.0.3

AnyCrawl MCP Server - Adds powerful web scraping and crawling to Cursor, Claude and any other LLM clients

AnyCrawl MCP Server

🚀 AnyCrawl MCP Server — Powerful web scraping and crawling for Cursor, Claude, and other LLM clients via the Model Context Protocol (MCP).

Features

  • Web Scraping: Extract content from single URLs with multiple output formats
  • Website Crawling: Crawl entire websites with configurable depth and limits
  • Search Engine Integration: Search the web and optionally scrape results
  • Multiple Engines: Support for Playwright, Cheerio, and Puppeteer
  • Flexible Output: Markdown, HTML, text, screenshots, and structured JSON
  • Async Operations: Non-blocking crawl jobs with status monitoring
  • Error Handling: Robust error handling and logging
  • Multiple Modes: STDIO (default), MCP (HTTP), SSE; cloud-ready with Nginx proxy

Installation

Running with npx

ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

Manual installation

npm install -g anycrawl-mcp-server

ANYCRAWL_API_KEY=YOUR-API-KEY anycrawl-mcp

Configuration

AnyCrawl MCP Server supports two deployment modes: Cloud Service (recommended) and Self-Hosted.

Cloud Service (Recommended)

Use the AnyCrawl cloud service at mcp.anycrawl.dev. No server setup required.

# Only need your API key
export ANYCRAWL_API_KEY="your-api-key-here"

Cloud endpoints (a quick connectivity check follows the list):

  • MCP (streamable_http): https://mcp.anycrawl.dev/{API_KEY}/mcp
  • SSE: https://mcp.anycrawl.dev/{API_KEY}/sse
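
To sanity-check the cloud SSE endpoint from a terminal, a plain curl call should hold the connection open and print the event stream (the exact events depend on the server; replace {API_KEY} with your key):

# Keep the connection open and stream Server-Sent Events
curl -N "https://mcp.anycrawl.dev/{API_KEY}/sse"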

Self-Hosted Deployment

For self-hosted deployments, configure the base URL to point to your own AnyCrawl API instance:

export ANYCRAWL_API_KEY="your-api-key-here"
export ANYCRAWL_BASE_URL="https://your-api-server.com"  # Your self-hosted API URL

For local development with custom host/port:

export ANYCRAWL_HOST="127.0.0.1"  # Default: mcp.anycrawl.dev (cloud)
export ANYCRAWL_PORT="3000"       # Default: 3000
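
Putting these together, a self-hosted setup can be launched with all overrides in one command (illustrative; adjust the values to your deployment):

# Run the MCP server against a self-hosted AnyCrawl API (illustrative values)
ANYCRAWL_API_KEY=your-api-key-here \
ANYCRAWL_BASE_URL=https://your-api-server.com \
ANYCRAWL_HOST=127.0.0.1 \
ANYCRAWL_PORT=3000 \
npx -y anycrawl-mcp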

Get your API key

  • Visit the AnyCrawl website (https://anycrawl.dev) and sign up or log in.
  • 🎉 Sign up for free to receive 1,500 credits — enough to crawl nearly 1,500 pages.
  • Open the dashboard and go to API Keys.
  • Copy your key and set it as the ANYCRAWL_API_KEY environment variable (see above).

Usage

Available Modes

AnyCrawl MCP Server supports the following deployment modes:

The default mode is STDIO (no environment variable needed); set ANYCRAWL_MODE to switch.

| Mode  | Description                       | Best For                                   | Transport   |
| ----- | --------------------------------- | ------------------------------------------ | ----------- |
| STDIO | Standard MCP over stdio (default) | Command-type MCP clients, local tooling    | stdio       |
| MCP   | Streamable HTTP (JSON, stateful)  | Cursor (streamable_http), API integration  | HTTP + JSON |
| SSE   | Server-Sent Events                | Web apps, browser integrations             | HTTP + SSE  |

Quick Start Commands

# Development (local)
npm run dev              # STDIO (default)
npm run dev:mcp          # MCP mode (JSON /mcp)
npm run dev:sse          # SSE mode (/sse)

# Production (built output)
npm start              # STDIO (default)
npm run start:mcp
npm run start:sse

# Env examples
ANYCRAWL_MODE=MCP ANYCRAWL_API_KEY=YOUR-KEY npm run dev:mcp
ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-KEY npm run dev:sse

Docker Compose (MCP + SSE with Nginx)

This repo ships a production-ready image that runs MCP (JSON) on port 3000 and SSE on port 3001 in the same container, fronted by Nginx. Nginx also supports API-key-prefixed paths /{API_KEY}/mcp and /{API_KEY}/sse and forwards the key via the x-anycrawl-api-key header.

docker compose build
docker compose up -d

Environment variables used in the Docker image (an example override follows the list):

  • ANYCRAWL_MODE: MCP_AND_SSE (default in compose), MCP, or SSE
  • ANYCRAWL_MCP_PORT: default 3000
  • ANYCRAWL_SSE_PORT: default 3001
  • CLOUD_SERVICE: true to extract API key from /{API_KEY}/... or headers
  • ANYCRAWL_BASE_URL: default https://api.anycrawl.dev
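
As an example, assuming the compose file reads these variables from the shell environment (check the repo's docker-compose.yml before relying on this), a self-hosted API can be wired in at startup:

# Illustrative override; adjust to match the repo's compose file
ANYCRAWL_BASE_URL=https://your-api-server.com CLOUD_SERVICE=true docker compose up -d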

Running on Cursor

Configure Cursor to use the AnyCrawl MCP server. Note: requires Cursor v0.45.6 or newer.

For Cursor v0.48.6 and newer, add this to your MCP Servers settings:

{
  "mcpServers": {
    "anycrawl-mcp": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

For Cursor v0.45.6:

  1. Open Cursor Settings → Features → MCP Servers → "+ Add New MCP Server"
  2. Name: "anycrawl-mcp" (or your preferred name)
  3. Type: "command"
  4. Command:
env ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

On Windows, if you encounter issues:

cmd /c "set ANYCRAWL_API_KEY=YOUR-API-KEY && npx -y anycrawl-mcp"

Running on VS Code

For manual installation, add this JSON to your User Settings (JSON) in VS Code (Command Palette → Preferences: Open User Settings (JSON)):

{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "AnyCrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "anycrawl": {
        "command": "npx",
        "args": ["-y", "anycrawl-mcp"],
        "env": {
          "ANYCRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}

Optionally, place the following in .vscode/mcp.json in your workspace to share config:

{
  "inputs": [
    {
      "type": "promptString",
      "id": "apiKey",
      "description": "AnyCrawl API Key",
      "password": true
    }
  ],
  "servers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "${input:apiKey}"
      }
    }
  }
}

Running on Windsurf

Add this to ~/.codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running on Claude Desktop / Claude Code

Claude Code (CLI)

Add the AnyCrawl MCP server using the claude mcp add command:

claude mcp add --transport stdio anycrawl -e ANYCRAWL_API_KEY=your-api-key -- npx -y anycrawl-mcp

Claude Desktop

Add this to your Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "anycrawl": {
      "command": "npx",
      "args": ["-y", "anycrawl-mcp"],
      "env": {
        "ANYCRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Running with SSE Server Mode

The SSE (Server-Sent Events) mode provides a web-based interface for MCP communication, ideal for web applications, testing, and integration with web-based LLM clients.

Quick Start

ANYCRAWL_MODE=SSE ANYCRAWL_API_KEY=YOUR-API-KEY npx -y anycrawl-mcp

# Or using npm scripts
ANYCRAWL_API_KEY=YOUR-API-KEY npm run dev:sse

Server Configuration

Optional server settings for local/self-hosted deployments:

export ANYCRAWL_PORT=3000           # Default: 3000
export ANYCRAWL_HOST=127.0.0.1      # Set to override cloud default (mcp.anycrawl.dev)

Health Check

curl -s http://localhost:${ANYCRAWL_PORT:-3000}/health
# Response: ok

Generic MCP/SSE Client Configuration

For other MCP clients, use one of the following configurations, depending on whether the client supports SSE or streamable HTTP transport:

{
  "mcpServers": {
    "anycrawl": {
      "type": "sse",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/sse",
      "name": "AnyCrawl MCP Server",
      "description": "Web scraping and crawling tools"
    }
  }
}

or

{
  "mcpServers": {
    "AnyCrawl": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Environment Setup:

# Start SSE server with API key
ANYCRAWL_API_KEY=your-api-key-here npm run dev:sse

Cursor configuration for HTTP modes (streamable_http)

Configure Cursor to connect to your HTTP MCP server.

Local HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-local": {
      "type": "streamable_http",
      "url": "http://127.0.0.1:3000/mcp"
    }
  }
}

Cloud HTTP Streamable Server:

{
  "mcpServers": {
    "anycrawl-http-cloud": {
      "type": "streamable_http",
      "url": "https://mcp.anycrawl.dev/{API_KEY}/mcp"
    }
  }
}

Note: For HTTP modes, set ANYCRAWL_API_KEY (and optional host/port) in the server process environment or include the key in the URL. Cursor itself does not need the API key in its configuration when using streamable_http.
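
For example, a local streamable HTTP server can be started with the key in its own environment, reusing the npm scripts above:

# Start the local HTTP (streamable) server with the API key in the server's environment
ANYCRAWL_MODE=MCP ANYCRAWL_API_KEY=YOUR-API-KEY npm run start:mcp
# Cursor then connects to http://127.0.0.1:3000/mcp without the key in its own config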

Available Tools

1. Scrape Tool (anycrawl_scrape)

Scrape a single URL and extract content in various formats.

Best for:

  • Extracting content from a single page
  • Quick data extraction
  • Testing specific URLs

Parameters:

  • url (required): The URL to scrape
  • engine (required): Scraping engine (playwright, cheerio, puppeteer)
  • formats (optional): Output formats (markdown, html, text, screenshot, screenshot@fullPage, rawHtml, json)
  • proxy (optional): Proxy URL
  • timeout (optional): Timeout in milliseconds (default: 300000)
  • retry (optional): Whether to retry on failure (default: false)
  • wait_for (optional): Wait time for page to load
  • include_tags (optional): HTML tags to include
  • exclude_tags (optional): HTML tags to exclude
  • json_options (optional): Options for JSON extraction

Example:

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "engine": "cheerio",
    "formats": ["markdown", "html"],
    "timeout": 30000
  }
}

2. Crawl Tool (anycrawl_crawl)

Start a crawl job to scrape multiple pages from a website. By default this waits for completion and returns aggregated results using the SDK's client.crawl (defaults: poll every 3 seconds, timeout after 60 seconds).

Best for:

  • Extracting content from multiple related pages
  • Comprehensive website analysis
  • Bulk data collection

Parameters:

  • url (required): The base URL to crawl
  • engine (required): Scraping engine
  • max_depth (optional): Maximum crawl depth (default: 10)
  • limit (optional): Maximum number of pages (default: 100)
  • strategy (optional): Crawling strategy (all, same-domain, same-hostname, same-origin)
  • exclude_paths (optional): URL patterns to exclude
  • include_paths (optional): URL patterns to include
  • scrape_options (optional): Options for individual page scraping
  • poll_seconds (optional): Poll interval seconds for waiting (default: 3)
  • timeout_ms (optional): Overall timeout milliseconds for waiting (default: 60000)

Example:

{
  "name": "anycrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog",
    "engine": "playwright",
    "max_depth": 2,
    "limit": 50,
    "strategy": "same-domain",
    "poll_seconds": 3,
    "timeout_ms": 60000
  }
}

Returns: { "job_id": "...", "status": "completed", "total": N, "completed": N, "creditsUsed": N, "data": [...] }.

3. Crawl Status Tool (anycrawl_crawl_status)

Check the status of a crawl job.

Parameters:

  • job_id (required): The crawl job ID

Example:

{
  "name": "anycrawl_crawl_status",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396"
  }
}

4. Crawl Results Tool (anycrawl_crawl_results)

Get results from a crawl job.

Parameters:

  • job_id (required): The crawl job ID
  • skip (optional): Number of results to skip (for pagination)

Example:

{
  "name": "anycrawl_crawl_results",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396",
    "skip": 0
  }
}

5. Cancel Crawl Tool (anycrawl_cancel_crawl)

Cancel a pending crawl job.

Parameters:

  • job_id (required): The crawl job ID to cancel

Example:

{
  "name": "anycrawl_cancel_crawl",
  "arguments": {
    "job_id": "7a2e165d-8f81-4be6-9ef7-23222330a396"
  }
}

6. Search Tool (anycrawl_search)

Search the web using the AnyCrawl search engine.

Best for:

  • Finding specific information across multiple websites
  • Research and discovery
  • When you don't know which website has the information

Parameters:

  • query (required): Search query
  • engine (optional): Search engine (google)
  • limit (optional): Maximum number of results (default: 10)
  • offset (optional): Number of results to skip (default: 0)
  • pages (optional): Number of pages to search
  • lang (optional): Language code
  • country (optional): Country code
  • scrape_options (required): Options for scraping search results
  • safeSearch (optional): Safe search level (0=off, 1=moderate, 2=strict)

Example:

{
  "name": "anycrawl_search",
  "arguments": {
    "query": "latest AI research papers 2024",
    "engine": "google",
    "limit": 5,
    "scrape_options": {
      "engine": "cheerio",
      "formats": ["markdown"]
    }
  }
}

Output Formats

Markdown

Clean, structured markdown content perfect for LLM consumption.

HTML

Raw HTML content with all formatting preserved.

Text

Plain text content with minimal formatting.

Screenshot

Visual screenshot of the page.

Screenshot@fullPage

Full-page screenshot including content below the fold.

Raw HTML

Unprocessed HTML content.

JSON

Structured data extraction using custom schemas.
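
The exact shape of json_options is not documented here. As a rough sketch, assuming it accepts a JSON-Schema-style schema (verify against the AnyCrawl API reference), a structured-extraction call might look like:

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com/product",
    "engine": "cheerio",
    "formats": ["json"],
    "json_options": {
      "schema": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "price": { "type": "string" }
        }
      }
    }
  }
}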

Engines

Cheerio

  • Fast and lightweight
  • Good for static content
  • Server-side rendering

Playwright

  • Full browser automation
  • JavaScript rendering
  • Best for dynamic content

Puppeteer

  • Chrome/Chromium automation
  • Good balance of features and performance
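
As a concrete example of matching the engine to the content, a JavaScript-heavy page can be scraped with Playwright plus a short wait before extraction (wait_for is assumed here to be in milliseconds, consistent with timeout):

{
  "name": "anycrawl_scrape",
  "arguments": {
    "url": "https://example.com/dashboard",
    "engine": "playwright",
    "formats": ["markdown", "screenshot"],
    "wait_for": 2000
  }
}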

Error Handling

The server provides comprehensive error handling:

  • Validation Errors: Invalid parameters or missing required fields
  • API Errors: AnyCrawl API errors with detailed messages
  • Network Errors: Connection and timeout issues
  • Rate Limiting: Automatic retry with backoff

Logging

The server includes detailed logging:

  • Debug: Detailed operation information
  • Info: General operation status
  • Warn: Non-critical issues
  • Error: Critical errors and failures

Set log level with environment variable:

export LOG_LEVEL=debug  # debug, info, warn, error

Development

Prerequisites

  • Node.js 18+
  • npm

Setup

git clone <repository>
cd anycrawl-mcp
npm ci

Build

npm run build

Test

npm test

Lint

npm run lint

Format

npm run format

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details

Support

About AnyCrawl

AnyCrawl is a powerful Node.js/TypeScript crawler that turns websites into LLM-ready data and extracts structured SERP results from Google/Bing/Baidu/etc. It features native multi-threading for bulk processing and supports multiple output formats.

  • Website: https://anycrawl.dev
  • GitHub: https://github.com/any4ai/anycrawl
  • API: https://api.anycrawl.dev