
@robot-resources/scraper-mcp

MCP server for Scraper — context compression for AI agents.

What is Robot Resources?

Human Resources, but for your AI agents.

Robot Resources gives AI agents two superpowers:

  • Router — Routes each LLM call to the cheapest capable model. 60-90% cost savings across OpenAI, Anthropic, and Google.
  • Scraper — Compresses web pages to clean markdown. 70-80% fewer tokens per page.

Both run locally. Your API keys never leave your machine. Free, unlimited, no tiers.

Install the full suite

npx robot-resources

One command sets up everything. Learn more at robotresources.ai


About this MCP server

This package gives AI agents two tools that compress web content into token-efficient markdown via the Model Context Protocol: single-page compression and multi-page breadth-first (BFS) crawling.
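
Both tools are exposed through MCP's standard tool interface, so any MCP client can discover them with a JSON-RPC tools/list request and invoke them with tools/call; the Tools section below includes sample calls. A minimal discovery request, with the framing taken from the MCP specification rather than this package's source:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}

The server responds with both tool definitions, including the parameter schemas documented below.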

Installation

npx @robot-resources/scraper-mcp

Or install globally:

npm install -g @robot-resources/scraper-mcp

Claude Desktop Configuration

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "scraper": {
      "command": "npx",
      "args": ["-y", "@robot-resources/scraper-mcp"]
    }
  }
}

Tools

scraper_compress_url

Compress a single web page into markdown with 70-90% fewer tokens.

Parameters:

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| url | string | yes | — | URL to compress |
| mode | string | no | 'auto' | 'fast', 'stealth', 'render', or 'auto' |
| timeout | number | no | 10000 | Fetch timeout in milliseconds |
| maxRetries | number | no | 3 | Max retry attempts (0-10) |

Example prompt: "Compress https://docs.example.com/getting-started"

scraper_crawl_url

Crawl multiple pages from a starting URL using BFS link discovery.

Parameters:

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| url | string | yes | — | Starting URL to crawl |
| maxPages | number | no | 10 | Max pages to crawl (1-100) |
| maxDepth | number | no | 2 | Max link depth (0-5) |
| mode | string | no | 'auto' | 'fast', 'stealth', 'render', or 'auto' |
| include | string[] | no | — | URL patterns to include (glob) |
| exclude | string[] | no | — | URL patterns to exclude (glob) |
| timeout | number | no | 10000 | Per-page timeout in milliseconds |

Example prompt: "Crawl the docs at https://docs.example.com with max 20 pages"

Fetch Modes

| Mode | How | Use when |
|------|-----|----------|
| 'fast' | Plain HTTP | Default sites, APIs, docs |
| 'stealth' | TLS fingerprint impersonation | Anti-bot protected sites |
| 'render' | Headless browser (Playwright) | JS-rendered SPAs |
| 'auto' | Fast → stealth fallback on 403/challenge | Unknown sites (default) |

The 'stealth' mode requires impit and the 'render' mode requires playwright; both are peer dependencies of @robot-resources/scraper.
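
If you plan to use the stealth or render modes, a typical setup installs those peer dependencies in the project where the server runs, for example:

npm install impit playwright

Note that playwright also needs its browser binaries, usually fetched with npx playwright install.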

Requirements

  • Node.js 18+

License

MIT