websum-mcp

An MCP server for fetching the content of web pages as markdown and optionally summarizing it by asking an LLM to extract relevant snippets to reduce the token footprint.

Use case: a web-fetching tool for local LLMs with limited context size. It can be used as a drop-in replacement for the webfetch tool in Claude Code or opencode (or other coding TUIs).

Features

  • Fetch web pages via URL.
  • Convert HTML content to Markdown.
  • Summarize content using your LLM of choice when content size exceeds a configurable limit (see the sketch after this list).
    • Supports any OpenAI-compatible API.
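
In code, the flow roughly amounts to the sketch below. This is a minimal illustration, not the actual implementation: the turndown and openai packages, the estimateTokens heuristic, and the buildPrompt helper (which assembles the extraction prompt shown in the next section) are all assumptions made for the example.

import TurndownService from "turndown";
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.BASE_URL, // mandatory, e.g. http://localhost:8080/v1
  apiKey: process.env.API_KEY ?? "no-key-required",
});

// Crude token estimate: roughly 4 characters per token.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

async function fetchAsMarkdown(url: string, context?: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const markdown = new TurndownService().turndown(html);

  // Small pages are returned as-is; only oversized ones hit the summarizer.
  if (estimateTokens(markdown) <= Number(process.env.MAX_TOKENS ?? 4096)) {
    return markdown;
  }

  // Ask the summarizer model for verbatim excerpts.
  const completion = await client.chat.completions.create({
    model: process.env.MODEL_NAME ?? "gpt-oss-20b",
    messages: [{ role: "user", content: buildPrompt(markdown, context) }],
  });
  return completion.choices[0].message.content ?? "";
}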

Prompt sent to the summarizing LLM (as defined in src/services/summarizer.ts):

You are a High-Fidelity Snippet Extractor. Your task is to read a web page dump in markdown format and output a handful of relevant excerpts. You must act as a precise filter: discarding noise while preserving key signal from the original document.

### RULES:
- **VERBATIM ONLY**: Do not rewrite, summarize, or fix grammar. Copy-paste exactly. No greetings, commentary, meta-text or reasoning in output.
- **NO WEB NOISE**: Aggressively remove navigation menus, footer links, "sign up" forms, "related articles", cookie warnings, etc.
- **FACTUAL**: Keep as many technical details as possible (such as code snippets) if relevant to the subject at hand.
- Prefer extracting whole paragraphs over fragmented sentences.
- Keep it short and to the point.

### FOCUS CONTEXT: 
The user is specifically looking for information matching this description: "(user-provided context)"

### SOURCE DOCUMENT:
<DOCUMENT_START>
(requested url content in markdown format)
<DOCUMENT_END>

Based on the FOCUS CONTEXT, generate the list of verbatim excerpts from the SOURCE DOCUMENT now.
Output:
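
For illustration, a buildPrompt helper like the one referenced in the pipeline sketch above could fill in the template's two placeholders. Both names are hypothetical; the real code in src/services/summarizer.ts may be structured differently.

// EXTRACTION_PROMPT is assumed to hold the template above, verbatim.
declare const EXTRACTION_PROMPT: string;

// Hypothetical helper: substitutes the user context and the fetched page
// into the prompt's two placeholders.
function buildPrompt(markdown: string, context?: string): string {
  return EXTRACTION_PROMPT
    .replace("(user-provided context)", context ?? "the page's main content")
    .replace("(requested url content in markdown format)", markdown);
}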

Tools

fetch_url

Fetch a webpage, convert to markdown, and summarize if necessary.

Parameters:

  • url (string, required): The URL to fetch.
  • context (string, optional): The specific information you need from the page to ensure a relevant summary.
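
Over MCP's JSON-RPC transport, a call to this tool takes the standard tools/call shape; the url and context values below are just examples:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fetch_url",
    "arguments": {
      "url": "https://example.com/docs",
      "context": "installation steps and supported platforms"
    }
  }
}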

Configuration

The server is configured via environment variables:

Mandatory parameter:

  • BASE_URL: The base URL of the OpenAI-compatible API.
    • e.g., https://api.openai.com/v1 or http://localhost:8080/v1

Optional parameters:

  • API_KEY: The key for the API (default: no-key-required)
  • MODEL_NAME: The name of the summarization model (default: gpt-oss-20b)
  • MAX_TOKENS: The maximum number of tokens allowed before summarization is triggered (default: 4096)
  • MAX_CONTEXT_LENGTH: The maximum context length for the summarization model (default: 131072)
  • REQUEST_TIMEOUT: URL fetching timeout in seconds (default: 10)
  • SUMMARIZER_TIMEOUT: Summarizer API timeout in seconds (default: 600)
  • MAX_PAYLOAD_SIZE: Maximum size of the HTTP response content, in MB (default: 10)
  • USER_AGENT: The User-Agent header sent with fetch requests (defaults to a sensible value)
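
For a quick standalone run (for example against the MCP inspector mentioned under Development), the variables can be set inline; the values below are illustrative:

BASE_URL=http://localhost:8080/v1 \
MODEL_NAME=gpt-oss-20b \
MAX_TOKENS=4096 \
npx -y websum-mcp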

Example outputs

Open-weight model gpt-oss-20b performs surprisingly well. I am using Unsloth's F16 version running on llama.cpp with temperature = 0 and reasoning: low (and otherwise the recommended parameters).

  • Example 1:
  • Example 2:
    • URL: https://github.com/ggerganov/llama.cpp
    • Context: "Extract supported model formats, hardware requirements, and basic usage example."
    • View output from example 2
  • Example 3:
Installation & Configuration

This package is available on npm: https://www.npmjs.com/package/websum-mcp

Claude Code config

{
  "mcpServers": {
    "websum": {
      "command": "npx",
      "args": ["-y", "websum-mcp"],
      "env": {
        "BASE_URL": "http://localhost:8080/v1",
        "API_KEY": "no-key-required",
        "MODEL_NAME": "gpt-oss-20b",
        "MAX_TOKENS": "4096",
        "MAX_CONTEXT_LENGTH": "131072"
      }
    }
  }
}
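
Alternatively, if your Claude Code CLI version provides the claude mcp add-json subcommand, the same server can be registered without editing JSON by hand (shown as an assumption; check claude mcp --help on your version):

claude mcp add-json websum '{"command":"npx","args":["-y","websum-mcp"],"env":{"BASE_URL":"http://localhost:8080/v1"}}'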

opencode config

Add to the mcp section of your opencode.json config file:

{
  "mcp": {
    "websum": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "websum-mcp"
      ],
      "environment": {
        "BASE_URL": "http://localhost:8080/v1",
        "API_KEY": "no-key-required",
        "MODEL_NAME": "gpt-oss-20b",
        "MAX_TOKENS": "4096",
        "MAX_CONTEXT_LENGTH": "131072"
      },
      "enabled": true
    }
  }
}

Docker

docker build -t websum-mcp .
docker run -i websum-mcp
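
Since BASE_URL is mandatory, the container needs the configuration passed in as environment variables. The -e flags below are standard docker run syntax; host.docker.internal is one way to reach a summarizer running on the host under Docker Desktop, and may need adjusting for your setup:

docker run -i \
  -e BASE_URL=http://host.docker.internal:8080/v1 \
  -e MODEL_NAME=gpt-oss-20b \
  websum-mcp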

Development

  1. Install dependencies:
    npm install
  2. Build:
    npm run build
  3. Run tests:
    npm test
  4. Locally test the MCP server:
    npx @modelcontextprotocol/inspector npx -y websum-mcp