@glutamateapp/docsassist

v1.0.2

Documentation Assistant MCP server for processing and responding to documentation queries

Documentation Assistant MCP Server

A Model Context Protocol (MCP) server implementation for scraping, indexing, and searching documentation using Server-Sent Events (SSE).

Features

  • SSE-based communication
  • Documentation scraping and indexing with background job support
  • Full-text search capabilities with content segment highlighting
  • Local caching of documentation and sitemaps
  • Intelligent sitemap generation
  • Built with TypeScript and Express
  • Follows MCP specifications

Installation

npm install

Building

npm run build

Running the Server

# Start with default port (9031)
npm start

# Start with custom port
node dist/index.js --port=9006

# Start with custom cache directory
node dist/index.js --cache-dir=/path/to/cache

The server will start on port 9031 by default.

API Endpoints

  • SSE Connection: GET http://localhost:9031/sse
  • Message Endpoint: POST http://localhost:9031/messages
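
For example, an MCP client can connect to the SSE endpoint and list the tools described below. The following is a minimal sketch, assuming the server implements the standard MCP SSE handshake and that the official @modelcontextprotocol/sdk client package is installed; the client name, version, and port are placeholders.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Connect over SSE; the transport handles the endpoint handshake and routes
// outgoing JSON-RPC messages to the POST /messages endpoint.
const transport = new SSEClientTransport(new URL("http://localhost:9031/sse"));
const client = new Client({ name: "docs-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// List the tools exposed by the server (expected to include the six tools below).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));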

Available Tools

1. get_cache_info

Returns information about the cache location and contents.

{
  // No parameters required
}
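
On the wire, invoking this tool is just a standard MCP tools/call request with an empty arguments object. A hypothetical request body is sketched below; an MCP client library (such as the one in the connection sketch above) builds and sends this for you, and the id is arbitrary.

// Hypothetical JSON-RPC body for invoking get_cache_info via POST /messages.
const getCacheInfoRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "get_cache_info", arguments: {} },
};

console.log(JSON.stringify(getCacheInfoRequest));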

2. get_sitemap

Creates or retrieves a sitemap by crawling the site for metadata and link structure. Returns a job ID for tracking progress.

{
  url: string; // The URL of the documentation site
}
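
A hedged sketch of starting a sitemap job, reusing the hypothetical client setup from the API Endpoints section; the documentation URL is a placeholder, and the exact shape of the response is an assumption.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Client setup as in the connection sketch under "API Endpoints".
const client = new Client({ name: "docs-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(new SSEClientTransport(new URL("http://localhost:9031/sse")));

// Start (or fetch) a sitemap for a documentation site; the returned content is
// expected to include a job ID that can be passed to get_job_status.
const result = await client.callTool({
  name: "get_sitemap",
  arguments: { url: "https://example.com/docs" }, // placeholder documentation URL
});
console.log(JSON.stringify(result, null, 2));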

3. get_job_status

Returns the current status, progress, and results of a background job.

{
  jobId: string; // The ID of the job to check
}

4. list_jobs

Returns a list of all background jobs with their current status and progress.

{
  // No parameters required
}

5. scrape_docs

Scrapes and indexes pages matching a search query. Uses cached data if available unless forceScrape=true. Returns a job ID for tracking progress.

{
  url: string;      // The URL of the documentation to scrape
  query: string;    // The search query to match content
  maxResults?: number;  // Maximum number of matching results (default: 5)
  forceScrape?: boolean;  // Whether to force a new scrape or use cache (default: false)
}
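
A hedged example of kicking off a scrape job with the hypothetical client from the API Endpoints section; the URL and query are placeholders, and the optional parameters are shown with their documented defaults.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Client setup as in the connection sketch under "API Endpoints".
const client = new Client({ name: "docs-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(new SSEClientTransport(new URL("http://localhost:9031/sse")));

// Start a scrape-and-index job; the response is expected to include a job ID
// that can be polled with get_job_status (see "Background Jobs" below).
const result = await client.callTool({
  name: "scrape_docs",
  arguments: {
    url: "https://example.com/docs", // placeholder documentation URL
    query: "authentication",         // placeholder query
    maxResults: 5,
    forceScrape: false,
  },
});
console.log(JSON.stringify(result, null, 2));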

6. search_docs

Searches previously scraped documentation across one or more URLs. Returns relevant matches with title, description, URL, and content segments, sorted by relevance.

{
  url: string | string[];  // Single URL or array of URLs to search
  query: string;           // The search query
  maxResults?: number;     // Maximum number of results (default: 10)
  maxSegments?: number;    // Maximum number of content segments per result (default: 3)
}
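
A sketch of searching across several previously scraped sites, again using the hypothetical client setup from the API Endpoints section and placeholder URLs.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Client setup as in the connection sketch under "API Endpoints".
const client = new Client({ name: "docs-client", version: "0.1.0" }, { capabilities: {} });
await client.connect(new SSEClientTransport(new URL("http://localhost:9031/sse")));

// Search previously scraped documentation across two sites at once.
const result = await client.callTool({
  name: "search_docs",
  arguments: {
    url: ["https://example.com/docs", "https://example.org/guide"], // placeholder URLs
    query: "rate limits", // placeholder query
    maxResults: 10,
    maxSegments: 3,
  },
});
console.log(JSON.stringify(result, null, 2));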

Cache Directory Structure

The server uses a local cache directory to store scraped documentation and sitemaps. By default, it's located at:

.cache/
  |- {base64_url}_sitemap.json  // Sitemap cache
  |- {base64_url}.json         // Documentation content cache

You can configure the cache directory location using:

  • Command line: --cache-dir=/path/to/cache
  • Environment variable: DOCS_CACHE_DIR=/path/to/cache
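
For illustration only, a helper deriving a cache file path from the naming scheme above might look like the following; the exact encoding the server uses is an assumption.

import * as path from "node:path";

// Illustration only: the {base64_url} placeholder above suggests the cache key
// is a base64-encoded URL. The exact variant (base64 vs. base64url, padded or
// not) used by the server is an assumption here.
function sitemapCachePath(cacheDir: string, url: string): string {
  const key = Buffer.from(url).toString("base64");
  return path.join(cacheDir, `${key}_sitemap.json`);
}

console.log(sitemapCachePath(process.env.DOCS_CACHE_DIR ?? ".cache", "https://example.com/docs"));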

Background Jobs

The server supports long-running operations through a background job system. A job can be in one of the following states:

  • PENDING: Job created but not started
  • RUNNING: Job is currently executing
  • COMPLETED: Job finished successfully
  • FAILED: Job encountered an error

Use the get_job_status tool to monitor job progress and retrieve results.
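
A sketch of such a polling loop, assuming the connected client from the API Endpoints section and that get_job_status returns the job record as JSON text in its first content item, with a status field matching the states above.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical helper: polls get_job_status once per second until the job
// reaches a terminal state. The response shape is an assumption.
async function waitForJob(client: Client, jobId: string) {
  for (;;) {
    const result = await client.callTool({ name: "get_job_status", arguments: { jobId } });
    const content = (result as any).content as Array<{ type: string; text?: string }>;
    const job = JSON.parse(content?.[0]?.text ?? "{}");
    if (job.status === "COMPLETED" || job.status === "FAILED") {
      return job;
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}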

License

MIT