
@rio-cloud/uikit-mcp

v1.1.8


MCP server that serves the captured RIO UI Kit documentation to Model Context Protocol clients.


@rio-cloud/uikit-mcp

Overview

  • Model Context Protocol (MCP) server that exposes the captured RIO UI Kit documentation as Markdown resources plus a keyword search tool.
  • Ships with the full dataset; no network calls or external services are required at runtime.
  • search_docs must always be used before readResource; listResources is a static inventory snapshot and is not intended for guessing IDs.
  • readResource returns rich Markdown (sections, examples, code tabs, props tables) for the URI returned by search_docs.
  • Runtime output applies defensive sanitisation for unsafe link schemes and dangerous raw HTML outside fenced code blocks.
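The sanitisation step can be pictured with a small sketch. This is not the package's actual implementation; the function name, allow-list, and replacement target are assumptions for illustration. Links with unsafe schemes are neutralised, while fenced code blocks are left untouched:

```typescript
// Hypothetical sketch of defensive Markdown sanitisation: rewrite inline
// links whose scheme is not on an allow-list, but leave fenced code
// blocks untouched. Names and behaviour are illustrative only.
const SAFE_SCHEMES = new Set(["http:", "https:", "mailto:"]);

function sanitizeMarkdown(md: string): string {
  let inFence = false;
  return md
    .split("\n")
    .map((line) => {
      if (line.trimStart().startsWith("```")) {
        inFence = !inFence;
        return line;
      }
      if (inFence) return line; // code blocks pass through unchanged
      // Rewrite [text](scheme:...) links with unsafe schemes to a no-op target.
      return line.replace(
        /\[([^\]]*)\]\(\s*([a-zA-Z][\w+.-]*):[^)]*\)/g,
        (match, text, scheme) =>
          SAFE_SCHEMES.has(`${scheme.toLowerCase()}:`) ? match : `[${text}](#)`
      );
    })
    .join("\n");
}
```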

MCP Client Configuration

Add the server to an MCP-aware client configuration (example for Codex CLI):

[mcp_servers.uikit]
command = "npx"
args = ["-y", "@rio-cloud/uikit-mcp"]
startup_timeout_sec = 30
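For clients that use the common `mcpServers` JSON layout instead of TOML (e.g. Claude Desktop), an equivalent entry would look like the sketch below; this mirrors the TOML example above and assumes the client follows that convention:

```json
{
  "mcpServers": {
    "uikit": {
      "command": "npx",
      "args": ["-y", "@rio-cloud/uikit-mcp"]
    }
  }
}
```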

Preferred Usage: Native $rio-uikit Skill

  • For day-to-day coding tasks, use the native $rio-uikit skill.
  • The skill enforces the required MCP workflow (search_docs → readResource) and applies RIO UI Kit coding defaults.
  • This repository remains fully usable as a standalone MCP server when a native skill runtime is not available.

Available Resources & Tools

Use this section for standalone MCP usage only (that is, when $rio-uikit is not being used).

  • Resource namespace: uikit-doc://<route> (e.g. uikit-doc://components/button).
    • Response includes category, section, source URL, captured timestamp, section bodies, rendered example HTML, code tabs, props tables, and See Also links.
    • Resource IDs use the full hash path (e.g. uikit-doc://start/guidelines/print-css); always resolve via search_docs instead of guessing or hardcoding IDs.
  • Tools
    • search_docs — FlexSearch-backed keyword lookup. Input { query: string; limit?: number }, output { results: [...] }. Always call readResource on returned URIs before acting.
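The tool contract above can be written down as TypeScript types with a minimal input guard. Only `query`, `limit`, and `results` come from the source; the shape of a result element (uri/title/score) is an assumption for illustration:

```typescript
// Sketch of the search_docs contract described above. The result element
// fields beyond `uri` are assumed, not documented.
interface SearchDocsInput {
  query: string;
  limit?: number;
}

interface SearchDocsResult {
  uri: string;    // e.g. "uikit-doc://components/button"
  title?: string; // assumed field
  score?: number; // assumed field
}

interface SearchDocsOutput {
  results: SearchDocsResult[];
}

// Minimal runtime validation of the input shape before invoking the tool.
function isSearchDocsInput(v: unknown): v is SearchDocsInput {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.query === "string" &&
    (o.limit === undefined || typeof o.limit === "number")
  );
}
```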

Development

Note: The following sections are only relevant for contributors working on this repository.

Requirements

  • Node 22+ (tsdown target)
  • Dev dependencies must be installed (tsx is required for the tsdown build hook and tests)

Setup

git clone <repository>
cd uikit-mcp
npm install
npm run build:server

Available Scripts

| Script | Description |
|--------|-------------|
| npm run build:server | Build server + generate docs (via tsdown hook) |
| npm run mcp:server | Build and start server locally |
| npm run crawl:full | Run crawler and rebuild (full pipeline) |
| npm test | Run tests (builds first via pretest) |

Architecture

The server uses a build-time compilation approach for fast startup:

  1. Build time (npm run build:server):

    • tsdown bundles server/index.ts → dist/index.mjs
    • build:done hook generates pre-compiled docs:
      • data/pages/**/*.json → dist/docs/**/*.md
      • Creates dist/doc-metadata.json (metadata for search index)
      • Copies dist/search-synonyms.json and dist/version.json
  2. Runtime (server start):

    • Loads 2 JSON files
    • Builds FlexSearch index
    • Markdown files loaded on-demand (lazy loading)

Version Information

| Context | Path | Description |
|---------|------|-------------|
| Source (Repository) | data/version.json | Written by crawler, source of truth |
| Published (npm Package) | dist/version.json | Copied during build |

Crawler Scripts

Crawler scripts live in crawler/ and stay out of the npm bundle:

  • Download Chromium once: npm run setup:crawler
  • One-shot full run: npm run crawl:full (ensures Playwright Chromium is installed, runs crawl:navigation, then capture:all -- --force --concurrency=5, then npm test)
  • Refresh navigation snapshot: npm run crawl:navigation
  • Capture the full dataset: npm run capture:all with optional flags:
    • --retries=3 (per-route retry count)
    • --concurrency=5 (parallel workers; 5 recommended for full crawl)
    • --force (run the full crawl even when data/version.json already matches the current upstream version)
    • capture:all always performs a transactional full crawl: the current UI Kit version is checked before any mutation, fresh artefacts are written into staging, and data/ is only replaced after a fully successful run.
    • data/version.json is updated only after a successful full crawl and remains the marker for a complete captured dataset.
  • Route-specific note: #start/changelog is captured with a dedicated parser that groups the modern version cards (badges preserved) and ignores the legacy “Show older versions” list.

Package Contents

The published npm package contains only:

dist/
├── index.mjs           # Server binary (with shebang)
├── docs/**/*.md        # Pre-rendered Markdown files
├── doc-metadata.json   # Metadata for search index
├── search-synonyms.json
└── version.json
README.md
LICENSE

No data/ or crawler/ directories are included.

License

Licensed under the Apache License 2.0.