# @rio-cloud/uikit-mcp
v1.1.11
MCP server that serves the captured RIO UI Kit documentation to Model Context Protocol clients.
## Overview
- Model Context Protocol (MCP) server that exposes the captured RIO UI Kit documentation as Markdown resources plus a keyword search tool.
- Ships with the full dataset: no network calls or external services are required at runtime.
- `search_docs` must always be used before `readResource`; `listResources` is a static inventory snapshot and not intended for guessing IDs.
- `readResource` returns rich Markdown (sections, examples, code tabs, props tables) for the URI returned by `search_docs`.
- Runtime output applies defensive sanitisation to unsafe link schemes and dangerous raw HTML outside fenced code blocks.
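The sanitisation step can be pictured with a small sketch. The helper below is illustrative only (the package's actual implementation is internal): it neutralises Markdown links with unsafe schemes such as `javascript:` while leaving fenced code blocks untouched.

```typescript
// Illustrative sketch of defensive Markdown sanitisation; names and the
// exact scheme list are assumptions, not the package's real API.
const UNSAFE_SCHEMES = /^(javascript|data|vbscript):/i;

function sanitizeMarkdown(markdown: string): string {
  // Split on fenced code blocks; odd-indexed parts are the fences themselves.
  const parts = markdown.split(/(```[\s\S]*?```)/);
  return parts
    .map((part, i) =>
      i % 2 === 1
        ? part // leave fenced code blocks untouched
        : part.replace(
            /\[([^\]]*)\]\(([^)]*)\)/g,
            (m: string, text: string, href: string) =>
              UNSAFE_SCHEMES.test(href.trim()) ? `[${text}](#)` : m
          )
    )
    .join("");
}
```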
## MCP Client Configuration
Add the server to an MCP-aware client configuration (example for Codex CLI):

```toml
[mcp_servers.uikit]
command = "npx"
args = ["-y", "@rio-cloud/uikit-mcp"]
startup_timeout_sec = 30
```

## Available Resources & Tools
Use this section for standalone MCP usage only (that is, when `$rio-uikit` is not being used).
- Resource namespace: `uikit-doc://<route>` (e.g. `uikit-doc://components/button`). Each response includes category, section, source URL, captured timestamp, section bodies, rendered example HTML, code tabs, props tables, and See Also links.
- Resource IDs use the full hash path (e.g. `uikit-doc://start/guidelines/print-css`); always resolve them via `search_docs` instead of guessing or hardcoding IDs.
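For orientation, a client-side sketch of how such a URI maps onto a documentation route (illustrative only; `search_docs` remains the supported way to obtain valid URIs):

```typescript
const SCHEME = "uikit-doc://";

// Extract the documentation route from a uikit-doc URI, or return null
// if the URI does not use the expected scheme.
function routeFromUri(uri: string): string | null {
  return uri.startsWith(SCHEME) ? uri.slice(SCHEME.length) : null;
}
```

For example, `routeFromUri("uikit-doc://components/button")` yields `"components/button"`.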
- Tools:
  - `search_docs`: FlexSearch-backed keyword lookup. Input `{ query: string; limit?: number }`, output `{ results: [...] }`. Always call `readResource` on returned URIs before acting. Some results may include a matched heading (for example `Foundations: Pagination`) while still pointing to the real resource URI; after reading the resource, use that heading to locate the relevant section.
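A minimal sketch of how a client might consume `search_docs` output before calling `readResource`. Only `uri` and an optional matched heading are described above; the field names in these interfaces are assumptions.

```typescript
// Assumed shapes for the search_docs tool; only the input shape
// { query; limit? } and output shape { results } come from the docs.
interface SearchDocsInput { query: string; limit?: number; }
interface SearchResult { uri: string; heading?: string; }
interface SearchDocsOutput { results: SearchResult[]; }

// Pick the URI to pass to readResource, keeping the matched heading so
// the relevant section can be located after the resource is read.
function pickResource(out: SearchDocsOutput): { uri: string; heading?: string } | null {
  const top = out.results[0];
  return top ? { uri: top.uri, heading: top.heading } : null;
}
```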
## Development
Note: The following sections are only relevant for contributors working on this repository.
### Requirements
- Node 24+ and npm 11+
- Dev dependencies must be installed (`tsx` is required for the tsdown build hook and tests)
- The repository includes a root `.npmrc` with `ignore-scripts=true`; keep installs repo-local so the npm lifecycle-script baseline is enforced consistently.
### Setup
```shell
git clone <repository>
cd uikit-mcp
npm install
npm run build:server
```

If you need Playwright browser binaries for crawling, install them explicitly after `npm install`:

```shell
npm run setup:crawler
```

### Development Skills
This repository also keeps repo-versioned Codex skills under `development-skills/` for recurring maintenance workflows:

- `uikit-mcp-vulnerability-check`: Use when checking npm vulnerabilities, assessing whether repository code paths are actually affected, and deciding between remediation, deferral, or accepted risk.
- `uikit-mcp-dependency-update-check`: Use when checking npm dependency updates and preparing actionable update plans only for policy-compliant candidates.
### Available Scripts
| Script | Description |
|--------|-------------|
| `npm run build:server` | Build server + generate docs (via tsdown hook) |
| `npm run mcp:server` | Build and start the server locally |
| `npm run crawl:full` | Run crawler and rebuild (full pipeline) |
| `npm test` | Run tests (builds first via `pretest`) |
### Architecture
The server uses a build-time compilation approach for fast startup:
Build time (`npm run build:server`):

- tsdown bundles `server/index.ts` → `dist/index.mjs`
- The `build:done` hook generates pre-compiled docs: `data/pages/**/*.json` → `dist/docs/**/*.md`
- Creates `dist/doc-metadata.json` (metadata for the search index)
- Copies `dist/search-synonyms.json` and `dist/version.json`
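The docs-generation direction of that hook can be sketched as follows. The real captured-page JSON schema is internal to this repository; the `CapturedPage` shape below is purely an assumption used to illustrate the JSON-to-Markdown step.

```typescript
// Hypothetical sketch of the build-time JSON -> Markdown conversion.
// The field names here are assumptions, not the repository's schema.
interface CapturedSection { heading: string; body: string; }
interface CapturedPage { title: string; sections: CapturedSection[]; }

function pageToMarkdown(page: CapturedPage): string {
  const parts = [`# ${page.title}`];
  for (const s of page.sections) {
    parts.push(`## ${s.heading}`, s.body); // one H2 per captured section
  }
  return parts.join("\n\n") + "\n";
}
```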
Runtime (server start):

- Loads 2 JSON files
- Builds the FlexSearch index
- Adds search-only aliases for headings inside the single Foundations resource
- Markdown files are loaded on demand (lazy loading)
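The lazy-loading idea can be sketched as a memoised loader. This is illustrative only (the server's actual loader is internal); `read` stands in for reading a Markdown file from `dist/docs`.

```typescript
// Sketch of on-demand resource loading with an in-memory cache.
// The underlying read happens at most once per path.
function makeLazyLoader(read: (path: string) => string) {
  const cache = new Map<string, string>();
  return (path: string): string => {
    let doc = cache.get(path);
    if (doc === undefined) {
      doc = read(path); // read only on first access
      cache.set(path, doc);
    }
    return doc;
  };
}
```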
### Version Information
| Context | Path | Description |
|---------|------|-------------|
| Source (repository) | `data/version.json` | Written by the crawler; source of truth |
| Published (npm package) | `dist/version.json` | Copied during build |
### Crawler Scripts
Crawler scripts live in `crawler/` and stay out of the npm bundle:
- Download Chromium once: `npm run setup:crawler`
- One-shot full run: `npm run crawl:full` (ensures Playwright Chromium, runs `crawl:navigation` and `capture:all -- --force --concurrency=5`, then `npm test`)
- Refresh the navigation snapshot: `npm run crawl:navigation`
- Capture the full dataset: `npm run capture:all`, with optional flags:
  - `--retries=3` (per-route retry count)
  - `--concurrency=5` (parallel workers; 5 recommended for a full crawl)
  - `--force` (run the full crawl even when `data/version.json` already matches the current upstream version)

  `capture:all` always performs a transactional full crawl: the current UI Kit version is checked before any mutation, fresh artefacts are written into staging, and `data/` is only replaced after a fully successful run. `data/version.json` is updated only after a successful full crawl and remains the marker for a complete captured dataset.
- Route-specific note: `#start/changelog` is captured with a dedicated parser that groups the modern version cards (badges preserved) and ignores the legacy “Show older versions” list.
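The transactional staging-then-swap behaviour of `capture:all` described above can be sketched as follows. This is a simplified illustration (the real crawler's transaction logic is more involved, and the function name is hypothetical): artefacts are written into a staging directory and only moved over the live data directory once the whole run has succeeded.

```typescript
import { renameSync, rmSync, existsSync } from "node:fs";

// Swap a fully written staging directory into place, keeping the old
// data directory until the rename has succeeded.
function commitStaging(stagingDir: string, dataDir: string): void {
  const backup = dataDir + ".old";
  if (existsSync(dataDir)) renameSync(dataDir, backup); // preserve old data
  renameSync(stagingDir, dataDir);                      // atomic-ish swap
  if (existsSync(backup)) rmSync(backup, { recursive: true });
}
```

A rename-based swap keeps the live `data/` directory either fully old or fully new; a failed crawl simply leaves the staging directory behind without touching published data.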
## Package Contents
The published npm package contains only:
```
dist/
├── index.mjs            # Server binary (with shebang)
├── docs/**/*.md         # Pre-rendered Markdown files
├── doc-metadata.json    # Metadata for search index
├── search-synonyms.json
└── version.json
README.md
LICENSE
```

No `data/` or `crawler/` directories are included.
## License
Licensed under the Apache License 2.0.
