# @rio-cloud/uikit-mcp
v1.1.8
MCP server that serves the captured RIO UI Kit documentation to Model Context Protocol clients.
## Overview
- Model Context Protocol (MCP) server that exposes the captured RIO UI Kit documentation as Markdown resources plus a keyword search tool.
- Ships with the full dataset; no network calls or external services are required at runtime.
- `search_docs` must always be used before `readResource`; `listResources` is a static inventory snapshot and not intended for guessing IDs.
- `readResource` returns rich Markdown (sections, examples, code tabs, props tables) for the URI returned by `search_docs`.
- Runtime output applies defensive sanitisation for unsafe link schemes and dangerous raw HTML outside fenced code blocks.
## MCP Client Configuration
Add the server to an MCP-aware client configuration (example for Codex CLI):
```toml
[mcp_servers.uikit]
command = "npx"
args = ["-y", "@rio-cloud/uikit-mcp"]
startup_timeout_sec = 30
```

## Preferred Usage: Native `$rio-uikit` Skill
- For day-to-day coding tasks, use the native `$rio-uikit` skill.
- The skill enforces the required MCP workflow (`search_docs` → `readResource`) and applies RIO UI Kit coding defaults.
- This repository remains fully usable as a standalone MCP server when a native skill runtime is not available.
## Available Resources & Tools
Use this section for standalone MCP usage only (that is, when the `$rio-uikit` skill is not in use).
- Resource namespace: `uikit-doc://<route>` (e.g. `uikit-doc://components/button`).
- Responses include category, section, source URL, captured timestamp, section bodies, rendered example HTML, code tabs, props tables, and See Also links.
- Resource IDs use the full hash path (e.g. `uikit-doc://start/guidelines/print-css`); always resolve IDs via `search_docs` instead of guessing or hardcoding them.
- Tools:
  - `search_docs`: FlexSearch-backed keyword lookup. Input `{ query: string; limit?: number }`, output `{ results: [...] }`. Always call `readResource` on returned URIs before acting.
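For standalone clients, the tool contract above can be sketched as TypeScript types. This is a hedged sketch: only `query`, `limit`, `results`, and the `uikit-doc://` URI are stated in this README; any other result fields are illustrative assumptions.

```typescript
// Sketch of the search_docs contract. Only query/limit/results and the
// uikit-doc:// URI come from the docs; other fields are assumptions.
interface SearchDocsInput {
  query: string;
  limit?: number;
}

interface SearchResult {
  uri: string; // e.g. "uikit-doc://components/button"; pass this to readResource
  [extra: string]: unknown; // other fields are not specified here
}

interface SearchDocsOutput {
  results: SearchResult[];
}

// Small guard: only hand uikit-doc:// URIs to readResource.
function isUikitDocUri(uri: string): boolean {
  return uri.startsWith("uikit-doc://");
}

const sample: SearchDocsOutput = {
  results: [{ uri: "uikit-doc://components/button" }],
};
console.log(sample.results.filter((r) => isUikitDocUri(r.uri)).length); // 1
```

The guard reflects the rule above: never hardcode or guess IDs, and only act on URIs that came back from `search_docs`.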
## Development
Note: The following sections are only relevant for contributors working on this repository.
### Requirements
- Node 22+ (tsdown target)
- Dev dependencies must be installed (tsx is required for the tsdown build hook and tests)
### Setup
```shell
git clone <repository>
cd uikit-mcp
npm install
npm run build:server
```

### Available Scripts
| Script | Description |
|--------|-------------|
| `npm run build:server` | Build server + generate docs (via tsdown hook) |
| `npm run mcp:server` | Build and start the server locally |
| `npm run crawl:full` | Run crawler and rebuild (full pipeline) |
| `npm test` | Run tests (builds first via `pretest`) |
### Architecture
The server uses a build-time compilation approach for fast startup:
Build time (`npm run build:server`):

- tsdown bundles `server/index.ts` → `dist/index.mjs`
- the `build:done` hook generates pre-compiled docs: `data/pages/**/*.json` → `dist/docs/**/*.md`
- creates `dist/doc-metadata.json` (metadata for the search index)
- copies `dist/search-synonyms.json` and `dist/version.json`
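The docs-generation step can be pictured with a minimal sketch. The captured-page JSON shape (`title`, `sections`) is an assumption for illustration, not the actual schema used by the build hook:

```typescript
// Illustrative only: turn a captured page JSON into a Markdown doc.
// The CapturedPage field names are assumptions, not the real schema.
interface CapturedPage {
  title: string;
  sections: { heading: string; body: string }[];
}

function pageToMarkdown(page: CapturedPage): string {
  const parts = [`# ${page.title}`];
  for (const s of page.sections) {
    parts.push(`## ${s.heading}`, s.body); // one H2 per captured section
  }
  return parts.join("\n\n");
}

const md = pageToMarkdown({
  title: "Button",
  sections: [{ heading: "Usage", body: "Use sparingly." }],
});
console.log(md.startsWith("# Button")); // true
```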
Runtime (server start):
- Loads 2 JSON files
- Builds FlexSearch index
- Markdown files are loaded on demand (lazy loading)
### Version Information
| Context | Path | Description |
|---------|------|-------------|
| Source (repository) | `data/version.json` | Written by the crawler; source of truth |
| Published (npm package) | `dist/version.json` | Copied during the build |
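Since `data/version.json` is the source of truth for the captured dataset, the crawler's skip-if-unchanged decision reduces to a comparison against the current upstream version. A hedged sketch of the assumed logic (the function name is illustrative):

```typescript
// Assumed decision logic: re-crawl only when the captured version
// differs from upstream, or when --force is given.
function shouldCrawl(captured: string, upstream: string, force: boolean): boolean {
  return force || captured !== upstream;
}

console.log(shouldCrawl("1.1.8", "1.1.8", false)); // false: dataset is current
console.log(shouldCrawl("1.1.7", "1.1.8", false)); // true: upstream moved
```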
### Crawler Scripts
Crawler scripts live in `crawler/` and stay out of the npm bundle:
- Download Chromium once: `npm run setup:crawler`
- One-shot full run: `npm run crawl:full` (ensures Playwright Chromium, runs `crawl:navigation`, `capture:all -- --force --concurrency=5`, then `npm test`)
- Refresh navigation snapshot: `npm run crawl:navigation`
- Capture the full dataset: `npm run capture:all` with optional flags:
  - `--retries=3` (per-route retry count)
  - `--concurrency=5` (parallel workers; 5 recommended for a full crawl)
  - `--force` (run the full crawl even when `data/version.json` already matches the current upstream version)
- `capture:all` always performs a transactional full crawl: the current UI Kit version is checked before any mutation, fresh artefacts are written into staging, and `data/` is only replaced after a fully successful run.
- `data/version.json` is updated only after a successful full crawl and remains the marker for a complete captured dataset.
- Route-specific note: `#start/changelog` is captured with a dedicated parser that groups the modern version cards (badges preserved) and ignores the legacy "Show older versions" list.
## Package Contents
The published npm package contains only:
```
dist/
├── index.mjs            # Server binary (with shebang)
├── docs/**/*.md         # Pre-rendered Markdown files
├── doc-metadata.json    # Metadata for search index
├── search-synonyms.json
└── version.json
README.md
LICENSE
```

No `data/` or `crawler/` directories are included.
## License
Licensed under the Apache License 2.0.
