ownrig-mcp v1.0.0
# OwnRig MCP Server
AI hardware compatibility data for any MCP-compatible assistant. Query 50 models, 25 devices, 14 builds, 9 ready-to-buy machines, and 663 compatibility entries.
Transport: stdio
## Install

```sh
npm install -g ownrig-mcp
```

Or use it directly with npx:

```sh
npx ownrig-mcp
```

## Tools
| Tool | Description |
|------|-------------|
| query_model | Get details for a specific AI model (VRAM, formats, use cases) |
| query_device | Get specs for a GPU or Apple Silicon device |
| query_compatibility | Check if a model runs on a device (tokens/sec, VRAM fit) |
| list_models | List models with optional use_case / family filter |
| list_devices | List devices with optional type / min_vram filter |
| list_builds | List curated builds with optional tier / profile filter |
| list_systems | List ready-to-buy machines (Mac, Dell, ASUS) with optional brand / type filter |
| query_system | Get full details for a specific ready-to-buy machine |
| recommend_build | Full recommendation engine — 3 paths (model→hw, workflow→hw, hw→models) |
| find_models_for_device | "What can I run on my RTX 4090?" |
| find_devices_for_model | "What GPU do I need for Llama 3.1 70B?" |
| list_workflows | List workflow profiles (tools → hardware requirements) |
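
Under the hood, each tool is invoked through a standard MCP `tools/call` JSON-RPC request over the stdio transport. As a rough sketch (the argument names below are illustrative assumptions, not the package's documented schema), a compatibility check might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query_compatibility",
    "arguments": {
      "model": "Llama 3.1 70B",
      "device": "RTX 4090"
    }
  }
}
```

In practice your MCP client (Claude Desktop, Cursor, etc.) handles this framing for you; you only ask the question in natural language.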
## Usage with Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "ownrig": {
      "command": "npx",
      "args": ["-y", "ownrig-mcp"]
    }
  }
}
```

## Usage with Cursor
Add to `.cursor/mcp.json` in your project:

```json
{
  "mcpServers": {
    "ownrig": {
      "command": "npx",
      "args": ["-y", "ownrig-mcp"]
    }
  }
}
```

## Usage from source (development)
If you have the OwnRig repo cloned:

```sh
# From project root
npm install
npm run generate:rec-data
npm run mcp
```

The `mcp` script builds a self-contained bundle via esbuild (resolving all `@/` path aliases) and then runs it. Running `tsx mcp-server/index.ts` directly does not work, because the engine uses TypeScript path aliases that tsx cannot resolve transitively across module boundaries.
For your MCP client config, point to the built bundle:
```json
{
  "mcpServers": {
    "ownrig": {
      "command": "node",
      "args": ["mcp-server/dist/index.mjs"],
      "cwd": "/path/to/ownrig"
    }
  }
}
```

## Example queries
Once connected, ask your AI assistant:
- "What GPU do I need to run Llama 3.1 70B locally?"
- "Can an RTX 4090 run Qwen 3 32B?"
- "Recommend a build for running AI coding tools with Cursor"
- "What models can I run on my M4 Max MacBook Pro?"
- "Compare the Mac Studio M4 Ultra vs a custom build for AI"
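
Behind each of these questions, the client selects a tool and fills in its arguments. For the build recommendation above, a raw call along the workflow→hardware path of `recommend_build` might look roughly like this (the `path` and `workflow` argument names are hypothetical, shown only to illustrate the shape of a request):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "recommend_build",
    "arguments": {
      "path": "workflow",
      "workflow": "ai-coding"
    }
  }
}
```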
## Data
This package includes a snapshot of OwnRig's verified hardware compatibility data. The data is updated with each package release.
- 50 AI models with VRAM requirements per quantization level
- 25 GPUs and Apple Silicon devices with specs and pricing
- 14 curated PC builds with component lists and benchmarks
- 9 ready-to-buy machines (Mac, Dell, ASUS, NVIDIA)
- 663 model × device × quantization compatibility entries
- 7 workflow profiles mapping AI tools to hardware needs
Source: ownrig.com | License: MIT
