@tevihq/qc-mcp v1.4.0
MCP Server for Tevi QC - Test Case Management
Tevi QC MCP Server
MCP (Model Context Protocol) server for Tevi QC - allows Claude to interact directly with test case management, element registry, test data, and analytics.
Prerequisites
- Node.js >= 18 (recommended: use nvm)
- Git
- Claude Code CLI or Claude Desktop
Installation
Step 1: Clone and build
git clone https://github.com/Tevi-Space/tevi-qc-mcp.git
cd tevi-qc-mcp
npm install
npm run build
Step 2: Configure API key
Get your API key from https://qc.tevi.dev (Login > Settings > API Key), then create a .env file:
cp .env.example .env
Edit .env and fill in your API key:
TEVI_QC_API_URL=https://qc.tevi.dev/api
TEVI_QC_API_KEY=your-api-key-here
TEVI_QC_UI_URL=https://qc.tevi.dev
Step 3: Register MCP server
Choose the setup that matches your environment:
Option A: Claude Code CLI
Run in Claude Code:
/mcp
Then add a new stdio server with:
- Name: tevi-qc
- Command: node
- Args: /absolute/path/to/tevi-qc-mcp/dist/index.js
Or manually add to ~/.claude/settings.json:
{
"mcpServers": {
"tevi-qc": {
"command": "node",
"args": ["/absolute/path/to/tevi-qc-mcp/dist/index.js"]
}
}
}
Option B: VSCode Extension (Claude Code in VSCode)
Create a .mcp.json file in your project root (the directory you open in VSCode):
{
"mcpServers": {
"tevi-qc": {
"command": "node",
"args": ["/absolute/path/to/tevi-qc-mcp/dist/index.js"]
}
}
}
Tip: Use the full absolute path to node if VSCode doesn't pick up your nvm version, e.g. /Users/you/.nvm/versions/node/v22.x.x/bin/node
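A quick way to get that absolute path is to ask the shell where the active node binary lives:

```shell
# Print the absolute path of the node binary currently on PATH
# (with nvm, this points into ~/.nvm/versions/node/...).
command -v node
```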
Option C: Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{
"mcpServers": {
"tevi-qc": {
"command": "node",
"args": ["/absolute/path/to/tevi-qc-mcp/dist/index.js"]
}
}
}
Note: No need to set env in the config - the server reads credentials from the .env file automatically.
Step 4: Restart and verify
- CLI: Restart Claude Code, then run /mcp
- VSCode: Cmd+Shift+P → Reload Window, then check MCP status
- Desktop: Restart the app
You should see tevi-qc listed with all tools.
Step 5: Setup skills (optional)
To install Claude Code skills for test case creation workflows:
cd your-project-directory
npx tevi-qc-setup-skills
This creates .claude/skills/create-testcase/ in your project with conventions, templates, and examples.
Step 6: Install companion Feature Map MCP (strongly recommended)
When creating or updating test cases, Claude needs to know screen names, element IDs, and navigation flows. The @tevihq/feature-map-mcp companion server exposes this data directly as MCP tools — Claude queries it to pick real element IDs, verify screen names, and cross-check navigation.
Without it: Claude makes up placeholder element IDs, guesses screen names, and you spend review time correcting them. With it: Element IDs, screen names, and navigation come from source-of-truth — less back-and-forth.
Option A — Install via npx (no clone required)
Add to your MCP config (~/.claude/settings.json or .mcp.json):
{
"mcpServers": {
"tevi-qc": { "command": "node", "args": ["/absolute/path/to/tevi-qc-mcp/dist/index.js"] },
"tevi-feature-map": {
"command": "npx",
"args": ["-y", "@tevihq/feature-map-mcp"]
}
}
}
@tevihq/feature-map-mcp is a private npm package. You must be authenticated (npm login with an account that has access to the @tevihq scope) for npx to resolve it. Verify with npm whoami.
Option B — Clone and build (no npm auth needed)
git clone https://github.com/Tevi-Space/tevi-feature-map.git
cd tevi-feature-map
npm install
npm run build
Then point the MCP config at the built binary:
{
"mcpServers": {
"tevi-feature-map": {
"command": "node",
"args": ["/absolute/path/to/tevi-feature-map/dist/index.js"]
}
}
}
Option B is preferred if you want to pin to a specific revision or contribute screens/elements back to the repo.
Verify installation
Restart Claude Code → run /mcp → confirm tevi-feature-map is listed. Available tools:
| Tool | Purpose |
|------|---------|
| list_screens | All screens across app + web |
| get_screen | Detail of a specific screen (elements, screenshots, navigation) |
| search_elements | Find elements by name / resource ID / type across all screens |
| get_feature_summary | Screens/elements aggregated per feature |
| get_navigation_flow | BFS path between two screens |
Features
- Test Case Management - Create, read, update, delete, and restore test cases with full lifecycle
- Test Run Orchestration - Group test cases into runs, track results, send Slack notifications
- Element Registry (POM) - Register and manage UI elements following Page Object Model conventions
- Test Data Registry - Store and manage test data across environments (dev, staging, production)
- Analytics - Coverage analysis, treemap visualization, risk matrix, quality metrics
- Multi-platform - Android, iOS, Web with platform-specific handling
- Soft Delete & Restore - Safe deletion with recovery support
- Claude Code Skills - Built-in skills for test case creation, review, and automation workflows
Available Tools
Test Case Management
| Tool | Description |
|------|-------------|
| list_tags | Get all available tags with their test case counts |
| list_test_cases | List/search test cases with filters (tags, platform, status, priority) |
| get_test_case | Get detailed test case by ID |
| create_test_case | Create new test case with steps, elements, and metadata |
| update_test_case | Update existing test case fields |
| delete_test_case | Soft delete a test case |
| restore_test_case | Restore a previously deleted test case (admin) |
Test Run Management
| Tool | Description |
|------|-------------|
| list_test_runs | List test runs with filters (platform, status, creator) |
| get_test_run | Get test run details with all cases and results |
| create_test_run | Create new test run from selected test cases |
| update_test_run | Update test run (name, test cases, note) |
| delete_test_run | Soft delete a test run |
| restore_test_run | Restore a previously deleted test run (admin) |
Test Execution
| Tool | Description |
|------|-------------|
| update_test_case_status | Mark test case as passed/failed (auto or manual) in a run; optional failed_page_source (XML/HTML at failure) auto-uploaded to R2 |
| get_test_case_in_run | Get test case details and history within a test run |
Element Registry (Page Object Model)
| Tool | Description |
|------|-------------|
| list_elements | List UI elements with filters (page, section, type) |
| list_element_pages | Get all pages in Element Registry with counts |
| get_element | Get element details by ID |
| get_element_usage | Find which test cases use a specific element |
| create_elements | Create one or more elements (batch support) |
| update_element | Update element description |
| delete_element | Delete element from registry |
Test Data Registry
| Tool | Description |
|------|-------------|
| list_test_data | List test data entries (filter by category, environment, tags) |
| get_test_data | Get test data entry by ID |
| create_test_data | Create new test data entry |
| update_test_data | Update existing test data entry |
| delete_test_data | Delete test data entry |
Notifications & Analytics
| Tool | Description |
|------|-------------|
| send_slack_summary | Send test run summary to Slack thread |
| get_analysis | Get aggregated analytics (coverage, risk matrix, heatmap, trends) |
| get_treemap | Get feature coverage treemap from TC name hierarchy |
Configuration
Environment Variables
| Variable | Required | Description | Default |
|----------|----------|-------------|---------|
| TEVI_QC_API_KEY | Yes (for write ops) | Your API key from https://qc.tevi.dev | - |
| TEVI_QC_API_URL | No | API base URL | https://qc.tevi.dev/api |
| TEVI_QC_UI_URL | No | Frontend URL (for generating links) | https://qc.tevi.dev |
Configuration is resolved in this order (first found wins):
- Environment variables (set via mcpServers.env in Claude Code settings)
- .env file in the project root
- Default values
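That precedence can be sketched in shell (illustrative only; the server's actual loader may differ, and https://example.com/api is a placeholder, not a real Tevi endpoint):

```shell
# Illustrative sketch of the lookup order; the server's actual loader may differ.
cd "$(mktemp -d)"
printf 'TEVI_QC_API_URL=https://example.com/api\n' > .env

resolve() {
  var="$1"; default="$2"
  eval "val=\${$var:-}"                          # 1. environment variable wins
  [ -n "$val" ] || val=$(grep "^$var=" .env 2>/dev/null | head -n 1 | cut -d= -f2-)  # 2. .env file
  printf '%s\n' "${val:-$default}"               # 3. built-in default
}

unset TEVI_QC_API_URL
resolve TEVI_QC_API_URL "https://qc.tevi.dev/api"   # prints the .env value
export TEVI_QC_UI_URL="https://qc.local"
resolve TEVI_QC_UI_URL "https://qc.tevi.dev"        # prints the environment value
```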
Auto Version Check
The server includes a pre-tool hook that checks for updates before each tool call. If the local repo is behind origin/main, tools will be blocked until you update:
cd /path/to/tevi-qc-mcp
git pull
npm run build
Usage Examples
Once configured, you can ask Claude:
- "List all test cases with tag regression"
- "Show me test run #123"
- "Create a test run for Android with TC001, TC002, TC003"
- "Mark TC001 as passed in test run #123"
- "Mark TC001 as Failed by Auto in test run #123 with this page source: <XML>" — uploads the XML to R2 and records its URL on the run
- "What are the failed test cases in the latest test run?"
- "Create a test case for verifying user can flip camera during live"
- "List all elements on the login page"
- "Show me the test coverage analysis for the live feature"
- "Create test data for staging account credentials"
Claude Code Skills
This repository includes Claude Code skills in .claude/skills/ for structured QC workflows.
Available Skills
| Skill | Description |
|-------|-------------|
| create-testcase | Create test cases following Tevi QC conventions with archetype-based design |
| update-testcase | Update existing test cases with validation |
| review-testcase | Validate test cases for format, conventions, and navigation flows |
| analyze-testrun | Analyze test run results and provide insights |
| create-testrun | Create test runs with filtered test cases |
| create-element-task | Analyze screenshots, register elements, create Linear tasks for accessibility IDs |
| map-elements | Map UI elements from Figma designs to test case steps |
| create-auto-task | Create Linear automation tickets from "Ready To Auto" test cases |
| github | Guidelines for GitHub PRs and Linear ticket linking |
Skill Files Structure
.claude/
├── skills/
│ ├── create-testcase/
│ │ ├── SKILL.md
│ │ ├── reference.md
│ │ ├── examples.md
│ │ ├── templates/
│ │ │ └── testcase.md
│ │ ├── archetypes/
│ │ │ ├── ui-display.md
│ │ │ ├── crud-feature.md
│ │ │ ├── user-action.md
│ │ │ ├── data-flow.md
│ │ │ ├── payment.md
│ │ │ ├── auth.md
│ │ │ └── localization.md
│ │ └── decision-rules/
│ │ ├── README.md
│ │ ├── archetype-selector.md
│ │ └── coverage-checklist.md
│ ├── update-testcase/
│ ├── review-testcase/
│ ├── analyze-testrun/
│ ├── create-testrun/
│ ├── create-element-task/
│ ├── map-elements/
│ ├── create-auto-task/
│ └── github/
Test Case Conventions
Naming Convention
[Feature][Sub-feature][Action] Description
Examples:
[Live][Host][Camera] Verify user flip camera successfully
[Messages][Receiver][Post] Receive message a post
[Post][Interaction Fee][React][Reply] Verify user can react and reply
Step Format
step: [Screen] Action description
expected: [Screen][Verify] Verification description
Element Mapping
Test case steps support element mapping for automation:
{
"step": "[Home] Tap \"Camera\" icon",
"expected": "[Live][Verify] This screen is displayed",
"step_elements": ["home_header_btn_camera"],
"expected_elements": ["live_header_txt_title"]
}
Element IDs follow POM naming: {page}_{section}_{type}_{name}
Types: btn, inp, txt, lbl, img, lnk, chk, rad, sel, mod, ico, tog, tab
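As a quick sanity check, an element ID can be matched against the POM pattern with a regex (illustrative only; the server may enforce stricter rules):

```shell
# Check element IDs against the POM pattern {page}_{section}_{type}_{name}
# (illustrative regex; the server may enforce stricter rules).
pom='^[a-z0-9]+_[a-z0-9]+_(btn|inp|txt|lbl|img|lnk|chk|rad|sel|mod|ico|tog|tab)_[a-z0-9_]+$'
for id in home_header_btn_camera live_header_txt_title homeCamera; do
  if printf '%s\n' "$id" | grep -Eq "$pom"; then
    echo "$id: ok"
  else
    echo "$id: invalid"
  fi
done
```

Here home_header_btn_camera and live_header_txt_title pass, while homeCamera is flagged as invalid.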
For complete documentation, see CLAUDE.md.
Development
# Install dependencies
npm install
# Build
npm run build
# Watch mode (rebuild on changes)
npm run dev
# Test with MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js