@democratize-quality/mcp-server

v1.2.1

Published

MCP Server for democratizing quality through browser automation and comprehensive API testing capabilities

Readme

🎯 Democratize Quality MCP Server

Intelligent API Testing Made Simple - A comprehensive Model Context Protocol (MCP) server that brings professional-grade API testing capabilities to everyone, from QA engineers to developers.

Transform your API testing workflow with AI-powered agents that plan, generate, and heal your tests automatically.


📦 Quick Installation

Option 1: Skills Installation (Recommended)

The fastest way to get started - Install with intelligent testing skills that work across multiple AI coding tools:

npx @democratize-quality/mcp-server@latest --agents

What this does:

  • ✅ Installs the MCP server
  • ✅ Sets up 4 AI-powered testing skills
  • ✅ Configures VS Code integration automatically
  • ✅ Creates project folders (.agents/skills/, .github/skills/, .claude/skills/, .vscode/)
  • ✅ Works with: GitHub Copilot, Codex CLI, Claude Code, Cursor, and 10+ other tools

🎥 Video Tutorial

Watch: API Test Skills Walkthrough

▶️ Watch the complete walkthrough - Learn how to use the API Test Skills to build comprehensive test coverage for REST and GraphQL APIs.

Prerequisites:

  • Node.js 14+ installed
  • One of these AI coding tools:
    • GitHub Copilot (VS Code or CLI)
    • Codex CLI (OpenAI)
    • Claude Code (Anthropic)
    • Cursor, Roo Code, or other Agent Skills-compatible tools
  • Active internet connection

Option 2: Manual Installation

For Claude Desktop or other MCP clients:

Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "democratize-quality": {
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Global installation option:

npm install -g @democratize-quality/mcp-server

# Then use anywhere
democratize-quality-mcp --help

CLI Integration Options

Using Codex CLI Commands

Add the MCP server using the Codex CLI:

codex mcp add democratize-quality -- npx @democratize-quality/mcp-server

With environment variables:

codex mcp add democratize-quality \
  --env NODE_ENV=production \
  --env OUTPUT_DIR=./api-test-reports \
  -- npx @democratize-quality/mcp-server

Using config.toml

Alternatively, edit ~/.codex/config.toml (or .codex/config.toml for project-scoped configuration):

[mcp_servers.democratize-quality]
command = "npx"
args = ["@democratize-quality/mcp-server"]

[mcp_servers.democratize-quality.env]
NODE_ENV = "production"
OUTPUT_DIR = "./api-test-reports"

Verify Installation

In the Codex TUI, use /mcp to see your active MCP servers.

📚 Learn more: Codex MCP Documentation

Using Copilot CLI Commands

Add the MCP server using the Copilot CLI:

/mcp add democratize-quality stdio npx @democratize-quality/mcp-server

Using .copilot/mcp-config.json

For project-scoped configuration, create .copilot/mcp-config.json in your project root:

{
  "mcpServers": {
    "democratize-quality": {
      "type": "local",
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "tools": ["*"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Exclude Unnecessary Files

Add to your .gitignore:

.copilot/logs/
.copilot/config.json

For DevContainers (Optional)

Configure the XDG_CONFIG_HOME environment variable to use project-scoped configuration:

# In postCreateCommand.sh or similar
GH_CLI_CONFIG_DIR="/workspaces/your-repo"

if ! grep -q 'export XDG_CONFIG_HOME=' ~/.zshrc; then
    echo "export XDG_CONFIG_HOME=\"$GH_CLI_CONFIG_DIR\"" >> ~/.zshrc
fi

Verify Installation

Use /mcp show to confirm the server is configured:

/mcp show

📚 Learn more: Managing GitHub Copilot CLI MCP Server Configuration

Using Claude Code CLI Commands

Add the MCP server using the Claude Code CLI:

claude mcp add --transport stdio democratize-quality -- npx @democratize-quality/mcp-server

With environment variables:

claude mcp add --transport stdio \
  --env NODE_ENV=production \
  --env OUTPUT_DIR=./api-test-reports \
  democratize-quality -- npx @democratize-quality/mcp-server

Using .mcp.json (Project Scope)

Create .mcp.json in your project root for team-shared configuration:

claude mcp add --scope project --transport stdio democratize-quality -- npx @democratize-quality/mcp-server

This creates a .mcp.json file:

{
  "mcpServers": {
    "democratize-quality": {
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Configuration Scopes

  • Local scope (default): --scope local - Personal configuration stored in ~/.claude.json
  • Project scope: --scope project - Shared configuration in .mcp.json (checked into version control)
  • User scope: --scope user - Available across all your projects in ~/.claude.json

Managing Your Servers

# List all configured servers
claude mcp list

# Get details for a specific server
claude mcp get democratize-quality

# Remove a server
claude mcp remove democratize-quality

# Check server status in Claude Code
/mcp

Verify Installation

In Claude Code, use /mcp to see your active MCP servers and authenticate if needed.

📚 Learn more: Claude Code MCP Documentation


🛠️ Available Tools

This MCP server provides 7 powerful tools for comprehensive API testing:

🤖 AI-Powered Testing Tools

| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_planner | Analyzes API schemas (OpenAPI/Swagger, GraphQL) and creates comprehensive test plans with realistic sample data | Starting a new API testing project, documenting API behavior, validating API endpoints |
| api_generator | Generates executable tests (Jest, Playwright, Postman) from test plans using AI-powered code generation | Converting test plans to runnable code in TypeScript or JavaScript |
| api_healer | Debugs and automatically fixes failing tests by analyzing errors and applying healing strategies | Tests break after API changes, schema updates, or authentication issues |
| api_project_setup | Detects project configuration (framework and language) for smart test generation | Before using api_generator - auto-detects Playwright/Jest and TypeScript/JavaScript |

🔧 Core API Testing Tools

| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_request | Executes HTTP requests with validation and request chaining | Making individual API calls, testing specific endpoints, chaining multiple requests |
| api_session_status | Queries test session status and logs | Checking progress of test sequences, viewing request history |
| api_session_report | Generates comprehensive HTML test reports | Creating shareable test documentation with detailed analytics |

Key Capabilities:

  • Smart Validation: Automatic "expected vs actual" comparison with detailed failure messages
  • 🔗 Request Chaining: Extract data from one response and use in subsequent requests
  • 📊 Session Tracking: Monitor multi-step API workflows across multiple requests
  • 📈 Visual Reports: Interactive HTML reports with timing analysis and validation results
  • 🎯 All HTTP Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD
  • 🔒 Authentication: Support for Bearer tokens, API keys, custom headers
  • 🎨 Realistic Sample Data: Auto-generates context-aware test data (names, emails, dates, etc.)
  • 🔍 Optional Validation: Validate endpoints by making actual API calls with response time metrics
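To make the "expected vs actual" comparison concrete, here is a minimal illustrative sketch (not the server's actual validation code): the expectation is treated as a subset, so extra fields in the response are allowed, and every mismatch is collected as a readable failure message.

```javascript
// Illustrative sketch only -- not the server's internal implementation.
// Compares an "expect" object against an actual response, treating the
// expectation as a subset: extra fields in the response are allowed.
function validateResponse(expected, actual, path = "") {
  const failures = [];
  for (const [key, want] of Object.entries(expected)) {
    const got = actual?.[key];
    const here = path ? `${path}.${key}` : key;
    if (want !== null && typeof want === "object") {
      // Recurse into nested expectations (e.g. the "body" object).
      failures.push(...validateResponse(want, got, here));
    } else if (got !== want) {
      failures.push(`${here}: expected ${JSON.stringify(want)}, got ${JSON.stringify(got)}`);
    }
  }
  return failures;
}

const failures = validateResponse(
  { status: 200, body: { name: "John Doe" } },
  { status: 201, body: { name: "John Doe", id: 7 } }
);
console.log(failures); // ["status: expected 200, got 201"]
```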

🌐 Agent Skills (Universal AI Capabilities)

When you install with --agents, you get 4 intelligent testing skills that work across all major AI coding tools:

1. 📋 /api-planning - The Strategist

What it does: Analyzes your API and creates comprehensive test strategies

Best for:

  • Starting new testing projects
  • Documenting API behavior
  • Creating test scenarios for complex workflows
  • Security and edge case planning

Example: Analyze a Real API

1. In your AI coding tool, type /api-planning or mention API planning in your prompt

2. Analyze this API and create a comprehensive test plan: 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json

3. Skill: I'll call the API planner once to analyze the OpenAPI schema at the provided URL and generate a comprehensive test plan with realistic sample data plus live validation of 5 endpoints; expected outcome: a saved test plan in ./api-test-reports/fakerest-test-plan.md and a validation summary. I'll run the planner now. 

4. When prompted, allow the tool call "Run api_planner democratize-quality (MCP Server)"

What you get:

  • Comprehensive markdown test plan with 40+ test scenarios
  • Realistic sample requests for each endpoint
  • Expected response structures
  • Error scenarios and edge cases
  • Ready for test generation

2. 🔧 /test-generation - The Builder

What it does: Converts test plans into executable code

Best for:

  • Generating Jest test suites
  • Creating Playwright automation scripts
  • Building Postman collections

Example: Generate Tests from Plan

1. Once you have a test plan, type /test-generation or mention test generation.
   Open the test plan Markdown file (so it's in context) and enter:
   Create Playwright tests in TypeScript for the Books API section

2. Skill: I'm going to detect the project's test framework and language (a required first step) so I can generate Playwright TypeScript tests in the correct layout; expected outcome: a detection result with the suggested framework and language. You'll then be asked to allow the tool call "Run api_project_setup democratize-quality (MCP Server)", which captures the project info: in an empty project it prompts you to pick a framework and language, while in an existing Playwright TypeScript project it generates matching Playwright tests.

3. After you answer, it invokes the tool "Run api_generator democratize-quality (MCP Server)", which generates your tests; click "Allow" when prompted

Created files:
✅ Test file with Books CRUD operations
✅ On-screen instructions for running the tests

Tests include:
- ✅ GET /api/v1/Books - List all books
- ✅ GET /api/v1/Books/{id} - Get book by ID
- ✅ POST /api/v1/Books - Create new book
- ✅ PUT /api/v1/Books/{id} - Update book
- ✅ DELETE /api/v1/Books/{id} - Delete book
- ✅ Error handling for invalid IDs
- ✅ Validation for required fields

Example: Generate Postman Collection

You: /test-generation create a Postman collection from the entire test plan

Skill: Creating Postman collection for Fake REST API...

Generated:
✅ postman/FakeRestAPI.postman_collection.json
✅ postman/environment.json (with base URL variables)

Sample Collection includes:
- 📁 Books (5 requests)
- 📁 Users (4 requests)
- 📁 Authors (4 requests)
- 📁 Activities (3 requests)
- 📁 CoverPhotos (4 requests)

Import into Postman and start testing!

3. ⚡ /test-execution - The Runner

What it does: Executes API tests directly from test plans without code

Best for:

  • Running tests immediately after planning
  • Quick API validation without writing code
  • Executing specific test sections
  • Generating comprehensive HTML test reports

Example: Execute Tests from Plan

1. After creating a test plan, type /test-execution or mention test execution
   
You: Execute tests from section 1 of the test plan

2. Skill: I'll execute the tests from section 1 of your test plan using the api_request tool with a unique session ID. After execution, I'll generate an HTML report with all results.

3. When prompted, allow the tool call "Run api_request democratize-quality (MCP Server)"

Execution Results:
✅ GET /api/v1/Books - List all books (200 OK, 145ms)
✅ GET /api/v1/Books/1 - Get book by ID (200 OK, 98ms)
✅ POST /api/v1/Books - Create new book (201 Created, 203ms)
✅ PUT /api/v1/Books/1 - Update book (200 OK, 156ms)
✅ DELETE /api/v1/Books/1 - Delete book (204 No Content, 87ms)

Test Summary:
- Total: 5 tests
- Passed: 5 (100%)
- Failed: 0 (0%)
- Total Time: 689ms

📊 HTML Report: ./api-test-reports/test-execution-1739597400000-report.html

Example: Execute Full Test Plan

You: /test-execution run all tests from the test plan and generate a report

Skill: Executing entire test plan...

Progress:
📋 Section 1: Books API (5 tests)
  ✅ GET /api/v1/Books
  ✅ GET /api/v1/Books/{id}
  ✅ POST /api/v1/Books
  ✅ PUT /api/v1/Books/{id}
  ✅ DELETE /api/v1/Books/{id}

📋 Section 2: Authors API (4 tests)
  ✅ GET /api/v1/Authors
  ✅ GET /api/v1/Authors/{id}
  ✅ POST /api/v1/Authors
  ✅ DELETE /api/v1/Authors/{id}

📋 Section 3: Users API (4 tests)  
  ✅ GET /api/v1/Users
  ✅ POST /api/v1/Users
  ❌ PUT /api/v1/Users/{id} - 404 Not Found
  ✅ DELETE /api/v1/Users/{id}

Final Summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total: 13 tests
Passed: 12 (92.3%)
Failed: 1 (7.7%)
Total Time: 1.8s
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📊 Comprehensive HTML Report Generated:
./api-test-reports/full-plan-execution-report.html

Open in browser for:
- Detailed request/response logs
- Timing analysis and charts
- Visual pass/fail indicators
- Validation results

Example: Execute with CRUD Workflow

You: /test-execution run the Books CRUD tests with request chaining

Skill: Executing CRUD workflow with request chaining...

Step 1: CREATE book
  ✅ POST /api/v1/Books
  📌 Extracted: bookId = 42

Step 2: READ created book
  ✅ GET /api/v1/Books/42
  📌 Verified: Book exists with correct data

Step 3: UPDATE book
  ✅ PUT /api/v1/Books/42
  📌 Verified: Book updated successfully

Step 4: DELETE book
  ✅ DELETE /api/v1/Books/42
  📌 Verified: Book deleted (204 No Content)

Workflow Complete: All 4 operations successful! ✅

📊 Report: ./api-test-reports/crud-workflow-report.html

4. 🔬 /test-healing - The Fixer

What it does: Automatically diagnoses and repairs failing tests

Best for:

  • Fixing broken tests after API changes
  • Updating endpoint URLs
  • Resolving authentication issues
  • Schema mismatch corrections

Example: Fix Failing Tests

You: /test-healing the Books tests are failing with 404 errors. 
Please analyze and fix them.

Skill: Analyzing test failures in tests/books.spec.ts...

Issues found:
❌ Endpoint changed: /api/Books → /api/v1/Books
❌ Response schema updated: added "pageCount" field
❌ ID format changed: number → string

Applying fixes:
✅ Updated all endpoint URLs to include /v1/ prefix
✅ Updated response assertions to include pageCount
✅ Fixed ID type in test data (42 → "42")
✅ Backed up original file to tests/books.spec.ts.backup

Re-running tests... ✅ All 15 tests now passing!

Example: Fix Schema Mismatches

You: /test-healing fix assertion errors in tests/users.spec.ts

Skill: Analyzing assertion failures...

Found 3 schema mismatches:
❌ Expected userName, API returns username (camelCase issue)
❌ Password field removed from response (security update)
❌ New field added: profileImageUrl

Healing actions:
✅ Updated assertions: userName → username
✅ Removed password field validation
✅ Added profileImageUrl to expected response
✅ Updated TypeScript interfaces

Tests healed successfully! ✅ 12/12 passing

💡 Usage Examples - E2E Testing Scenarios

Scenario 1: Testing the Fake REST API (Complete Workflow)

Real-world example using: https://fakerestapi.azurewebsites.net

Step 1: Plan your tests

/api-planning analyze the OpenAPI spec at 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json 
and create a comprehensive test plan focusing on Books and Authors endpoints

What happens:

  • ✅ Skill analyzes the Fake REST API schema
  • ✅ Identifies 5 resource types (Books, Authors, Users, Activities, CoverPhotos)
  • ✅ Creates 40+ test scenarios covering:
    • Happy paths (GET all books, GET book by ID, CREATE book)
    • Error cases (404 for invalid IDs, 400 for bad data)
    • Edge cases (empty lists, ID boundaries)
    • Data validation (response schema checks)
  • ✅ Generates realistic sample data:
    {
      "id": 1,
      "title": "The Great Gatsby",
      "description": "A classic American novel",
      "pageCount": 180,
      "excerpt": "In my younger and more vulnerable years...",
      "publishDate": "1925-04-10T00:00:00Z"
    }

Step 2: Generate executable tests

/test-generation create Playwright tests in TypeScript from the test plan, 
focusing on the Books API section

What you get:

  • tests/books.spec.ts - Complete Books CRUD test suite
    import { test, expect } from '@playwright/test';
      
    test.describe('Books API', () => {
      test('GET /api/v1/Books - should return all books', async ({ request }) => {
        const response = await request.get('https://fakerestapi.azurewebsites.net/api/v1/Books');
        expect(response.ok()).toBeTruthy();
        expect(response.status()).toBe(200);
        const books = await response.json();
        expect(Array.isArray(books)).toBeTruthy();
      });
        
      test('POST /api/v1/Books - should create a new book', async ({ request }) => {
        const newBook = {
          id: 201,
          title: "Test Book",
          pageCount: 350
        };
        const response = await request.post('https://fakerestapi.azurewebsites.net/api/v1/Books', {
          data: newBook
        });
        expect(response.status()).toBe(200);
      });
    });

Step 3: Run and heal tests

# Run tests
npx playwright test tests/books.spec.ts

# If any tests fail due to API changes:
/test-healing fix the failing tests in tests/books.spec.ts

What healer fixes:

  • ✅ Endpoint URL updates (if API version changes)
  • ✅ Response schema corrections (new/removed fields)
  • ✅ Data type adjustments (string vs number IDs)
  • ✅ Status code updates

Scenario 2: Quick API Documentation with Postman

Goal: Generate a ready-to-use Postman collection for the Fake REST API

Step 1:
/api-planning create test plan from 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
with validation enabled to test actual endpoints

Step 2:
/test-generation create Postman collection from the test plan 
with all endpoints and example requests

Step 3: 
Import the generated collection into Postman and start testing!

What you get:

  • 📦 postman/FakeRestAPI.postman_collection.json - Complete collection
  • 🌍 postman/environment.json - Environment variables
  • ✅ 20+ pre-configured requests across all resource types
  • 📝 Example request bodies with realistic data
  • ✔️ Response validation tests included

Result: Professional Postman collection ready to share with your team

Scenario 3: Quick API Validation Without Code

Goal: Execute API tests immediately without generating code files

Step 1:
/api-planning create test plan from 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
focusing on Books and Users endpoints

Step 2:
/test-execution run all tests from the test plan and generate HTML report

Step 3:
Open the generated HTML report to see detailed results!

What you get:

  • ⚡ Instant test execution without writing code
  • 📊 Interactive HTML report with:
    • Pass/fail indicators for each endpoint
    • Request/response details
    • Response timing analysis
    • Visual charts and statistics
  • ✅ Quick validation of API before writing automated tests
  • 🔄 Easy to re-run for regression testing

Use cases:

  • Quick API health checks
  • Validating API after deployments
  • API exploration and documentation
  • Testing APIs before committing to test code
  • Sharing test results with non-technical stakeholders

Alternative workflow with specific sections:

You: /test-execution execute section 1 (Books API) from the test plan

Result:
✅ 5/5 tests passed
📊 Report: ./api-test-reports/books-section-report.html

🔧 Calling Individual Tools

When to Use Individual Tools

Use skills (/api-planning, /test-generation, /test-execution, /test-healing) when:

  • You want AI assistance and recommendations
  • You're working on complex, multi-step workflows
  • You need explanations and best practices

Use individual tools when:

  • You need precise control over parameters
  • You're automating tests in CI/CD
  • You're integrating with other tools
  • You want to script repetitive tasks

Tool 1: api_planner

Purpose: Analyze API schemas and generate comprehensive test plans with realistic sample data

Basic usage:

// In Claude Desktop or MCP client
{
  "tool": "api_planner",
  "parameters": {
    "schemaUrl": "https://api.example.com/swagger.json",
    "schemaType": "openapi",
    "outputPath": "./test-plan.md"
  }
}

With endpoint validation:

{
  "tool": "api_planner",
  "parameters": {
    "schemaUrl": "https://petstore3.swagger.io/api/v3/openapi.json",
    "schemaType": "openapi",
    "outputPath": "./api-test-plan.md",
    "includeAuth": true,
    "includeSecurity": true,
    "includeErrorHandling": true,
    "testCategories": ["functional", "security", "edge-cases"],
    "validateEndpoints": true,
    "validationSampleSize": 3,
    "validationTimeout": 5000
  }
}

Parameters explained:

  • schemaUrl - URL to fetch API schema (OpenAPI/Swagger, GraphQL introspection endpoint)
  • schemaContent - Direct schema content as JSON/YAML string (alternative to schemaUrl)
  • schemaType - Type of schema: openapi, swagger, graphql, auto (default: auto)
  • apiBaseUrl - Base URL of the API (overrides schema baseUrl if provided)
  • includeAuth - Include authentication testing scenarios (default: true)
  • includeSecurity - Include security testing scenarios (default: true)
  • includeErrorHandling - Include error handling scenarios (default: true)
  • outputPath - File path to save test plan (default: ./api-test-plan.md)
  • testCategories - Types of tests: functional, security, performance, integration, edge-cases
  • validateEndpoints - Make actual API calls to validate endpoints (default: false)
  • validationSampleSize - Number of endpoints to validate, use -1 for all (default: 3)
  • validationTimeout - Timeout per validation request in ms (default: 5000)

Tool 2: api_generator

Purpose: Generate executable tests (Playwright, Jest, Postman) from test plans using AI

Generate Playwright tests:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "playwright",
    "outputDir": "./tests",
    "includeSetup": true,
    "language": "typescript"
  }
}

Generate Jest tests:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "jest",
    "outputDir": "./tests",
    "testFramework": "jest",
    "baseUrl": "https://api.example.com"
  }
}

Generate Postman collection:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "postman",
    "outputDir": "./postman",
    "includeAuth": true
  }
}

Generate all formats:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "all",
    "outputDir": "./tests",
    "language": "typescript"
  }
}

Parameters explained:

  • testPlanPath - Path to test plan markdown file
  • testPlanContent - Direct test plan content as markdown (alternative to testPlanPath)
  • outputFormat - Format: playwright, postman, jest, all (default: all)
  • outputDir - Directory to save generated tests (default: ./tests)
  • sessionId - Session ID for tracking generated tests
  • includeAuth - Include authentication setup (default: true)
  • includeSetup - Include test setup/teardown code (default: true)
  • testFramework - Framework: jest, mocha, playwright-test
  • baseUrl - Base URL for API (optional)
  • language - Language: javascript, typescript (default: typescript)

Tool 3: api_healer

Purpose: Debug and automatically fix failing API tests

Fix specific test file:

{
  "tool": "api_healer",
  "parameters": {
    "testPath": "./tests/auth.test.js",
    "testType": "auto",
    "autoFix": true,
    "backupOriginal": true
  }
}

Fix multiple test files:

{
  "tool": "api_healer",
  "parameters": {
    "testFiles": ["./tests/auth.test.js", "./tests/users.test.js"],
    "testType": "playwright",
    "healingStrategies": ["schema-update", "endpoint-fix", "auth-repair"]
  }
}

Analysis only (no fixes):

{
  "tool": "api_healer",
  "parameters": {
    "testPath": "./tests/api.test.js",
    "analysisOnly": true
  }
}

Parameters explained:

  • testPath - Path to a specific test file to heal
  • testFiles - Array of test file paths (alternative to testPath)
  • testType - Type: jest, playwright, postman, auto (default: auto)
  • sessionId - Session ID for tracking healing process
  • maxHealingAttempts - Max attempts per test (default: 3)
  • autoFix - Automatically apply fixes (default: true)
  • backupOriginal - Create backup files (default: true)
  • analysisOnly - Only analyze without fixing (default: false)
  • healingStrategies - Specific strategies: schema-update, endpoint-fix, auth-repair, data-correction, assertion-update

Tool 4: api_project_setup

Purpose: Detect project configuration for test generation

Detect project configuration:

{
  "tool": "api_project_setup",
  "parameters": {
    "outputDir": "./tests"
  }
}

What it does:

  • Scans for playwright.config.ts/js, jest.config.ts/js, tsconfig.json
  • Auto-detects framework (Playwright/Jest) and language (TypeScript/JavaScript)
  • Returns configuration or prompts user if ambiguous
  • Must be called BEFORE api_generator for optimal test generation
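The detection described above can be sketched as a pure function over the file names found in the project root (a hypothetical illustration, not the package's real source):

```javascript
// Hypothetical sketch of the detection logic described above -- not the
// package's actual code. Takes file names from the project root and
// infers the test framework and language.
function detectProjectConfig(files) {
  const has = (re) => files.some((f) => re.test(f));
  const hasPlaywright = has(/^playwright\.config\.(ts|js)$/);
  const hasJest = has(/^jest\.config\.(ts|js)$/);
  const hasTypeScript = has(/^tsconfig\.json$/);
  const framework = hasPlaywright ? "playwright" : hasJest ? "jest" : null;
  const language = hasTypeScript ? "typescript" : "javascript";
  // With no framework config present, the real tool prompts the user instead.
  return framework
    ? { autoDetected: true, config: { framework, language } }
    : { needsUserInput: true };
}

console.log(detectProjectConfig(["playwright.config.ts", "tsconfig.json"]));
// { autoDetected: true, config: { framework: "playwright", language: "typescript" } }
```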

Response (when auto-detected):

{
  "success": true,
  "autoDetected": true,
  "config": {
    "framework": "playwright",
    "language": "typescript",
    "hasTypeScript": true,
    "hasPlaywrightConfig": true,
    "configFiles": ["playwright.config.ts", "tsconfig.json"]
  },
  "nextStep": "Call api_generator with outputFormat: 'playwright' and language: 'typescript'"
}

Response (when user input needed):

{
  "success": true,
  "needsUserInput": true,
  "prompts": [
    {
      "name": "framework",
      "question": "Which test framework would you like to use?",
      "choices": ["playwright", "jest", "postman", "all"]
    },
    {
      "name": "language",
      "question": "Which language would you like to use?",
      "choices": ["typescript", "javascript"]
    }
  ]
}

Parameters:

  • outputDir - Directory for tests (default: ./tests). Used to locate project root
  • promptUser - Force user prompts even if config detected (default: false)

Tool 5: api_request

Purpose: Execute individual HTTP requests with validation

Simple GET request:

{
  "tool": "api_request",
  "parameters": {
    "method": "GET",
    "url": "https://api.example.com/users/1",
    "expect": {
      "status": 200,
      "contentType": "application/json"
    }
  }
}

POST with authentication:

{
  "tool": "api_request",
  "parameters": {
    "method": "POST",
    "url": "https://api.example.com/users",
    "headers": {
      "Authorization": "Bearer your-token-here",
      "Content-Type": "application/json"
    },
    "data": {
      "name": "John Doe",
      "email": "[email protected]"
    },
    "expect": {
      "status": 201,
      "body": {
        "name": "John Doe",
        "email": "[email protected]"
      }
    }
  }
}

Request chaining (use response in next request):

{
  "tool": "api_request",
  "parameters": {
    "sessionId": "user-workflow",
    "chain": [
      {
        "name": "create_user",
        "method": "POST",
        "url": "https://api.example.com/users",
        "data": { "name": "Jane Doe" },
        "extract": { "userId": "id" }
      },
      {
        "name": "get_user",
        "method": "GET",
        "url": "https://api.example.com/users/{{ create_user.userId }}",
        "expect": { "status": 200 }
      }
    ]
  }
}

Parameters explained:

  • method - HTTP method (GET, POST, PUT, DELETE, PATCH, etc.)
  • url - Target endpoint
  • headers - Custom headers (authentication, content-type)
  • data - Request body for POST/PUT/PATCH
  • expect - Validation rules (status, headers, body)
  • sessionId - Group related requests
  • chain - Execute multiple requests in sequence
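The `{{ step.variable }}` placeholders in chained requests suggest a simple substitution model; here is a hypothetical sketch of it (not the server's actual templating code), where extracted values are keyed as `<stepName>.<variableName>`:

```javascript
// Hypothetical sketch of placeholder substitution for request chaining --
// not the server's actual implementation. "vars" holds values extracted
// from earlier chain steps, keyed as "<stepName>.<variableName>".
function resolveTemplate(url, vars) {
  return url.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match // leave unknown keys untouched
  );
}

const vars = { "create_user.userId": 42 };
console.log(resolveTemplate("https://api.example.com/users/{{ create_user.userId }}", vars));
// "https://api.example.com/users/42"
```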

Tool 6: api_session_status

Purpose: Check status of API test sessions

Check session progress:

{
  "tool": "api_session_status",
  "parameters": {
    "sessionId": "user-workflow"
  }
}

Response example:

{
  "sessionId": "user-workflow",
  "totalRequests": 5,
  "successfulRequests": 4,
  "failedRequests": 1,
  "status": "completed",
  "startTime": "2025-10-23T10:00:00Z",
  "endTime": "2025-10-23T10:05:00Z",
  "duration": "5 minutes"
}
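Given a status payload shaped like the example above, downstream tooling (a CI gate, a dashboard) can derive a pass rate directly; a small illustrative helper:

```javascript
// Illustrative helper: derive a pass rate from an api_session_status
// response shaped like the example above. Field names follow that example.
function summarizeSession(status) {
  const { totalRequests, successfulRequests, failedRequests } = status;
  const passRate = totalRequests ? (successfulRequests / totalRequests) * 100 : 0;
  return {
    passRate: `${passRate.toFixed(1)}%`,
    healthy: failedRequests === 0,
  };
}

console.log(summarizeSession({
  sessionId: "user-workflow",
  totalRequests: 5,
  successfulRequests: 4,
  failedRequests: 1,
  status: "completed",
}));
// { passRate: "80.0%", healthy: false }
```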

Tool 7: api_session_report

Purpose: Generate comprehensive HTML test reports

Generate report:

{
  "tool": "api_session_report",
  "parameters": {
    "sessionId": "user-workflow",
    "outputPath": "./reports/user-workflow-report.html",
    "includeCharts": true
  }
}

What you get:

  • 📊 Visual charts (success rate, response times)
  • 📋 Request/response details
  • ✅ Validation results (pass/fail with diffs)
  • 🕐 Timing information
  • 📸 Screenshots (if applicable)


⚙️ Configuration

Environment Variables

NODE_ENV=production              # Run in production mode (default)
OUTPUT_DIR=./api-test-reports    # Where to save reports (default)
MCP_FEATURES_ENABLEDEBUGMODE=true # Enable detailed logging

Command-Line Options

npx @democratize-quality/mcp-server [options]

Options:
  --help, -h        Show help information
  --version, -v     Display version number
  --agents          Install AI testing agents for GitHub Copilot
  --debug           Enable debug mode with detailed logs
  --verbose         Show detailed installation output
  --port <number>   Specify server port (if needed)

VS Code MCP Configuration

After running --agents, this file is created at .vscode/mcp.json:

{
  "servers": {
    "democratize-quality": {
      "type": "stdio",
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "cwd": "${workspaceFolder}",
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Output Directory Locations

  • VS Code/Local Projects: ./api-test-reports in your project
  • Claude Desktop: ~/.mcp-browser-control in your home directory
  • Custom Location: Set OUTPUT_DIR environment variable


🔧 Troubleshooting

Agent Installation Issues

Agents not appearing in GitHub Copilot after installation

Solution:

  1. Restart VS Code completely
  2. Verify .vscode/mcp.json exists in your project root
  3. Check that GitHub Copilot extension is installed and active
  4. Reload the workspace: Cmd/Ctrl + Shift + P → "Developer: Reload Window"

MCP server not starting or tools unavailable

Solution:

  1. Test the server: npx @democratize-quality/mcp-server --help
  2. Check .vscode/mcp.json configuration
  3. Try reinstalling skills: npx @democratize-quality/mcp-server --agents
  4. Ensure Node.js 14+ is installed

Existing .vscode/mcp.json configuration at risk of being overwritten

Solution:

  • Automatic backups are created: .vscode/mcp.json.backup.{timestamp}
  • The installer safely merges configurations
  • To restore: Copy backup file back to mcp.json
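The merge behavior can be pictured roughly like this: existing server entries are kept, and new servers are only added where no entry with that name exists. This is a simplified sketch of the idea, not the installer's actual code:

```javascript
// Simplified sketch of a non-destructive mcp.json merge:
// existing server entries win; incoming servers are added only if absent.
function mergeMcpConfig(existing, incoming) {
  return {
    ...incoming,
    ...existing,
    servers: { ...incoming.servers, ...existing.servers },
  };
}

const merged = mergeMcpConfig(
  { servers: { "my-other-server": { command: "node" } } },       // existing file
  { servers: { "democratize-quality": { command: "npx" } } }     // installer's entry
);
// merged.servers now contains both "my-other-server" and "democratize-quality"
```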

Permission errors during installation

Solution:

  1. Check directory permissions for .github/ and .vscode/
  2. Run with elevated permissions if needed: sudo npx @democratize-quality/mcp-server --agents
  3. Ensure write access to project directory

Common Runtime Issues

Can't find the generated reports

Solution:

  • Check the configured OUTPUT_DIR environment variable
  • For Claude Desktop: Look in ~/.mcp-browser-control/
  • For VS Code: Look in ./api-test-reports/ in your project
  • Set custom location: OUTPUT_DIR=/your/custom/path

Claude Desktop fails to connect to the server

Solution:

  1. Test package availability: npx @democratize-quality/mcp-server --help
  2. Check Claude Desktop logs for detailed errors
  3. Verify configuration in claude_desktop_config.json
  4. Try running in terminal first to isolate issues

Validation failures are hard to interpret

Solution:

  • Enhanced validation automatically shows "expected vs actual"
  • Check generated HTML reports for detailed comparisons
  • Enable debug mode: MCP_FEATURES_ENABLEDEBUGMODE=true
  • Review session logs with api_session_status tool

Getting Help

Enable debug mode for detailed logging:

# CLI
npx @democratize-quality/mcp-server --debug

# Environment variable
MCP_FEATURES_ENABLEDEBUGMODE=true node mcpServer.js

Log levels:

  • Production: Essential startup messages and errors only
  • Debug: Detailed request/response logs and validation details

📚 Additional Resources

Quick Links

| Resource | Description |
|----------|-------------|
| GitHub Repository | Source code and issues |
| Discussions | Community Q&A |
| Issue Tracker | Bug reports and feature requests |
| NPM Package | Package information |


🏗️ Architecture

Project Structure

democratize-quality-mcp-server/
├── src/
│   ├── tools/
│   │   ├── api/              # API testing tools
│   │   │   ├── api-planner.js
│   │   │   ├── api-generator.js
│   │   │   ├── api-healer.js
│   │   │   ├── api-project-setup.js
│   │   │   ├── api-request.js
│   │   │   ├── api-session-status.js
│   │   │   └── api-session-report.js
│   │   └── base/             # Tool framework
│   ├── skills/               # Agent Skills definitions
│   │   ├── api-planning/
│   │   │   └── SKILL.md
│   │   ├── test-generation/
│   │   │   └── SKILL.md
│   │   └── test-healing/
│   │       └── SKILL.md
│   ├── config/               # Configuration management
│   └── utils/                # Utility functions
├── docs/                     # Documentation
├── mcpServer.js             # Main MCP server
└── cli.js                   # Command-line interface

Key Features

  • 🔍 Automatic Tool Discovery: Tools are automatically loaded and registered
  • ⚙️ Configuration System: Environment-based config with sensible defaults
  • 🛡️ Error Handling: Comprehensive validation and detailed error reporting
  • 📊 Session Management: Track and manage multi-step API test workflows
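Conceptually, automatic tool discovery amounts to loading each tool module and registering it under its declared name. A simplified in-memory sketch of that registry idea (the real loader reads modules from src/tools/ on disk):

```javascript
// Simplified registry: each tool class is registered under its
// static definition.name, mirroring the auto-discovery idea.
class ToolRegistry {
  constructor() {
    this.tools = new Map();
  }
  register(ToolClass) {
    this.tools.set(ToolClass.definition.name, ToolClass);
  }
  list() {
    return [...this.tools.keys()];
  }
}

class ExampleTool {
  static definition = { name: "example_tool", description: "demo tool" };
}

const registry = new ToolRegistry();
registry.register(ExampleTool);
// registry.list() -> ["example_tool"]
```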

🔒 Security Considerations

Security Posture

  • ✅ API-Only Mode: Enabled by default for secure deployments
  • ✅ Standard HTTP Libraries: All requests use trusted Node.js libraries
  • ✅ No File System Access: API tools only write to configured output directory
  • ✅ No Browser Automation: No browser processes in API-only mode

Production Deployment Best Practices

{
  "mcpServers": {
    "democratize-quality": {
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "~/api-test-reports"
      }
    }
  }
}

Security Recommendations

  1. 📁 Secure Output Directory: Set appropriate permissions on report directories
  2. 🔄 Regular Updates: Keep the package updated for security patches
  3. 🌍 Environment Separation: Use different configs for dev vs production
  4. 📊 Monitoring: Enable debug mode during initial deployment to monitor usage
  5. 🔑 API Keys: Never commit API keys or tokens to version control
  6. 🌐 Network Security: Use HTTPS endpoints for production API testing

👨‍💻 Development & Contributing

Adding New Tools

  1. Create tool file in src/tools/api/
  2. Extend ToolBase class
  3. Define tool schema and implementation
  4. Tools are automatically discovered!

Example tool structure:

const ToolBase = require('../base/ToolBase');

class MyApiTool extends ToolBase {
  static definition = {
    name: "my_api_tool",
    description: "Performs custom API testing operations",
    input_schema: {
      type: "object",
      properties: {
        endpoint: { type: "string", description: "API endpoint URL" }
      },
      required: ["endpoint"]
    }
  };

  async execute(parameters) {
    // Implementation
    return {
      success: true,
      data: { /* results */ }
    };
  }
}

module.exports = MyApiTool;
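Before wiring a new tool into the server, you can smoke-test it in isolation. This sketch stubs ToolBase (the real base class lives under src/tools/base/) so the snippet runs standalone:

```javascript
// Standalone smoke test for a custom tool; ToolBase is stubbed here
// so this runs without the package installed.
class ToolBase {}

class MyApiTool extends ToolBase {
  static definition = {
    name: "my_api_tool",
    description: "Performs custom API testing operations",
    input_schema: {
      type: "object",
      properties: { endpoint: { type: "string" } },
      required: ["endpoint"],
    },
  };

  async execute(parameters) {
    if (!parameters.endpoint) {
      return { success: false, error: "endpoint is required" };
    }
    return { success: true, data: { endpoint: parameters.endpoint } };
  }
}

(async () => {
  const result = await new MyApiTool().execute({ endpoint: "https://api.example.com" });
  console.log(result.success); // true
})();
```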

Running Tests

# Run test suite
npm test

# Run MCP inspector for development
npm run inspector

# Start server in debug mode
npm run dev

Contributing Guidelines

  1. 🍴 Fork the repository
  2. 🌿 Create a feature branch (git checkout -b feature/amazing-feature)
  3. ✅ Add tests for your changes
  4. 📝 Update documentation
  5. 🔍 Ensure tests pass (npm test)
  6. 📤 Submit a pull request

What to include in PRs:

  • Clear description of changes
  • Test coverage for new features
  • Updated documentation
  • Examples of usage (if applicable)

License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

Copyright (C) 2025 Democratize Quality

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

If you modify this software and run it as a network service, you must make your modified source code available to users of that service under the same license.

See the LICENSE file for the full license text, or visit
https://www.gnu.org/licenses/agpl-3.0.html


🙏 Acknowledgments

Built with the Model Context Protocol framework.

Special thanks to the MCP community and all contributors!


Ready to democratize quality through intelligent API testing! 🎯

Made with ❤️ by Raj Uppadhyay

⭐ Star on GitHub · 📦 View on NPM · 🐛 Report Bug · 💡 Request Feature