
@democratize-quality/mcp-server

v1.1.6

Published

MCP Server for democratizing quality through browser automation and comprehensive API testing capabilities


🎯 Democratize Quality MCP Server

Intelligent API Testing Made Simple - A comprehensive Model Context Protocol (MCP) server that brings professional-grade API testing capabilities to everyone, from QA engineers to developers.

Transform your API testing workflow with AI-powered agents that plan, generate, and heal your tests automatically.


📦 Quick Installation

Option 1: Agent Mode Installation (Recommended)

The fastest way to get started - Install with intelligent testing agents for GitHub Copilot:

npx @democratize-quality/mcp-server@latest --agents

What this does:

  • ✅ Installs the MCP server
  • ✅ Sets up 3 AI-powered testing agents
  • ✅ Configures VS Code integration automatically
  • ✅ Creates project folders (.github/chatmodes/, .vscode/)

🎥 Video Tutorial

Watch: API Test Agents Walkthrough

▶️ Watch the complete walkthrough - Learn how to use the API Test Agents to build comprehensive test coverage for REST and GraphQL APIs.


Prerequisites:

  • Node.js 14+ installed
  • VS Code with GitHub Copilot extension (for agents)
  • Active internet connection

Option 2: Manual Installation

For Claude Desktop or other MCP clients:

Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "democratize-quality": {
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Global installation option:

npm install -g @democratize-quality/mcp-server

# Then use anywhere
democratize-quality-mcp --help

🛠️ Available Tools

This MCP server provides 7 powerful tools for comprehensive API testing:

🤖 AI-Powered Testing Tools

| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_planner | Analyzes API schemas (OpenAPI/Swagger, GraphQL) and creates comprehensive test plans with realistic sample data | Starting a new API testing project, documenting API behavior, validating API endpoints |
| api_generator | Generates executable tests (Jest, Playwright, Postman) from test plans using AI-powered code generation | Converting test plans to runnable code in TypeScript or JavaScript |
| api_healer | Debugs and automatically fixes failing tests by analyzing errors and applying healing strategies | Tests break after API changes, schema updates, or authentication issues |
| api_project_setup | Detects project configuration (framework and language) for smart test generation | Before using api_generator - auto-detects Playwright/Jest and TypeScript/JavaScript |

🔧 Core API Testing Tools

| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_request | Executes HTTP requests with validation and request chaining | Making individual API calls, testing specific endpoints, chaining multiple requests |
| api_session_status | Queries test session status and logs | Checking progress of test sequences, viewing request history |
| api_session_report | Generates comprehensive HTML test reports | Creating shareable test documentation with detailed analytics |

Key Capabilities:

  • Smart Validation: Automatic "expected vs actual" comparison with detailed failure messages
  • 🔗 Request Chaining: Extract data from one response and use in subsequent requests
  • 📊 Session Tracking: Monitor multi-step API workflows across multiple requests
  • 📈 Visual Reports: Interactive HTML reports with timing analysis and validation results
  • 🎯 All HTTP Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD
  • 🔒 Authentication: Support for Bearer tokens, API keys, custom headers
  • 🎨 Realistic Sample Data: Auto-generates context-aware test data (names, emails, dates, etc.)
  • 🔍 Optional Validation: Validate endpoints by making actual API calls with response time metrics
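The "expected vs actual" comparison described above can be illustrated with a minimal sketch. This is an assumption for illustration only, not the package's actual implementation: it checks a declared status code and body fields against a real response and collects human-readable failure messages.

```javascript
// Illustrative sketch (not the package's internal code): compare an "expect"
// block against an actual response, field by field, and report mismatches.
function validateResponse(expect, actual) {
  const failures = [];
  if (expect.status !== undefined && expect.status !== actual.status) {
    failures.push(`status: expected ${expect.status}, actual ${actual.status}`);
  }
  for (const [key, expected] of Object.entries(expect.body || {})) {
    const got = actual.body ? actual.body[key] : undefined;
    if (got !== expected) {
      failures.push(
        `body.${key}: expected ${JSON.stringify(expected)}, actual ${JSON.stringify(got)}`
      );
    }
  }
  return { passed: failures.length === 0, failures };
}

// Example: a mismatched status and body field produce two failure messages.
const result = validateResponse(
  { status: 200, body: { name: "John Doe" } },
  { status: 404, body: { name: "Jane" } }
);
console.log(result.failures.join("; "));
```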

🌐 Chat Modes (AI Agents)

When you install with --agents, you get 3 intelligent testing assistants:

1. 🌐 @api-planner - The Strategist

What it does: Analyzes your API and creates comprehensive test strategies

Best for:

  • Starting new testing projects
  • Documenting API behavior
  • Creating test scenarios for complex workflows
  • Security and edge case planning

Example: Analyze a Real API

1. Select the "🌐 api-planner" chat mode in the GitHub Copilot Chat window and enter the following prompt:

   Analyze this API and create a comprehensive test plan:
   https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json

2. The agent responds: "I'll call the API planner once to analyze the OpenAPI schema at the provided URL and generate a comprehensive test plan with realistic sample data plus live validation of 5 endpoints; expected outcome: a saved test plan in ./api-test-reports/fakerest-test-plan.md and a validation summary. I'll run the planner now."

3. When prompted, allow the tool call "Run api_planner" from democratize-quality (MCP Server).

What you get:

  • Comprehensive markdown test plan with 40+ test scenarios
  • Realistic sample requests for each endpoint
  • Expected response structures
  • Error scenarios and edge cases
  • Ready for test generation

2. 🌐 @api-generator - The Builder

What it does: Converts test plans into executable code

Best for:

  • Generating Jest test suites
  • Creating Playwright automation scripts
  • Building Postman collections

Example: Generate Tests from Plan

1. Once the test plan has been generated in API Planner mode, switch the chat mode to "🌐 api-generator", open the test plan markdown file created in the previous step (so it is in context), and enter the following prompt:

   Create Playwright tests in TypeScript for the Books API section

2. The agent first detects the project's test framework and language (a required first step) so it can generate Playwright TypeScript tests in the correct layout. It asks you to allow the tool "Run api_project_setup" from democratize-quality (MCP Server) to capture the project info: if your project is empty, it prompts you to choose a framework and language; if it is already a Playwright TypeScript project, it generates Playwright tests accordingly.

3. After you answer, it invokes "Run api_generator" from democratize-quality (MCP Server), which actually generates the tests; click "Allow".

Created files:
✅ Test suite with Books CRUD operations
✅ On-screen instructions for running the tests

Tests include:
- ✅ GET /api/v1/Books - List all books
- ✅ GET /api/v1/Books/{id} - Get book by ID
- ✅ POST /api/v1/Books - Create new book
- ✅ PUT /api/v1/Books/{id} - Update book
- ✅ DELETE /api/v1/Books/{id} - Delete book
- ✅ Error handling for invalid IDs
- ✅ Validation for required fields

Example: Generate Postman Collection

You: @🌐 api-generator create a Postman collection from the entire test plan

Agent: Creating Postman collection for Fake REST API...

Generated:
✅ postman/FakeRestAPI.postman_collection.json
✅ postman/environment.json (with base URL variables)

Sample Collection includes:
- 📁 Books (5 requests)
- 📁 Users (4 requests)
- 📁 Authors (4 requests)
- 📁 Activities (3 requests)
- 📁 CoverPhotos (4 requests)

Import into Postman and start testing!

3. 🌐 @api-healer - The Fixer

What it does: Automatically diagnoses and repairs failing tests

Best for:

  • Fixing broken tests after API changes
  • Updating endpoint URLs
  • Resolving authentication issues
  • Schema mismatch corrections

Example: Fix Failing Tests

You: @🌐 api-healer the Books tests are failing with 404 errors. 
Please analyze and fix them.

Agent: Analyzing test failures in tests/books.spec.ts...

Issues found:
❌ Endpoint changed: /api/Books → /api/v1/Books
❌ Response schema updated: added "pageCount" field
❌ ID format changed: number → string

Applying fixes:
✅ Updated all endpoint URLs to include /v1/ prefix
✅ Updated response assertions to include pageCount
✅ Fixed ID type in test data (42 → "42")
✅ Backed up original file to tests/books.spec.ts.backup

Re-running tests... ✅ All 15 tests now passing!

Example: Fix Schema Mismatches

You: @🌐 api-healer fix assertion errors in tests/users.spec.ts

Agent: Analyzing assertion failures...

Found 3 schema mismatches:
❌ Expected userName, API returns username (camelCase issue)
❌ Password field removed from response (security update)
❌ New field added: profileImageUrl

Healing actions:
✅ Updated assertions: userName → username
✅ Removed password field validation
✅ Added profileImageUrl to expected response
✅ Updated TypeScript interfaces

Tests healed successfully! ✅ 12/12 passing

💡 Usage Examples - E2E Testing Scenarios

Scenario 1: Testing the Fake REST API (Complete Workflow)

Real-world example using: https://fakerestapi.azurewebsites.net

Step 1: Plan your tests

@🌐 api-planner analyze the OpenAPI spec at 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json 
and create a comprehensive test plan focusing on Books and Authors endpoints

What happens:

  • ✅ Agent analyzes the Fake REST API schema
  • ✅ Identifies 5 resource types (Books, Authors, Users, Activities, CoverPhotos)
  • ✅ Creates 40+ test scenarios covering:
    • Happy paths (GET all books, GET book by ID, CREATE book)
    • Error cases (404 for invalid IDs, 400 for bad data)
    • Edge cases (empty lists, ID boundaries)
    • Data validation (response schema checks)
  • ✅ Generates realistic sample data:
    {
      "id": 1,
      "title": "The Great Gatsby",
      "description": "A classic American novel",
      "pageCount": 180,
      "excerpt": "In my younger and more vulnerable years...",
      "publishDate": "1925-04-10T00:00:00Z"
    }

Step 2: Generate executable tests

@🌐 api-generator create Playwright tests in TypeScript from the test plan, 
focusing on the Books API section

What you get:

  • tests/books.spec.ts - Complete Books CRUD test suite
    import { test, expect } from '@playwright/test';
      
    test.describe('Books API', () => {
      test('GET /api/v1/Books - should return all books', async ({ request }) => {
        const response = await request.get('https://fakerestapi.azurewebsites.net/api/v1/Books');
        expect(response.ok()).toBeTruthy();
        expect(response.status()).toBe(200);
        const books = await response.json();
        expect(Array.isArray(books)).toBeTruthy();
      });
        
      test('POST /api/v1/Books - should create a new book', async ({ request }) => {
        const newBook = {
          id: 201,
          title: "Test Book",
          pageCount: 350
        };
        const response = await request.post('https://fakerestapi.azurewebsites.net/api/v1/Books', {
          data: newBook
        });
        expect(response.status()).toBe(200);
      });
    });

Step 3: Run and heal tests

# Run tests
npx playwright test tests/books.spec.ts

# If any tests fail due to API changes:
@🌐 api-healer fix the failing tests in tests/books.spec.ts

What healer fixes:

  • ✅ Endpoint URL updates (if API version changes)
  • ✅ Response schema corrections (new/removed fields)
  • ✅ Data type adjustments (string vs number IDs)
  • ✅ Status code updates

Scenario 2: Quick API Documentation with Postman

Goal: Generate a ready-to-use Postman collection for the Fake REST API

Step 1:
@🌐 api-planner create test plan from 
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
with validation enabled to test actual endpoints

Step 2:
@🌐 api-generator create Postman collection from the test plan 
with all endpoints and example requests

Step 3: 
Import the generated collection into Postman and start testing!

What you get:

  • 📦 postman/FakeRestAPI.postman_collection.json - Complete collection
  • 🌍 postman/environment.json - Environment variables
  • ✅ 20+ pre-configured requests across all resource types
  • 📝 Example request bodies with realistic data
  • ✔️ Response validation tests included

Result: Professional Postman collection ready to share with your team


🔧 Calling Individual Tools

When to Use Individual Tools

Use agents (@api-planner, @api-generator, @api-healer) when:

  • You want AI assistance and recommendations
  • You're working on complex, multi-step workflows
  • You need explanations and best practices

Use individual tools when:

  • You need precise control over parameters
  • You're automating tests in CI/CD
  • You're integrating with other tools
  • You want to script repetitive tasks

Tool 1: api_planner

Purpose: Analyze API schemas and generate comprehensive test plans with realistic sample data

Basic usage:

// In Claude Desktop or MCP client
{
  "tool": "api_planner",
  "parameters": {
    "schemaUrl": "https://api.example.com/swagger.json",
    "schemaType": "openapi",
    "outputPath": "./test-plan.md"
  }
}

With endpoint validation:

{
  "tool": "api_planner",
  "parameters": {
    "schemaUrl": "https://petstore3.swagger.io/api/v3/openapi.json",
    "schemaType": "openapi",
    "outputPath": "./api-test-plan.md",
    "includeAuth": true,
    "includeSecurity": true,
    "includeErrorHandling": true,
    "testCategories": ["functional", "security", "edge-cases"],
    "validateEndpoints": true,
    "validationSampleSize": 3,
    "validationTimeout": 5000
  }
}

Parameters explained:

  • schemaUrl - URL to fetch API schema (OpenAPI/Swagger, GraphQL introspection endpoint)
  • schemaContent - Direct schema content as JSON/YAML string (alternative to schemaUrl)
  • schemaType - Type of schema: openapi, swagger, graphql, auto (default: auto)
  • apiBaseUrl - Base URL of the API (overrides schema baseUrl if provided)
  • includeAuth - Include authentication testing scenarios (default: true)
  • includeSecurity - Include security testing scenarios (default: true)
  • includeErrorHandling - Include error handling scenarios (default: true)
  • outputPath - File path to save test plan (default: ./api-test-plan.md)
  • testCategories - Types of tests: functional, security, performance, integration, edge-cases
  • validateEndpoints - Make actual API calls to validate endpoints (default: false)
  • validationSampleSize - Number of endpoints to validate, use -1 for all (default: 3)
  • validationTimeout - Timeout per validation request in ms (default: 5000)
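The validationSampleSize semantics above (a fixed sample, or -1 for all endpoints) can be sketched in a few lines. A minimal sketch under stated assumptions; the real planner's selection logic may differ:

```javascript
// Sketch of validationSampleSize: select which endpoints to probe,
// where -1 means "validate every endpoint".
function sampleEndpoints(endpoints, sampleSize) {
  if (sampleSize === -1 || sampleSize >= endpoints.length) return endpoints.slice();
  return endpoints.slice(0, sampleSize);
}

const endpoints = ["/Books", "/Authors", "/Users", "/Activities", "/CoverPhotos"];
const toValidate = sampleEndpoints(endpoints, 3); // default sample size
```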

Tool 2: api_generator

Purpose: Generate executable tests (Playwright, Jest, Postman) from test plans using AI

Generate Playwright tests:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "playwright",
    "outputDir": "./tests",
    "includeSetup": true,
    "language": "typescript"
  }
}

Generate Jest tests:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "jest",
    "outputDir": "./tests",
    "testFramework": "jest",
    "baseUrl": "https://api.example.com"
  }
}

Generate Postman collection:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "postman",
    "outputDir": "./postman",
    "includeAuth": true
  }
}

Generate all formats:

{
  "tool": "api_generator",
  "parameters": {
    "testPlanPath": "./test-plan.md",
    "outputFormat": "all",
    "outputDir": "./tests",
    "language": "typescript"
  }
}

Parameters explained:

  • testPlanPath - Path to test plan markdown file
  • testPlanContent - Direct test plan content as markdown (alternative to testPlanPath)
  • outputFormat - Format: playwright, postman, jest, all (default: all)
  • outputDir - Directory to save generated tests (default: ./tests)
  • sessionId - Session ID for tracking generated tests
  • includeAuth - Include authentication setup (default: true)
  • includeSetup - Include test setup/teardown code (default: true)
  • testFramework - Framework: jest, mocha, playwright-test
  • baseUrl - Base URL for API (optional)
  • language - Language: javascript, typescript (default: typescript)

Tool 3: api_healer

Purpose: Debug and automatically fix failing API tests

Fix specific test file:

{
  "tool": "api_healer",
  "parameters": {
    "testPath": "./tests/auth.test.js",
    "testType": "auto",
    "autoFix": true,
    "backupOriginal": true
  }
}

Fix multiple test files:

{
  "tool": "api_healer",
  "parameters": {
    "testFiles": ["./tests/auth.test.js", "./tests/users.test.js"],
    "testType": "playwright",
    "healingStrategies": ["schema-update", "endpoint-fix", "auth-repair"]
  }
}

Analysis only (no fixes):

{
  "tool": "api_healer",
  "parameters": {
    "testPath": "./tests/api.test.js",
    "analysisOnly": true
  }
}

Parameters explained:

  • testPath - Path to a specific test file to heal
  • testFiles - Array of test file paths (alternative to testPath)
  • testType - Type: jest, playwright, postman, auto (default: auto)
  • sessionId - Session ID for tracking healing process
  • maxHealingAttempts - Max attempts per test (default: 3)
  • autoFix - Automatically apply fixes (default: true)
  • backupOriginal - Create backup files (default: true)
  • analysisOnly - Only analyze without fixing (default: false)
  • healingStrategies - Specific strategies: schema-update, endpoint-fix, auth-repair, data-correction, assertion-update

Tool 4: api_project_setup

Purpose: Detect project configuration for test generation

Detect project configuration:

{
  "tool": "api_project_setup",
  "parameters": {
    "outputDir": "./tests"
  }
}

What it does:

  • Scans for playwright.config.ts/js, jest.config.ts/js, tsconfig.json
  • Auto-detects framework (Playwright/Jest) and language (TypeScript/JavaScript)
  • Returns configuration or prompts user if ambiguous
  • Must be called BEFORE api_generator for optimal test generation

Response (when auto-detected):

{
  "success": true,
  "autoDetected": true,
  "config": {
    "framework": "playwright",
    "language": "typescript",
    "hasTypeScript": true,
    "hasPlaywrightConfig": true,
    "configFiles": ["playwright.config.ts", "tsconfig.json"]
  },
  "nextStep": "Call api_generator with outputFormat: 'playwright' and language: 'typescript'"
}

Response (when user input needed):

{
  "success": true,
  "needsUserInput": true,
  "prompts": [
    {
      "name": "framework",
      "question": "Which test framework would you like to use?",
      "choices": ["playwright", "jest", "postman", "all"]
    },
    {
      "name": "language",
      "question": "Which language would you like to use?",
      "choices": ["typescript", "javascript"]
    }
  ]
}

Parameters:

  • outputDir - Directory for tests (default: ./tests). Used to locate project root
  • promptUser - Force user prompts even if config detected (default: false)
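The detection behavior described above (scan for config files, infer framework and language, fall back to prompting) can be sketched as follows. This is an assumption-labeled illustration operating on a file list, not the tool's actual implementation:

```javascript
// Minimal sketch of api_project_setup's detection logic: infer framework and
// language from which config files are present in the project root.
function detectProjectConfig(files) {
  const has = (name) => files.includes(name);
  const framework =
    has("playwright.config.ts") || has("playwright.config.js") ? "playwright"
    : has("jest.config.ts") || has("jest.config.js") ? "jest"
    : null;
  const language = has("tsconfig.json") ? "typescript" : "javascript";
  return framework
    ? { success: true, autoDetected: true, config: { framework, language } }
    : { success: true, needsUserInput: true };
}

const detected = detectProjectConfig(["playwright.config.ts", "tsconfig.json"]);
```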

Tool 5: api_request

Purpose: Execute individual HTTP requests with validation

Simple GET request:

{
  "tool": "api_request",
  "parameters": {
    "method": "GET",
    "url": "https://api.example.com/users/1",
    "expect": {
      "status": 200,
      "contentType": "application/json"
    }
  }
}

POST with authentication:

{
  "tool": "api_request",
  "parameters": {
    "method": "POST",
    "url": "https://api.example.com/users",
    "headers": {
      "Authorization": "Bearer your-token-here",
      "Content-Type": "application/json"
    },
    "data": {
      "name": "John Doe",
      "email": "[email protected]"
    },
    "expect": {
      "status": 201,
      "body": {
        "name": "John Doe",
        "email": "[email protected]"
      }
    }
  }
}

Request chaining (use response in next request):

{
  "tool": "api_request",
  "parameters": {
    "sessionId": "user-workflow",
    "chain": [
      {
        "name": "create_user",
        "method": "POST",
        "url": "https://api.example.com/users",
        "data": { "name": "Jane Doe" },
        "extract": { "userId": "id" }
      },
      {
        "name": "get_user",
        "method": "GET",
        "url": "https://api.example.com/users/{{ create_user.userId }}",
        "expect": { "status": 200 }
      }
    ]
  }
}

Parameters explained:

  • method - HTTP method (GET, POST, PUT, DELETE, PATCH, etc.)
  • url - Target endpoint
  • headers - Custom headers (authentication, content-type)
  • data - Request body for POST/PUT/PATCH
  • expect - Validation rules (status, headers, body)
  • sessionId - Group related requests
  • chain - Execute multiple requests in sequence
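The {{ create_user.userId }} templating in the chaining example can be illustrated with a small substitution sketch. This is an assumed mechanism for illustration; the server's actual template resolver may behave differently:

```javascript
// Illustrative sketch of chain templating: substitute values extracted from
// earlier responses ({{ step.variable }}) into later request URLs.
function resolveTemplate(url, extracted) {
  return url.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, path) => {
    const [step, key] = path.split(".");
    const value = extracted[step] ? extracted[step][key] : undefined;
    // Leave unresolved placeholders untouched rather than guessing.
    return value !== undefined ? String(value) : match;
  });
}

const url = resolveTemplate(
  "https://api.example.com/users/{{ create_user.userId }}",
  { create_user: { userId: 42 } }
);
// url === "https://api.example.com/users/42"
```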

Tool 6: api_session_status

Purpose: Check status of API test sessions

Check session progress:

{
  "tool": "api_session_status",
  "parameters": {
    "sessionId": "user-workflow"
  }
}

Response example:

{
  "sessionId": "user-workflow",
  "totalRequests": 5,
  "successfulRequests": 4,
  "failedRequests": 1,
  "status": "completed",
  "startTime": "2025-10-23T10:00:00Z",
  "endTime": "2025-10-23T10:05:00Z",
  "duration": "5 minutes"
}

Tool 7: api_session_report

Purpose: Generate comprehensive HTML test reports

Generate report:

{
  "tool": "api_session_report",
  "parameters": {
    "sessionId": "user-workflow",
    "outputPath": "./reports/user-workflow-report.html",
    "includeCharts": true
  }
}

What you get:

  • 📊 Visual charts (success rate, response times)
  • 📋 Request/response details
  • ✅ Validation results (pass/fail with diffs)
  • 🕐 Timing information
  • 📸 Screenshots (if applicable)


⚙️ Configuration

Environment Variables

NODE_ENV=production              # Run in production mode (default)
OUTPUT_DIR=./api-test-reports    # Where to save reports (default)
MCP_FEATURES_ENABLEDEBUGMODE=true # Enable detailed logging

Command-Line Options

npx @democratize-quality/mcp-server [options]

Options:
  --help, -h        Show help information
  --version, -v     Display version number
  --agents          Install AI testing agents for GitHub Copilot
  --debug           Enable debug mode with detailed logs
  --verbose         Show detailed installation output
  --port <number>   Specify server port (if needed)

VS Code MCP Configuration

After running --agents, this file is created at .vscode/mcp.json:

{
  "servers": {
    "democratize-quality": {
      "type": "stdio",
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "cwd": "${workspaceFolder}",
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "./api-test-reports"
      }
    }
  }
}

Output Directory Locations

  • VS Code/Local Projects: ./api-test-reports in your project
  • Claude Desktop: ~/.mcp-browser-control in your home directory
  • Custom Location: Set OUTPUT_DIR environment variable


🔧 Troubleshooting

Agent Installation Issues

Agents don't appear in GitHub Copilot Chat:

  1. Restart VS Code completely
  2. Verify .vscode/mcp.json exists in your project root
  3. Check that the GitHub Copilot extension is installed and active
  4. Reload the workspace: Cmd/Ctrl + Shift + P → "Developer: Reload Window"

The MCP server doesn't respond:

  1. Test the server: npx @democratize-quality/mcp-server --help
  2. Check the .vscode/mcp.json configuration
  3. Try manual installation of the chatmode files
  4. Ensure Node.js 14+ is installed

An existing MCP configuration was changed by the installer:

  • Automatic backups are created: .vscode/mcp.json.backup.{timestamp}
  • The installer safely merges configurations
  • To restore: copy the backup file back to mcp.json

Installation fails with permission errors:

  1. Check directory permissions for .github/ and .vscode/
  2. Run with elevated permissions if needed: sudo npx @democratize-quality/mcp-server --agents
  3. Ensure write access to the project directory

Common Runtime Issues

Generated reports can't be found:

  • Check the configured OUTPUT_DIR environment variable
  • For Claude Desktop: look in ~/.mcp-browser-control/
  • For VS Code: look in ./api-test-reports/ in your project
  • Set a custom location: OUTPUT_DIR=/your/custom/path

The server fails to start in Claude Desktop:

  1. Test package availability: npx @democratize-quality/mcp-server --help
  2. Check the Claude Desktop logs for detailed errors
  3. Verify the configuration in claude_desktop_config.json
  4. Try running in a terminal first to isolate issues

Validation failures are hard to diagnose:

  • Enhanced validation automatically shows "expected vs actual"
  • Check the generated HTML reports for detailed comparisons
  • Enable debug mode: MCP_FEATURES_ENABLEDEBUGMODE=true
  • Review session logs with the api_session_status tool

Getting Help

Enable debug mode for detailed logging:

# CLI
npx @democratize-quality/mcp-server --debug

# Environment variable
MCP_FEATURES_ENABLEDEBUGMODE=true node mcpServer.js

Log levels:

  • Production: Essential startup messages and errors only
  • Debug: Detailed request/response logs and validation details

📚 Additional Resources

Quick Links

| Resource | Description |
|----------|-------------|
| GitHub Repository | Source code and issues |
| Discussions | Community Q&A |
| Issue Tracker | Bug reports and feature requests |
| NPM Package | Package information |


🏗️ Architecture

Project Structure

democratize-quality-mcp-server/
├── src/
│   ├── tools/
│   │   ├── api/              # API testing tools
│   │   │   ├── api-planner.js
│   │   │   ├── api-generator.js
│   │   │   ├── api-healer.js
│   │   │   ├── api-project-setup.js
│   │   │   ├── api-request.js
│   │   │   ├── api-session-status.js
│   │   │   └── api-session-report.js
│   │   └── base/             # Tool framework
│   ├── chatmodes/            # AI agent definitions
│   │   ├── 🌐 api-planner.chatmode.md
│   │   ├── 🌐 api-generator.chatmode.md
│   │   └── 🌐 api-healer.chatmode.md
│   ├── config/               # Configuration management
│   └── utils/                # Utility functions
├── docs/                     # Documentation
├── mcpServer.js             # Main MCP server
└── cli.js                   # Command-line interface

Key Features

  • 🔍 Automatic Tool Discovery: Tools are automatically loaded and registered
  • ⚙️ Configuration System: Environment-based config with sensible defaults
  • 🛡️ Error Handling: Comprehensive validation and detailed error reporting
  • 📊 Session Management: Track and manage multi-step API test workflows

🔒 Security Considerations

Security Posture

  • ✅ API-Only Mode: Enabled by default for secure deployments
  • ✅ Standard HTTP Libraries: All requests use trusted Node.js libraries
  • ✅ No File System Access: API tools only write to configured output directory
  • ✅ No Browser Automation: No browser processes in API-only mode

Production Deployment Best Practices

{
  "mcpServers": {
    "democratize-quality": {
      "command": "npx",
      "args": ["@democratize-quality/mcp-server"],
      "env": {
        "NODE_ENV": "production",
        "OUTPUT_DIR": "~/api-test-reports"
      }
    }
  }
}

Security Recommendations

  1. 📁 Secure Output Directory: Set appropriate permissions on report directories
  2. 🔄 Regular Updates: Keep the package updated for security patches
  3. 🌍 Environment Separation: Use different configs for dev vs production
  4. 📊 Monitoring: Enable debug mode during initial deployment to monitor usage
  5. 🔑 API Keys: Never commit API keys or tokens to version control
  6. 🌐 Network Security: Use HTTPS endpoints for production API testing

👨‍💻 Development & Contributing

Adding New Tools

  1. Create tool file in src/tools/api/
  2. Extend ToolBase class
  3. Define tool schema and implementation
  4. Tools are automatically discovered!

Example tool structure:

const ToolBase = require('../base/ToolBase');

class MyApiTool extends ToolBase {
  static definition = {
    name: "my_api_tool",
    description: "Performs custom API testing operations",
    input_schema: {
      type: "object",
      properties: {
        endpoint: { type: "string", description: "API endpoint URL" }
      },
      required: ["endpoint"]
    }
  };

  async execute(parameters) {
    // Implementation
    return {
      success: true,
      data: { /* results */ }
    };
  }
}

module.exports = MyApiTool;

Running Tests

# Run test suite
npm test

# Run MCP inspector for development
npm run inspector

# Start server in debug mode
npm run dev

Contributing Guidelines

  1. 🍴 Fork the repository
  2. 🌿 Create a feature branch (git checkout -b feature/amazing-feature)
  3. ✅ Add tests for your changes
  4. 📝 Update documentation
  5. 🔍 Ensure tests pass (npm test)
  6. 📤 Submit a pull request

What to include in PRs:

  • Clear description of changes
  • Test coverage for new features
  • Updated documentation
  • Examples of usage (if applicable)

📄 License

ISC License - See LICENSE file for details



TL;DR: Free to use, modify, and distribute. No warranty. Use at your own risk.


🙏 Acknowledgments

Built with the Model Context Protocol framework.

Special thanks to the MCP community and all contributors!


Ready to democratize quality through intelligent API testing! 🎯

Made with ❤️ by Raj Uppadhyay

⭐ Star on GitHub · 📦 View on NPM · 🐛 Report Bug · 💡 Request Feature