@democratize-quality/mcp-server
v1.1.6
MCP Server for democratizing quality through browser automation and comprehensive API testing capabilities
🎯 Democratize Quality MCP Server
Intelligent API Testing Made Simple - A comprehensive Model Context Protocol (MCP) server that brings professional-grade API testing capabilities to everyone, from QA engineers to developers.
Transform your API testing workflow with AI-powered agents that plan, generate, and heal your tests automatically.
📦 Quick Installation
Option 1: Agent Mode Installation (Recommended)
The fastest way to get started - Install with intelligent testing agents for GitHub Copilot:
npx @democratize-quality/mcp-server@latest --agents
What this does:
- ✅ Installs the MCP server
- ✅ Sets up 3 AI-powered testing agents
- ✅ Configures VS Code integration automatically
- ✅ Creates project folders (.github/chatmodes/, .vscode/)
🎥 Video Tutorial
▶️ Watch the complete walkthrough - Learn how to use the API Test Agents to build comprehensive test coverage for REST and GraphQL APIs.
Prerequisites:
- Node.js 14+ installed
- VS Code with GitHub Copilot extension (for agents)
- Active internet connection
Option 2: Manual Installation
For Claude Desktop or other MCP clients:
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
"mcpServers": {
"democratize-quality": {
"command": "npx",
"args": ["@democratize-quality/mcp-server"],
"env": {
"NODE_ENV": "production",
"OUTPUT_DIR": "./api-test-reports"
}
}
}
}
Global installation option:
npm install -g @democratize-quality/mcp-server
# Then use anywhere
democratize-quality-mcp --help
🛠️ Available Tools
This MCP server provides 7 powerful tools for comprehensive API testing:
🤖 AI-Powered Testing Tools
| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_planner | Analyzes API schemas (OpenAPI/Swagger, GraphQL) and creates comprehensive test plans with realistic sample data | Starting a new API testing project, documenting API behavior, validating API endpoints |
| api_generator | Generates executable tests (Jest, Playwright, Postman) from test plans using AI-powered code generation | Converting test plans to runnable code in TypeScript or JavaScript |
| api_healer | Debugs and automatically fixes failing tests by analyzing errors and applying healing strategies | Tests break after API changes, schema updates, or authentication issues |
| api_project_setup | Detects project configuration (framework and language) for smart test generation | Before using api_generator - auto-detects Playwright/Jest and TypeScript/JavaScript |
🔧 Core API Testing Tools
| Tool | Purpose | When to Use |
|------|---------|-------------|
| api_request | Executes HTTP requests with validation and request chaining | Making individual API calls, testing specific endpoints, chaining multiple requests |
| api_session_status | Queries test session status and logs | Checking progress of test sequences, viewing request history |
| api_session_report | Generates comprehensive HTML test reports | Creating shareable test documentation with detailed analytics |
Key Capabilities:
- ✅ Smart Validation: Automatic "expected vs actual" comparison with detailed failure messages
- 🔗 Request Chaining: Extract data from one response and use in subsequent requests
- 📊 Session Tracking: Monitor multi-step API workflows across multiple requests
- 📈 Visual Reports: Interactive HTML reports with timing analysis and validation results
- 🎯 All HTTP Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD
- 🔒 Authentication: Support for Bearer tokens, API keys, custom headers
- 🎨 Realistic Sample Data: Auto-generates context-aware test data (names, emails, dates, etc.)
- 🔍 Optional Validation: Validate endpoints by making actual API calls with response time metrics
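To make the "expected vs actual" idea concrete, here is a minimal sketch of how a response validator might compare an expect block against a real response. This is a hypothetical illustration, not the server's internal code; the function name validateResponse and the failure-message format are assumptions:

```javascript
// Hypothetical sketch of "expected vs actual" validation; not the server's
// actual implementation. Collects human-readable failure messages.
function validateResponse(expected, actual) {
  const failures = [];
  if (expected.status !== undefined && expected.status !== actual.status) {
    failures.push(`status: expected ${expected.status}, got ${actual.status}`);
  }
  if (expected.contentType !== undefined) {
    const ct = actual.headers['content-type'] || '';
    if (!ct.includes(expected.contentType)) {
      failures.push(`contentType: expected ${expected.contentType}, got ${ct}`);
    }
  }
  // Body comparison: every expected field must match; extra actual fields are ignored.
  for (const [key, value] of Object.entries(expected.body || {})) {
    if (actual.body[key] !== value) {
      failures.push(`body.${key}: expected ${JSON.stringify(value)}, got ${JSON.stringify(actual.body[key])}`);
    }
  }
  return { passed: failures.length === 0, failures };
}

const result = validateResponse(
  { status: 200, body: { name: 'John Doe' } },
  { status: 404, headers: {}, body: { name: 'Jane' } }
);
console.log(result.failures.join('\n'));
```

The key design point is that validation never throws on the first mismatch; it accumulates every difference so a single report can show all failures at once.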
🌐 Chat Modes (AI Agents)
When you install with --agents, you get 3 intelligent testing assistants:
1. 🌐 @api-planner - The Strategist
What it does: Analyzes your API and creates comprehensive test strategies
Best for:
- Starting new testing projects
- Documenting API behavior
- Creating test scenarios for complex workflows
- Security and edge case planning
Example: Analyze a Real API
1. Select the "🌐 api-planner" chatmode from the GitHub Copilot Chat window and enter the following prompt:
2. Analyze this API and create a comprehensive test plan:
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
3. Agent: I'll call the API planner once to analyze the OpenAPI schema at the provided URL and generate a comprehensive test plan with realistic sample data plus live validation of 5 endpoints; expected outcome: a saved test plan in ./api-test-reports/fakerest-test-plan.md and a validation summary. I'll run the planner now.
4. When prompted, allow the tool call "Run api_planner (democratize-quality MCP Server)".
What you get:
- Comprehensive markdown test plan with 40+ test scenarios
- Realistic sample requests for each endpoint
- Expected response structures
- Error scenarios and edge cases
- Ready for test generation
2. 🌐 @api-generator - The Builder
What it does: Converts test plans into executable code
Best for:
- Generating Jest test suites
- Creating Playwright automation scripts
- Building Postman collections
Example: Generate Tests from Plan
1. Once the test plan has been generated in API Planner mode, switch the chatmode to "🌐 api-generator", open the test plan Markdown file created in the previous step (so that it is in context), and enter the following prompt:
Create Playwright tests in TypeScript for the Books API section
2. Agent: I'm going to detect the project's test framework and language (a required first step) so I can generate Playwright TypeScript tests in the correct layout; expected outcome: a detection result with the suggested framework/language. The agent then asks you to allow another tool, "Run api_project_setup (democratize-quality MCP Server)", to capture the project info. If your project is empty, it asks you to select a framework and language; if your project already uses Playwright with TypeScript, it generates Playwright tests accordingly.
3. After you answer, the agent invokes the tool "Run api_generator (democratize-quality MCP Server)", which actually generates your tests; click "Allow".
Created files:
- ✅ A test file with Books CRUD operations
- ✅ On-screen instructions for running the tests
Tests include:
- ✅ GET /api/v1/Books - List all books
- ✅ GET /api/v1/Books/{id} - Get book by ID
- ✅ POST /api/v1/Books - Create new book
- ✅ PUT /api/v1/Books/{id} - Update book
- ✅ DELETE /api/v1/Books/{id} - Delete book
- ✅ Error handling for invalid IDs
- ✅ Validation for required fields
Example: Generate Postman Collection
You: @🌐 api-generator create a Postman collection from the entire test plan
Agent: Creating Postman collection for Fake REST API...
Generated:
✅ postman/FakeRestAPI.postman_collection.json
✅ postman/environment.json (with base URL variables)
Sample Collection includes:
- 📁 Books (5 requests)
- 📁 Users (4 requests)
- 📁 Authors (4 requests)
- 📁 Activities (3 requests)
- 📁 CoverPhotos (4 requests)
Import into Postman and start testing!
3. 🌐 @api-healer - The Fixer
What it does: Automatically diagnoses and repairs failing tests
Best for:
- Fixing broken tests after API changes
- Updating endpoint URLs
- Resolving authentication issues
- Schema mismatch corrections
Example: Fix Failing Tests
You: @🌐 api-healer the Books tests are failing with 404 errors.
Please analyze and fix them.
Agent: Analyzing test failures in tests/books.spec.ts...
Issues found:
❌ Endpoint changed: /api/Books → /api/v1/Books
❌ Response schema updated: added "pageCount" field
❌ ID format changed: number → string
Applying fixes:
✅ Updated all endpoint URLs to include /v1/ prefix
✅ Updated response assertions to include pageCount
✅ Fixed ID type in test data (42 → "42")
✅ Backed up original file to tests/books.spec.ts.backup
Re-running tests... ✅ All 15 tests now passing!
Example: Fix Schema Mismatches
You: @🌐 api-healer fix assertion errors in tests/users.spec.ts
Agent: Analyzing assertion failures...
Found 3 schema mismatches:
❌ Expected userName, API returns username (camelCase issue)
❌ Password field removed from response (security update)
❌ New field added: profileImageUrl
Healing actions:
✅ Updated assertions: userName → username
✅ Removed password field validation
✅ Added profileImageUrl to expected response
✅ Updated TypeScript interfaces
Tests healed successfully! ✅ 12/12 passing
💡 Usage Examples - E2E Testing Scenarios
Scenario 1: Testing the Fake REST API (Complete Workflow)
Real-world example using: https://fakerestapi.azurewebsites.net
Step 1: Plan your tests
@🌐 api-planner analyze the OpenAPI spec at
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
and create a comprehensive test plan focusing on Books and Authors endpoints
What happens:
- ✅ Agent analyzes the Fake REST API schema
- ✅ Identifies 5 resource types (Books, Authors, Users, Activities, CoverPhotos)
- ✅ Creates 40+ test scenarios covering:
- Happy paths (GET all books, GET book by ID, CREATE book)
- Error cases (404 for invalid IDs, 400 for bad data)
- Edge cases (empty lists, ID boundaries)
- Data validation (response schema checks)
- ✅ Generates realistic sample data:
{
  "id": 1,
  "title": "The Great Gatsby",
  "description": "A classic American novel",
  "pageCount": 180,
  "excerpt": "In my younger and more vulnerable years...",
  "publishDate": "1925-04-10T00:00:00Z"
}
Step 2: Generate executable tests
@🌐 api-generator create Playwright tests in TypeScript from the test plan,
focusing on the Books API section
What you get:
tests/books.spec.ts - Complete Books CRUD test suite:
import { test, expect } from '@playwright/test';

test.describe('Books API', () => {
  test('GET /api/v1/Books - should return all books', async ({ request }) => {
    const response = await request.get('https://fakerestapi.azurewebsites.net/api/v1/Books');
    expect(response.ok()).toBeTruthy();
    expect(response.status()).toBe(200);
    const books = await response.json();
    expect(Array.isArray(books)).toBeTruthy();
  });

  test('POST /api/v1/Books - should create a new book', async ({ request }) => {
    const newBook = { id: 201, title: "Test Book", pageCount: 350 };
    const response = await request.post('https://fakerestapi.azurewebsites.net/api/v1/Books', {
      data: newBook
    });
    expect(response.status()).toBe(200);
  });
});
Step 3: Run and heal tests
# Run tests
npx playwright test tests/books.spec.ts
# If any tests fail due to API changes:
@🌐 api-healer fix the failing tests in tests/books.spec.ts
What healer fixes:
- ✅ Endpoint URL updates (if API version changes)
- ✅ Response schema corrections (new/removed fields)
- ✅ Data type adjustments (string vs number IDs)
- ✅ Status code updates
Scenario 2: Quick API Documentation with Postman
Goal: Generate a ready-to-use Postman collection for the Fake REST API
Step 1:
@🌐 api-planner create test plan from
https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json
with validation enabled to test actual endpoints
Step 2:
@🌐 api-generator create Postman collection from the test plan
with all endpoints and example requests
Step 3:
Import the generated collection into Postman and start testing!
What you get:
- 📦 postman/FakeRestAPI.postman_collection.json - Complete collection
- 🌍 postman/environment.json - Environment variables
- ✅ 20+ pre-configured requests across all resource types
- 📝 Example request bodies with realistic data
- ✔️ Response validation tests included
Result: Professional Postman collection ready to share with your team
🔧 Calling Individual Tools
When to Use Individual Tools
Use agents (@api-planner, @api-generator, @api-healer) when:
- You want AI assistance and recommendations
- You're working on complex, multi-step workflows
- You need explanations and best practices
Use individual tools when:
- You need precise control over parameters
- You're automating tests in CI/CD
- You're integrating with other tools
- You want to script repetitive tasks
Tool 1: api_planner
Purpose: Analyze API schemas and generate comprehensive test plans with realistic sample data
Basic usage:
// In Claude Desktop or MCP client
{
"tool": "api_planner",
"parameters": {
"schemaUrl": "https://api.example.com/swagger.json",
"schemaType": "openapi",
"outputPath": "./test-plan.md"
}
}
With endpoint validation:
{
"tool": "api_planner",
"parameters": {
"schemaUrl": "https://petstore3.swagger.io/api/v3/openapi.json",
"schemaType": "openapi",
"outputPath": "./api-test-plan.md",
"includeAuth": true,
"includeSecurity": true,
"includeErrorHandling": true,
"testCategories": ["functional", "security", "edge-cases"],
"validateEndpoints": true,
"validationSampleSize": 3,
"validationTimeout": 5000
}
}
Parameters explained:
- schemaUrl - URL to fetch API schema (OpenAPI/Swagger, GraphQL introspection endpoint)
- schemaContent - Direct schema content as JSON/YAML string (alternative to schemaUrl)
- schemaType - Type of schema: openapi, swagger, graphql, auto (default: auto)
- apiBaseUrl - Base URL of the API (overrides schema baseUrl if provided)
- includeAuth - Include authentication testing scenarios (default: true)
- includeSecurity - Include security testing scenarios (default: true)
- includeErrorHandling - Include error handling scenarios (default: true)
- outputPath - File path to save test plan (default: ./api-test-plan.md)
- testCategories - Types of tests: functional, security, performance, integration, edge-cases
- validateEndpoints - Make actual API calls to validate endpoints (default: false)
- validationSampleSize - Number of endpoints to validate, use -1 for all (default: 3)
- validationTimeout - Timeout per validation request in ms (default: 5000)
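When schemaType is left at auto, detection plausibly keys off well-known top-level fields in the schema document. The following is a hedged sketch of such a heuristic; the function name detectSchemaType and the exact rules are assumptions, not the planner's actual code:

```javascript
// Hypothetical sketch of schemaType auto-detection; the real planner's
// heuristic may differ. Keys off well-known top-level schema fields.
function detectSchemaType(schema) {
  if (schema.openapi) return 'openapi';   // e.g. "openapi": "3.0.0"
  if (schema.swagger) return 'swagger';   // e.g. "swagger": "2.0"
  // GraphQL introspection results carry a __schema object (sometimes under "data").
  if (schema.__schema || (schema.data && schema.data.__schema)) return 'graphql';
  return 'unknown';
}

console.log(detectSchemaType({ openapi: '3.0.0', paths: {} })); // openapi
```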
Tool 2: api_generator
Purpose: Generate executable tests (Playwright, Jest, Postman) from test plans using AI
Generate Playwright tests:
{
"tool": "api_generator",
"parameters": {
"testPlanPath": "./test-plan.md",
"outputFormat": "playwright",
"outputDir": "./tests",
"includeSetup": true,
"language": "typescript"
}
}
Generate Jest tests:
{
"tool": "api_generator",
"parameters": {
"testPlanPath": "./test-plan.md",
"outputFormat": "jest",
"outputDir": "./tests",
"testFramework": "jest",
"baseUrl": "https://api.example.com"
}
}
Generate Postman collection:
{
"tool": "api_generator",
"parameters": {
"testPlanPath": "./test-plan.md",
"outputFormat": "postman",
"outputDir": "./postman",
"includeAuth": true
}
}
Generate all formats:
{
"tool": "api_generator",
"parameters": {
"testPlanPath": "./test-plan.md",
"outputFormat": "all",
"outputDir": "./tests",
"language": "typescript"
}
}
Parameters explained:
- testPlanPath - Path to test plan markdown file
- testPlanContent - Direct test plan content as markdown (alternative to testPlanPath)
- outputFormat - Format: playwright, postman, jest, all (default: all)
- outputDir - Directory to save generated tests (default: ./tests)
- sessionId - Session ID for tracking generated tests
- includeAuth - Include authentication setup (default: true)
- includeSetup - Include test setup/teardown code (default: true)
- testFramework - Framework: jest, mocha, playwright-test
- baseUrl - Base URL for API (optional)
- language - Language: javascript, typescript (default: typescript)
Tool 3: api_healer
Purpose: Debug and automatically fix failing API tests
Fix specific test file:
{
"tool": "api_healer",
"parameters": {
"testPath": "./tests/auth.test.js",
"testType": "auto",
"autoFix": true,
"backupOriginal": true
}
}
Fix multiple test files:
{
"tool": "api_healer",
"parameters": {
"testFiles": ["./tests/auth.test.js", "./tests/users.test.js"],
"testType": "playwright",
"healingStrategies": ["schema-update", "endpoint-fix", "auth-repair"]
}
}
Analysis only (no fixes):
{
"tool": "api_healer",
"parameters": {
"testPath": "./tests/api.test.js",
"analysisOnly": true
}
}
Parameters explained:
- testPath - Path to a specific test file to heal
- testFiles - Array of test file paths (alternative to testPath)
- testType - Type: jest, playwright, postman, auto (default: auto)
- sessionId - Session ID for tracking healing process
- maxHealingAttempts - Max attempts per test (default: 3)
- autoFix - Automatically apply fixes (default: true)
- backupOriginal - Create backup files (default: true)
- analysisOnly - Only analyze without fixing (default: false)
- healingStrategies - Specific strategies: schema-update, endpoint-fix, auth-repair, data-correction, assertion-update
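As an illustration of what an endpoint-fix strategy might do, here is a hedged sketch that rewrites stale endpoint paths in test source code (as in the /api/Books → /api/v1/Books example earlier). The function fixEndpoints is hypothetical; the real healer analyzes the actual failures before applying a strategy:

```javascript
// Hypothetical sketch of an "endpoint-fix" healing strategy: rewrite every
// occurrence of a stale endpoint path in test source. Not the healer's real code.
function fixEndpoints(source, oldPath, newPath) {
  // Escape regex metacharacters in the old path, then replace all occurrences.
  const escaped = oldPath.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return source.replace(new RegExp(escaped, 'g'), newPath);
}

const before = "await request.get('/api/Books'); await request.get('/api/Books/1');";
console.log(fixEndpoints(before, '/api/Books', '/api/v1/Books'));
// await request.get('/api/v1/Books'); await request.get('/api/v1/Books/1');
```

Note that paths like /api/Books/1 are updated too, because the replacement preserves whatever follows the matched prefix.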
Tool 4: api_project_setup
Purpose: Detect project configuration for test generation
Detect project configuration:
{
"tool": "api_project_setup",
"parameters": {
"outputDir": "./tests"
}
}
What it does:
- Scans for playwright.config.ts/js, jest.config.ts/js, tsconfig.json
- Auto-detects framework (Playwright/Jest) and language (TypeScript/JavaScript)
- Returns configuration or prompts user if ambiguous
- Must be called BEFORE api_generator for optimal test generation
Response (when auto-detected):
{
"success": true,
"autoDetected": true,
"config": {
"framework": "playwright",
"language": "typescript",
"hasTypeScript": true,
"hasPlaywrightConfig": true,
"configFiles": ["playwright.config.ts", "tsconfig.json"]
},
"nextStep": "Call api_generator with outputFormat: 'playwright' and language: 'typescript'"
}
Response (when user input needed):
{
"success": true,
"needsUserInput": true,
"prompts": [
{
"name": "framework",
"question": "Which test framework would you like to use?",
"choices": ["playwright", "jest", "postman", "all"]
},
{
"name": "language",
"question": "Which language would you like to use?",
"choices": ["typescript", "javascript"]
}
]
}
Parameters:
- outputDir - Directory for tests (default: ./tests). Used to locate project root
- promptUser - Force user prompts even if config detected (default: false)
Tool 5: api_request
Purpose: Execute individual HTTP requests with validation
Simple GET request:
{
"tool": "api_request",
"parameters": {
"method": "GET",
"url": "https://api.example.com/users/1",
"expect": {
"status": 200,
"contentType": "application/json"
}
}
}
POST with authentication:
{
"tool": "api_request",
"parameters": {
"method": "POST",
"url": "https://api.example.com/users",
"headers": {
"Authorization": "Bearer your-token-here",
"Content-Type": "application/json"
},
"data": {
"name": "John Doe",
"email": "[email protected]"
},
"expect": {
"status": 201,
"body": {
"name": "John Doe",
"email": "[email protected]"
}
}
}
}
Request chaining (use response in next request):
{
"tool": "api_request",
"parameters": {
"sessionId": "user-workflow",
"chain": [
{
"name": "create_user",
"method": "POST",
"url": "https://api.example.com/users",
"data": { "name": "Jane Doe" },
"extract": { "userId": "id" }
},
{
"name": "get_user",
"method": "GET",
"url": "https://api.example.com/users/{{ create_user.userId }}",
"expect": { "status": 200 }
}
]
}
}
Parameters explained:
- method - HTTP method (GET, POST, PUT, DELETE, PATCH, etc.)
- url - Target endpoint
- headers - Custom headers (authentication, content-type)
- data - Request body for POST/PUT/PATCH
- expect - Validation rules (status, headers, body)
- sessionId - Group related requests
- chain - Execute multiple requests in sequence
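The {{ create_user.userId }} placeholder in the chain example suggests simple template substitution from values extracted by earlier steps. A hedged sketch of how such interpolation might work (interpolate is a hypothetical name; not the server's actual implementation):

```javascript
// Hypothetical sketch of request-chain interpolation: replace {{ step.key }}
// placeholders with values extracted from earlier responses in the chain.
function interpolate(template, extracted) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, ref) => {
    const [step, key] = ref.split('.');
    const value = extracted[step] && extracted[step][key];
    return value !== undefined ? String(value) : match; // leave unknown refs intact
  });
}

const extracted = { create_user: { userId: 42 } };
console.log(interpolate('https://api.example.com/users/{{ create_user.userId }}', extracted));
// https://api.example.com/users/42
```

Leaving unresolved placeholders intact (rather than substituting an empty string) makes a bad reference visible in the failed request's URL instead of silently producing a malformed call.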
Tool 6: api_session_status
Purpose: Check status of API test sessions
Check session progress:
{
"tool": "api_session_status",
"parameters": {
"sessionId": "user-workflow"
}
}
Response example:
{
"sessionId": "user-workflow",
"totalRequests": 5,
"successfulRequests": 4,
"failedRequests": 1,
"status": "completed",
"startTime": "2025-10-23T10:00:00Z",
"endTime": "2025-10-23T10:05:00Z",
"duration": "5 minutes"
}
Tool 7: api_session_report
Purpose: Generate comprehensive HTML test reports
Generate report:
{
"tool": "api_session_report",
"parameters": {
"sessionId": "user-workflow",
"outputPath": "./reports/user-workflow-report.html",
"includeCharts": true
}
}
What you get:
- 📊 Visual charts (success rate, response times)
- 📋 Request/response details
- ✅ Validation results (pass/fail with diffs)
- 🕐 Timing information
- 📸 Screenshots (if applicable)
⚙️ Configuration
Environment Variables
NODE_ENV=production # Run in production mode (default)
OUTPUT_DIR=./api-test-reports # Where to save reports (default)
MCP_FEATURES_ENABLEDEBUGMODE=true # Enable detailed logging
Command-Line Options
npx @democratize-quality/mcp-server [options]
Options:
--help, -h Show help information
--version, -v Display version number
--agents Install AI testing agents for GitHub Copilot
--debug Enable debug mode with detailed logs
--verbose Show detailed installation output
--port <number>  Specify server port (if needed)
VS Code MCP Configuration
After running --agents, this file is created at .vscode/mcp.json:
{
"servers": {
"democratize-quality": {
"type": "stdio",
"command": "npx",
"args": ["@democratize-quality/mcp-server"],
"cwd": "${workspaceFolder}",
"env": {
"NODE_ENV": "production",
"OUTPUT_DIR": "./api-test-reports"
}
}
}
}
Output Directory Locations
- VS Code/Local Projects: ./api-test-reports in your project
- Claude Desktop: ~/.mcp-browser-control in your home directory
- Custom Location: Set the OUTPUT_DIR environment variable
🔧 Troubleshooting
Agent Installation Issues
Problem: Agents don't appear in GitHub Copilot Chat
Solution:
- Restart VS Code completely
- Verify .vscode/mcp.json exists in your project root
- Check that the GitHub Copilot extension is installed and active
- Reload the workspace: Cmd/Ctrl + Shift + P → "Developer: Reload Window"
Problem: The MCP server fails to start or agents don't respond
Solution:
- Test the server: npx @democratize-quality/mcp-server --help
- Check the .vscode/mcp.json configuration
- Try manual installation of the chatmode files
- Ensure Node.js 14+ is installed
Problem: The installer changed an existing .vscode/mcp.json
Solution:
- Automatic backups are created: .vscode/mcp.json.backup.{timestamp}
- The installer safely merges configurations
- To restore: copy the backup file back to mcp.json
Problem: Permission errors during installation
Solution:
- Check directory permissions for .github/ and .vscode/
- Run with elevated permissions if needed: sudo npx @democratize-quality/mcp-server --agents
- Ensure write access to the project directory
Common Runtime Issues
Problem: Can't find generated reports
Solution:
- Check the configured OUTPUT_DIR environment variable
- For Claude Desktop: look in ~/.mcp-browser-control/
- For VS Code: look in ./api-test-reports/ in your project
- Set a custom location: OUTPUT_DIR=/your/custom/path
Problem: The server doesn't work in Claude Desktop
Solution:
- Test package availability: npx @democratize-quality/mcp-server --help
- Check Claude Desktop logs for detailed errors
- Verify the configuration in claude_desktop_config.json
- Try running in a terminal first to isolate issues
Problem: Test validation failures are hard to interpret
Solution:
- Enhanced validation automatically shows "expected vs actual"
- Check generated HTML reports for detailed comparisons
- Enable debug mode: MCP_FEATURES_ENABLEDEBUGMODE=true
- Review session logs with the api_session_status tool
Getting Help
Enable debug mode for detailed logging:
# CLI
npx @democratize-quality/mcp-server --debug
# Environment variable
MCP_FEATURES_ENABLEDEBUGMODE=true node mcpServer.js
Log levels:
- Production: Essential startup messages and errors only
- Debug: Detailed request/response logs and validation details
📚 Additional Resources
Documentation
- 📖 Getting Started Guide - Complete setup walkthrough
- 🔧 Tool Reference - Detailed tool documentation
- 🎯 API Tools Usage Guide - Advanced examples and patterns
- 🔷 GraphQL Support Guide - GraphQL testing capabilities and features
- 💻 Developer Guide - Extend the server
- ⚙️ Configuration Guide - Advanced settings
- 💡 Examples - Real-world usage examples
Quick Links
| Resource | Description |
|----------|-------------|
| GitHub Repository | Source code and issues |
| Discussions | Community Q&A |
| Issue Tracker | Bug reports and feature requests |
| NPM Package | Package information |
🏗️ Architecture
Project Structure
democratize-quality-mcp-server/
├── src/
│ ├── tools/
│ │ ├── api/ # API testing tools
│ │ │ ├── api-planner.js
│ │ │ ├── api-generator.js
│ │ │ ├── api-healer.js
│ │ │ ├── api-project-setup.js
│ │ │ ├── api-request.js
│ │ │ ├── api-session-status.js
│ │ │ └── api-session-report.js
│ │ └── base/ # Tool framework
│ ├── chatmodes/ # AI agent definitions
│ │ ├── 🌐 api-planner.chatmode.md
│ │ ├── 🌐 api-generator.chatmode.md
│ │ └── 🌐 api-healer.chatmode.md
│ ├── config/ # Configuration management
│ └── utils/ # Utility functions
├── docs/ # Documentation
├── mcpServer.js # Main MCP server
└── cli.js               # Command-line interface
Key Features
- 🔍 Automatic Tool Discovery: Tools are automatically loaded and registered
- ⚙️ Configuration System: Environment-based config with sensible defaults
- 🛡️ Error Handling: Comprehensive validation and detailed error reporting
- 📊 Session Management: Track and manage multi-step API test workflows
🔒 Security Considerations
Security Posture
- ✅ API-Only Mode: Enabled by default for secure deployments
- ✅ Standard HTTP Libraries: All requests use trusted Node.js libraries
- ✅ No File System Access: API tools only write to configured output directory
- ✅ No Browser Automation: No browser processes in API-only mode
Production Deployment Best Practices
{
"mcpServers": {
"democratize-quality": {
"command": "npx",
"args": ["@democratize-quality/mcp-server"],
"env": {
"NODE_ENV": "production",
"OUTPUT_DIR": "~/api-test-reports"
}
}
}
}
Security Recommendations
- 📁 Secure Output Directory: Set appropriate permissions on report directories
- 🔄 Regular Updates: Keep the package updated for security patches
- 🌍 Environment Separation: Use different configs for dev vs production
- 📊 Monitoring: Enable debug mode during initial deployment to monitor usage
- 🔑 API Keys: Never commit API keys or tokens to version control
- 🌐 Network Security: Use HTTPS endpoints for production API testing
👨‍💻 Development & Contributing
Adding New Tools
- Create a tool file in src/tools/api/
- Extend the ToolBase class
- Define the tool schema and implementation
- Tools are automatically discovered!
Example tool structure:
const ToolBase = require('../base/ToolBase');
class MyApiTool extends ToolBase {
static definition = {
name: "my_api_tool",
description: "Performs custom API testing operations",
input_schema: {
type: "object",
properties: {
endpoint: { type: "string", description: "API endpoint URL" }
},
required: ["endpoint"]
}
};
async execute(parameters) {
// Implementation
return {
success: true,
data: { /* results */ }
};
}
}
module.exports = MyApiTool;
Running Tests
# Run test suite
npm test
# Run MCP inspector for development
npm run inspector
# Start server in debug mode
npm run dev
Contributing Guidelines
- 🍴 Fork the repository
- 🌿 Create a feature branch (git checkout -b feature/amazing-feature)
- ✅ Add tests for your changes
- 📝 Update documentation
- 🔍 Ensure tests pass (npm test)
- 📤 Submit a pull request
What to include in PRs:
- Clear description of changes
- Test coverage for new features
- Updated documentation
- Examples of usage (if applicable)
📄 License
This project is licensed under the ISC License - see the LICENSE file for details.
TL;DR: Free to use, modify, and distribute. No warranty. Use at your own risk.
🙏 Acknowledgments
Built with the Model Context Protocol framework.
Special thanks to the MCP community and all contributors!
Ready to democratize quality through intelligent API testing! 🎯
Made with ❤️ by Raj Uppadhyay
⭐ Star on GitHub • 📦 View on NPM • 🐛 Report Bug • 💡 Request Feature