playwright-mcp-yaml-test
v1.2.1
YAML-based Playwright MCP testing framework with data-driven CSV support for Gemini CLI
Playwright MCP YAML (Democratize Quality)
Overview
This Node.js test automation framework provides a YAML-based approach to defining, organizing, and executing both UI and API tests using multiple specialized MCP servers. The framework supports modular test design through reusable step libraries, comprehensive test case definitions, and organized test suites with AI-powered test execution.
🔧 MCP Server Architecture: Due to the recent changes in Playwright MCP (where the MCP code has been merged into the Playwright monorepo), this release now uses a specialized multi-server approach for easier maintenance:
- Official Playwright MCP - For UI/browser testing
- Democratize-Quality MCP - For API testing and validation
- Artillery MCP - For performance testing
🚀 Key Features
- 🎯 YAML-based test definitions - Easy to write and maintain for both UI and API
- 🌐 Dual Testing Support - UI browser automation + API endpoint testing
- 🤖 AI-powered test execution - Leverages multiple specialized MCP servers with Gemini CLI
- 📊 Enhanced HTML reports - Comprehensive test results with screenshots and API reports
- 🔧 Multi-environment support - Flexible environment configurations
- 📸 Automatic artifacts - Screenshots, traces, and API session reports
- 🧪 Multi-language test generation - Generate tests in TypeScript, JavaScript, Python, C#, Java
- 🏗️ CI/CD ready - GitHub Actions workflow included
- 🐙 GitHub integration - Automated PR comments and Pages deployment
- 🆕 API Testing Support - Full REST API testing with request/response validation
- 🆕 Enhanced UI Reports - Rich visual reports with screenshot galleries
- 🆕 Multi-language Code Generation - Auto-generate Playwright tests in 5+ languages
- 🆕 Session Management - Maintain API session context across requests
- 🆕 Advanced Validation - JSON schema validation, status codes, headers
✨ NEW Features
- 🆕 Data-Driven Testing - Execute tests with CSV data sources for parameterized testing
- 🔧 Specialized MCP Architecture - Updated to use official Playwright MCP for UI, democratize-quality MCP for API, and artillery-mcp for performance testing
🆕 Data-Driven Testing Capabilities
The framework now supports Data-Driven Testing with CSV data sources, allowing you to execute the same test logic with multiple sets of test data. This powerful feature enables parameterized testing, reducing test maintenance while increasing test coverage.
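Conceptually, the runner turns each CSV row into one iteration's set of test variables. A minimal Node.js sketch of that idea (illustrative only, not the framework's actual parser; it ignores quoted fields and escaping):

```javascript
// Parse simple CSV text into one object per data row, keyed by header names.
// Each resulting object supplies the variables for one test iteration.
function parseCsv(text) {
  const [headerLine, ...rows] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map((row) => {
    const values = row.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}
```

For the login data file below, each parsed row would yield keys like `username`, `password`, `expected_result`, and `test_scenario`, which the `column_mapping` block then maps onto test variables.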
Key Benefits
- 📊 CSV/JSON Support - Use external data files to drive test execution
- 🔄 Iterative Execution - Run tests multiple times with different data sets
- 🎯 Variable Mapping - Map data columns to test variables seamlessly
- 📈 Enhanced Reports - Detailed reports showing results for each data iteration
- 🔧 Backward Compatible - Existing tests continue to work without changes
- 🏷️ Flexible Configuration - Control iterations, failure handling, and data mapping
Quick Example: Data-Driven Login Test
CSV Data File (data/login-users.csv):
username,password,expected_result,test_scenario
standard_user,secret_sauce,success,Valid login with standard user
locked_out_user,secret_sauce,locked,Locked user login attempt
invalid_user,wrong_password,failed,Invalid credentials test
Data-Driven Test Case (test-cases/data-driven-login.yml):
name: "Data-Driven Login Test"
description: "Test login functionality with multiple user credentials from CSV"
tags:
- data-driven
- login
- regression
type: "ui"
environment_variables:
- BASE_URL
data_source:
type: "csv"
file: "data/login-users.csv"
column_mapping:
username: "TEST_USERNAME"
password: "TEST_PASSWORD"
expected_result: "EXPECTED_RESULT"
iterations:
max_iterations: 3
stop_on_failure: false
steps:
- include: "navigation"
- "Fill username field with {{TEST_USERNAME}}"
- "Fill password field with {{TEST_PASSWORD}}"
- "Click login submit button"
- "Verify login result matches {{EXPECTED_RESULT}}"
- include: "cleanup"
Execution Results:
🚀 Running Data-Driven Test: Data-Driven Login Test
📊 Data source: data/login-users.csv (3 rows)
✓ Iteration 1/3: Valid login with standard user - PASSED
✓ Iteration 2/3: Locked user login attempt - PASSED
✓ Iteration 3/3: Invalid credentials test - PASSED
✅ Data-driven test completed: 3/3 iterations passed
Advanced Data-Driven Features
JSON Data Support
data_source:
type: "json"
file: "data/api-test-data.json"
column_mapping:
user_id: "API_USER_ID"
endpoint: "API_ENDPOINT"
Iteration Control
data_source:
iterations:
max_iterations: 10 # Limit iterations
stop_on_failure: true # Stop on first failure
Variable Precedence
Data variables automatically override environment variables during test execution:
- CSV Data (highest priority)
- Environment Variables
- Default Values (lowest priority)
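This precedence, combined with the `{{VARIABLE_NAME}}` interpolation used in step descriptions, can be sketched in a few lines of Node.js (function names here are illustrative, not the framework's API):

```javascript
// Precedence as described above: defaults < environment variables < CSV data.
// Later spreads override earlier ones.
function resolveVariables(defaults, envVars, dataRow) {
  return { ...defaults, ...envVars, ...dataRow };
}

// Substitute {{VARIABLE_NAME}} placeholders in a step description.
// Unknown placeholders are left untouched.
function interpolate(step, variables) {
  return step.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}
```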
Directory Structure for Data-Driven Tests
project-root/
├── data/                          # Data files directory
│   ├── login-users.csv            # Login test data
│   └── product-catalog.csv        # E-commerce test data
└── test-cases/
    ├── data-driven-login.yml      # Data-driven login test
    └── data-driven-api-tests.yml  # Data-driven API test
Enhanced Reporting
Data-driven tests generate comprehensive reports showing:
- Iteration Summary: Pass/fail status for each data row
- Data Context: Which data was used for each iteration
- Failure Analysis: Detailed error information per iteration
- Performance Metrics: Execution time per iteration
- Data Validation: CSV parsing and mapping validation results
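As a rough model, a report like this can be built by aggregating per-iteration results; the field names below are assumptions for illustration, not the framework's actual report schema:

```javascript
// Build an iteration summary from per-iteration results.
// Each result is assumed to carry a status, its data row, and a duration.
function summarizeIterations(iterations) {
  const passed = iterations.filter((it) => it.status === 'PASSED').length;
  return {
    total: iterations.length,
    passed,
    failed: iterations.length - passed,
    rows: iterations.map((it, i) => ({
      iteration: `${i + 1}/${iterations.length}`,
      scenario: it.data.test_scenario, // data context for this iteration
      status: it.status,
      durationMs: it.durationMs,       // performance metric per iteration
    })),
  };
}
```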
CLI Support for Data-Driven Testing
# Run a data-driven test case
playwright-mcp-yaml-tester --test-case test-cases/data-driven-login.yml
Iteration limits and failure handling are controlled by the iterations settings in the test case's data_source block (shown above) rather than by separate CLI flags.
📖 Detailed Documentation: For complete data-driven testing setup, advanced configuration options, troubleshooting, and best practices, refer to the Data-Driven Testing Documentation.
🆕 Performance Testing Capabilities
The framework now supports basic performance testing using Artillery MCP integration. You can define performance test scenarios in YAML, simulate multiple users, and generate enhanced reports with actionable recommendations.
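For context, the scenario those natural-language steps describe maps roughly onto a standard Artillery configuration (config.target, config.phases with duration/arrivalRate, scenarios[].flow). The sketch below builds such a config as a plain Node.js object; how the Artillery MCP server actually generates its scenarios may differ:

```javascript
// Sketch of an Artillery-style config for the JSONPlaceholder scenario.
// durationSeconds and usersPerSecond mirror TEST_DURATION and USERS_PER_SECOND.
function buildArtilleryConfig(targetUrl, durationSeconds, usersPerSecond) {
  return {
    config: {
      target: targetUrl,
      // One constant-arrival phase: usersPerSecond new virtual users per second.
      phases: [{ duration: durationSeconds, arrivalRate: usersPerSecond }],
    },
    scenarios: [{
      flow: [
        { get: { url: '/posts' } },                                  // retrieve posts
        { post: { url: '/posts', json: { title: 'hello' } } },       // create a post
        { get: { url: '/posts/1' } },                                // retrieve one post
        { put: { url: '/posts/1', json: { title: 'updated' } } },    // update a post
      ],
    }],
  };
}
```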
Example: Performance Test YAML
name: "JSONPlaceholder API Performance Test"
description: "Test how well the JSONPlaceholder website handles multiple users at the same time"
type: "performance"
tags:
- performance
- api
- simple
environment_variables:
- PERF_TEST_URL
- TEST_DURATION
- USERS_PER_SECOND
steps:
- "Create Artillery Performance Test Scenario for the website at {{PERF_TEST_URL}}"
- "Include GET /posts endpoint (retrieve posts) in the scenario"
- "Include POST /posts endpoint (create a new post) in the scenario"
- "Include GET /posts/1 endpoint (retrieve a specific post) in the scenario"
- "Include PUT /posts/1 endpoint (update a specific post) in the scenario"
- "Set Test Duration as {{TEST_DURATION}} seconds"
- "Simulate {{USERS_PER_SECOND}} new users every second"
- "Run the Artillery tests with enhanced reporting"
- "Analyze the results and provide stakeholder-friendly recommendations"
Architecture
🔄 Updated MCP Server Architecture: With the latest Playwright MCP changes (where MCP code has been merged into the Playwright monorepo), this release adopts a specialized multi-server approach for better maintenance and separation of concerns:
MCP Server Distribution:
- Official Playwright MCP (@playwright/mcp) - Handles all UI/browser testing capabilities
- Democratize-Quality MCP (@democratize-quality/mcp-server) - Handles all API testing and validation
- Artillery MCP (@democratize-quality/artillery-performance-mcp-server) - Handles performance testing
Framework Components:
The framework consists of three main components:
- Step Libraries - Reusable test steps that can be shared across multiple test cases
- Test Cases - Individual test scenarios that combine steps to perform specific validations
- Test Suites - Collections of related test cases grouped for execution
This architecture ensures optimal performance, easier maintenance, and clear separation of testing responsibilities.
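A sketch of how a test case's `type` field could route execution to the responsible MCP server (the server keys match the .gemini/settings.json configuration in this README; treating UI as the default type is an assumption for illustration):

```javascript
// Map test types to the MCP server responsible for them.
const MCP_SERVER_FOR_TYPE = {
  ui: 'playwright-mcp',            // official Playwright MCP
  api: 'democratize-api-mcp',      // Democratize-Quality MCP
  performance: 'artillery-mcp',    // Artillery MCP
};

function serverForTestCase(testCase) {
  // Assumption: a test case without an explicit `type` is treated as UI.
  return MCP_SERVER_FOR_TYPE[testCase.type || 'ui'];
}
```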
Pre-Requisites
Before using this test automation framework, ensure you have the following components installed and configured:
1. System Requirements
- Node.js: Version 14 or higher
- npm: Version 6 or higher
- Operating System: Windows, macOS, or Linux
2. Install the playwright-mcp-yaml-test NPM package
npm install -g playwright-mcp-yaml-test
3. ⚠️ IMPORTANT: Update Your Gemini Configuration
With this release, you MUST update your .gemini/settings.json file to use the new MCP server configuration. The previous single-server setup is no longer supported.
🔧 Why this change? Due to recent changes in Playwright MCP (where MCP code has been merged into the Playwright monorepo), managing API support became challenging. This release separates concerns by using:
- Official Playwright MCP for UI testing
- Democratize-Quality MCP server for API testing
- Artillery MCP for performance testing
👉 Please update your .gemini/settings.json file as shown in the configuration section below - this is required for the framework to work properly.
4. Install Gemini CLI
# Install Gemini CLI globally
npm install -g @google/gemini-cli
# Or install using your preferred package manager
# yarn global add @google/gemini-cli
# pnpm add -g @google/gemini-cli
Run the "gemini" command in your terminal or command prompt and follow the instructions to set up and authenticate Gemini CLI. For more details, refer to the official Google Gemini CLI page.
5. Update Gemini Model (by default it uses gemini-2.5-pro)
export GEMINI_MODEL=gemini-2.5-flash
Directory Structure
Create a folder structure in your workspace similar to the example below:
project-root/
├── .gemini/ # Gemini Settings
│ └── settings.json
├── data/ # 🆕 Data files for data-driven testing
│ ├── login-users.csv # Login test data
│ ├── api-test-data.json # API test scenarios
│ └── product-catalog.csv # E-commerce test data
├── steps/ # Step library files
│ ├── login.yml # UI login steps
│ ├── navigation.yml # UI navigation steps
│ ├── cleanup.yml # UI cleanup steps
│ └── api-authentication.yml # API authentication
├── test-cases/ # Individual test case files
│ ├── user-login.yml # UI login test
│ ├── data-driven-login.yml # 🆕 Data-driven login test
│ ├── add-product-to-cart.yml # UI e-commerce test
│ └── api-authentication-flow.yml # API auth & error handling
└── test-suites/                    # Test suite collections
    ├── smoke-tests.yml             # UI smoke tests
    └── api-smoke-tests.yml         # API smoke tests
.gemini/settings.json
🔧 Important Update: With the latest Playwright MCP changes (MCP code merged into Playwright monorepo), this release now uses a specialized multi-server approach for easier maintenance and better separation of concerns.
For configuring the MCP servers in Gemini CLI, add the following contents to the settings.json file:
{
"theme": "GitHub",
"selectedAuthType": "oauth-personal", // use the value matching how you authenticated Gemini
"mcpServers": {
"democratize-api-mcp": {
"command": "npx",
"args": [
"@democratize-quality/mcp-server",
"--api-only"
],
"env": {
"NODE_ENV": "production",
"OUTPUT_DIR": "./reports"
}
},
"playwright-mcp": {
"command": "npx",
"args": [
"@playwright/mcp@latest",
"--user-data-dir",
"gemini-playwright",
"--save-trace",
"--output-dir",
"test-artifacts"
]
},
"artillery-mcp": {
"command": "npx",
"args": [
"@democratize-quality/artillery-performance-mcp-server",
"/path/to/your/yaml-tests"
]
}
},
"autoAccept": true // Lets Gemini auto-accept all actions; set to false if you prefer to confirm each one
}
📋 MCP Server Responsibilities:
- playwright-mcp: Handles all UI/browser interactions (navigate, click, fill, screenshots, etc.)
- democratize-api-mcp: Handles all API testing (requests, responses, validation, session management)
- artillery-mcp: Handles all performance testing (load testing, stress testing, metrics analysis)
With these settings, when you run tests:
- UI tests will use the official Playwright MCP for all browser interactions
- API tests will use the democratize-api-mcp for all API operations
- Performance tests will use artillery-mcp for load testing scenarios
- All artifacts (traces, screenshots, reports) will be saved in the "test-artifacts" folder
🔒 Security Note: Make sure to gitignore the .gemini/settings.json file for best security practices.
Step Libraries
Step libraries contain reusable test steps that can be included in multiple test cases. They promote code reusability and maintainability.
Step Library Syntax
# Step library template
description: "Brief description of the step library"
parameters: # Optional: Environment variables used
- VARIABLE_NAME
- ANOTHER_VARIABLE
steps:
- "Step description with {{VARIABLE_NAME}} interpolation"
- "Another step action"
- "Final step in sequence"
Creating Step Libraries
- Create a new .yml file in the steps/ directory
- Define the library structure with description, parameters, and steps
- Use variable interpolation with {{VARIABLE_NAME}} syntax for dynamic values
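The include mechanism described in the test case sections below can be pictured as simple step splicing. A hedged Node.js sketch (expandSteps is an illustrative name, not the framework's API):

```javascript
// Expand a test case's steps: a step entry is either a plain string or
// { include: "library-name" }, which splices in that library's steps.
function expandSteps(steps, libraries) {
  return steps.flatMap((step) => {
    if (typeof step === 'string') return [step];
    const library = libraries[step.include];
    if (!library) throw new Error(`Unknown step library: ${step.include}`);
    // Recurse so that libraries may themselves include other libraries.
    return expandSteps(library.steps, libraries);
  });
}
```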
Step Library Examples
Create the following files in your steps/ directory:
Navigation Steps (steps/navigation.yml)
description: "Common navigation actions"
parameters:
- BASE_URL
steps:
- "Navigate to {{BASE_URL}}"
- "Wait for page to load completely"
- "Verify page title contains expected text"
Login Steps (steps/login.yml)
description: "User authentication with session management"
parameters:
- BASE_URL
- TEST_USERNAME
- TEST_PASSWORD
steps:
- "Navigate to {{BASE_URL}}"
- "Check if user is already logged in by looking for user menu or dashboard"
- "If not logged in, click login button or link"
- "Fill username field with {{TEST_USERNAME}}"
- "Fill password field with {{TEST_PASSWORD}}"
- "Click login submit button"
- "Wait for login success indicator"
- "Verify user is logged in successfully"
Cleanup Steps (steps/cleanup.yml)
description: "Test cleanup actions"
steps:
- "Clear browser cache if needed"
- "Reset test data if required"
- "Log out user if logged in"
- "Close any open dialogs or modals"
🆕 API Step Library Examples
The framework now supports comprehensive API testing with the following step libraries:
API User Management (steps/api-user-management.yml)
description: "API operations for user management - GET, POST, PUT, DELETE users"
parameters:
- API_BASE_URL
steps:
- "Add header 'x-api-key: reqres-free-v1' to all requests in the session"
- "Make GET request to {{API_BASE_URL}}/api/users?page=2 to fetch users list"
- "Verify response status code is 200"
- "Verify response contains 'data' array with user objects"
- "Verify response contains 'page', 'per_page', 'total' pagination fields"
API User Creation (steps/api-user-creation.yml)
description: "API operations for creating and managing new users"
parameters:
- API_BASE_URL
steps:
- "Add header 'x-api-key: reqres-free-v1' to all requests in the session"
- "Make POST request to {{API_BASE_URL}}/api/users with JSON body: {\"name\": \"morpheus\", \"job\": \"leader\"}"
- "Verify response status code is 201"
- "Verify response contains 'id' field"
- "Verify response contains 'createdAt' timestamp"
- "Verify response 'name' field equals 'morpheus'"
- "Verify response 'job' field equals 'leader'"
API Authentication (steps/api-authentication.yml)
description: "API operations for user authentication - login and registration"
parameters:
- API_BASE_URL
steps:
- "Add header 'x-api-key: reqres-free-v1' to all requests in the session"
- "Make POST request to {{API_BASE_URL}}/api/register with JSON body: {\"email\": \"[email protected]\", \"password\": \"pistol\"}"
- "Verify response status code is 200"
- "Verify response contains 'id' field"
- "Verify response contains 'token' field"
- "Store token from response for subsequent requests"
Test Cases
Test cases define individual test scenarios by combining step libraries and custom steps.
Test Case Syntax
name: "Test Case Name"
description: "Detailed description of what this test validates"
tags: # Optional: Tags for categorization and filtering
- tag1
- tag2
environment_variables: # Optional: Required environment variables
- VARIABLE_NAME
- ANOTHER_VARIABLE
steps:
- include: "step-library-name" # Include entire step library
- "Custom step description" # Individual step
- include: "another-library" # Include another library
Creating Test Cases
- Create a new .yml file in the test-cases/ directory
- Define required fields: name and steps
- Add optional fields: description, tags, environment_variables
- Combine step libraries and custom steps in the steps array
Test Case Example
Create the following files in your test-cases/ directory:
User Login Test (test-cases/user-login.yml)
name: "User Login Test"
description: "Test user authentication functionality"
tags:
- smoke
- login
- authentication
- critical
environment_variables:
- BASE_URL
- TEST_USERNAME
- TEST_PASSWORD
steps:
- include: "navigation"
- include: "login"
- "Verify user dashboard is displayed"
- "Verify user name appears in header"
- include: "cleanup"
🆕 API Test Case Examples
The framework now supports comprehensive API testing. Here are examples of API test cases:
API User Management Test (test-cases/api-user-management.yml)
name: "API User Management Test"
description: "Test GET users endpoint with pagination and user creation via POST"
tags:
- api
- user-management
- reqres
- critical
type: "api"
environment_variables:
- API_BASE_URL
steps:
- include: "api-user-management"
- include: "api-user-creation"
- "Make GET request to {{API_BASE_URL}}/api/users/2 to fetch single user"
- "Verify response status code is 200"
- "Verify response contains user data with id=2"
- "Verify response contains 'support' object with url and text fields"
API Authentication Flow Test (test-cases/api-authentication-flow.yml)
name: "API Authentication Flow Test"
description: "Test user registration, login, and error scenarios with reqres.in API"
tags:
- api
- authentication
- error-handling
- reqres
type: "api"
environment_variables:
- API_BASE_URL
steps:
- include: "api-authentication"
- "Make POST request to {{API_BASE_URL}}/api/login with JSON body: {\"email\": \"[email protected]\", \"password\": \"cityslicka\"}"
- "Verify response status code is 200"
- "Verify response contains 'token' field"
- "Make POST request to {{API_BASE_URL}}/api/login with JSON body: {\"email\": \"peter@klaven\"}"
- "Verify response status code is 400"
- "Verify response contains error message about missing password"
- "Make GET request to {{API_BASE_URL}}/api/users/23 to test non-existent user"
- "Verify response status code is 404"
Add Product to Cart Test (test-cases/add-product-to-cart.yml)
name: "Add Product to Cart Test"
description: "Test adding a product to the shopping cart"
tags:
- smoke
- cart
- add-to-cart
- critical
environment_variables:
- BASE_URL
- TEST_USERNAME
- TEST_PASSWORD
steps:
- include: "navigation"
- include: "login"
- "Verify user dashboard is displayed"
- "Select a product from the catalog"
- "Add the selected product to the cart"
- "Verify product is added to the cart"
- "Verify cart count is updated"
- "Verify cart details are correct"
- include: "cleanup"
Test Suites
Test suites group related test cases for organized execution.
Test Suite Syntax
name: "Test Suite Name"
description: "Description of the test suite purpose"
tags: # Optional: Suite-level tags
- suite-tag1
- suite-tag2
environment: "dev" # Optional: Default environment
test-cases:
- "test-cases/test-case-1.yml"
- "test-cases/test-case-2.yml"
Creating Test Suites
- Create a new .yml file in the test-suites/ directory
- Define required fields: name and test-cases
- List test case file paths in the test-cases array
- Add optional metadata: description, tags, environment
Test Suite Example
Create the following file in your test-suites/ directory:
Smoke Test Suite (test-suites/smoke-tests.yml)
name: "Smoke Test Suite"
description: "Quick smoke tests for critical functionality"
tags:
- smoke
- critical
- fast
environment: "dev"
test-cases:
- "test-cases/user-login.yml"
- "test-cases/add-product-to-cart.yml"
Environment Variables
The framework supports environment-specific configuration through .env files.
Environment File Structure
Create environment files for your environments as shown below. For a trial run you can use the SauceLabs demo website; update the values in the .env file with the URL and credentials listed on that site.
# .env.dev
BASE_URL=https://dev.example.com # Replace with the demo site URL
TEST_USERNAME=testuser # Replace with the username listed on the demo site
TEST_PASSWORD=testpass123 # Replace with the password listed on the demo site
# .env.staging
BASE_URL=https://staging.example.com
TEST_USERNAME=staginguser
TEST_PASSWORD=stagingpass456
# .env.api - API Testing Environment
API_BASE_URL=https://reqres.in
SESSION_TIMEOUT=60000
INCLUDE_API_REPORTS=true
ARTIFACTS_DIR=test-artifacts
REPORT_OUTPUT_DIR=test-reports
🆕 API Test Suite Examples
For API testing, create dedicated test suites:
API Smoke Test Suite (test-suites/api-smoke-tests.yml)
name: "API Smoke Test Suite"
description: "Quick API tests for reqres.in endpoints covering user management and authentication"
tags:
- api
- smoke
- reqres
- fast
environment: "api"
test-cases:
- "test-cases/api-user-management.yml"
- "test-cases/api-authentication-flow.yml"
Variable Interpolation
Use {{VARIABLE_NAME}} syntax in step descriptions to inject environment variables:
steps:
- "Navigate to {{BASE_URL}}/login"
- "Enter username: {{TEST_USERNAME}}"
YAML Validation
The framework includes a validation utility to ensure YAML files are properly structured.
Validation Rules
Step Libraries
- Must have a steps array
- Parameters must be strings (if present)
- File must exist in the steps/ directory
Test Cases
- Required fields: name, steps
- Optional fields: description, tags, environment_variables
- tags must be an array (if present)
- steps must be an array
- Referenced step libraries must exist
Test Suites
- Required fields: name, test-cases
- Optional fields: description, tags, environment
- test-cases must be an array
- Referenced test case files must exist
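Applied to an already-parsed test case object, the rules above amount to a handful of checks. An illustrative Node.js sketch (error messages follow the sample validation output shown later; this is not the validator's actual code):

```javascript
// Check a parsed test case object against the validation rules above.
// Returns a list of error strings; an empty list means the test case is valid.
function validateTestCase(testCase) {
  const errors = [];
  if (!testCase.name) errors.push("Missing required field 'name'");
  if (!Array.isArray(testCase.steps)) errors.push("'steps' must be an array");
  if (testCase.tags !== undefined && !Array.isArray(testCase.tags))
    errors.push("'tags' must be an array");
  return errors;
}
```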
Running Validation
# Validate specific test case
playwright-mcp-yaml-validator --test-case test-cases/user-login.yml
# Validate specific test suite
playwright-mcp-yaml-validator --test-suite test-suites/smoke-tests.yml
# Validate all files
playwright-mcp-yaml-validator --all
Validation Output
✅ All validations passed!
# Or if errors exist:
❌ Validation Errors:
test-cases/user-login.yml: Missing required field 'name'
test-cases/user-login.yml: Step 1 references non-existent library 'nonexistent'
Running Tests
Test Execution Commands
# Run specific UI test case
playwright-mcp-yaml-tester --test-case test-cases/user-login.yml
# Run specific API test case
playwright-mcp-yaml-tester --environment api --test-case test-cases/api-user-management.yml
# Run test suite
playwright-mcp-yaml-tester --test-suite test-suites/smoke-tests.yml
# Run API test suite
playwright-mcp-yaml-tester --environment api --test-suite test-suites/api-smoke-tests.yml
# Run all test cases
playwright-mcp-yaml-tester
# Run tests by type
playwright-mcp-yaml-tester --type ui
playwright-mcp-yaml-tester --type api
playwright-mcp-yaml-tester --type e2e
# Run with specific environment
playwright-mcp-yaml-tester --environment staging --test-case test-cases/user-login.yml
playwright-mcp-yaml-tester --environment api --test-case test-cases/api-authentication-flow.yml
🆕 Enhanced Test Execution Features
Multi-Language Test Generation
The framework automatically generates Playwright test files in multiple languages after execution:
# Generated test files will be available in:
tests/
├── user-login.spec.ts # TypeScript (default)
├── user-login.spec.js # JavaScript
├── test_user_login.py # Python
├── UserLoginTest.cs # C#
└── UserLoginTest.java # Java
Enhanced UI Test Reports
- Screenshot Galleries: Automatic screenshot capture during verification steps
- Visual Comparisons: Before/after action screenshots
- Trace Viewer Integration: Interactive trace files for debugging
- Video Recordings: Full test execution videos (when enabled)
API Test Reports
- Request/Response Details: Complete HTTP transaction logs
- Performance Metrics: Response times and throughput
- Validation Results: Detailed assertion outcomes
- Session Context: API call chains and dependencies
Test Execution Flow
- Load environment configuration
- Parse YAML test files
- Resolve step libraries and interpolate variables
- Generate Gemini prompts for Playwright MCP execution
- Execute tests with visual feedback
- Collect and report results
Test Output
🚀 Running test suite: Smoke Test Suite
✓ Loaded environment: dev
✓ User Login Test
✅ Test execution completed
- ✅ Test results are also generated as a polished HTML report, plus JUnit and JSON formats, in the "test-reports" folder at the project root.
- ✅ Screenshots and traces captured during execution are saved in the "test-artifacts" folder (or the output directory you configured in settings.json when setting up Gemini and the MCP servers).
- ✅ For API tests, a comprehensive HTML report with validation details is saved in the reports folder.
Best Practices
Step Library Design
- Keep step libraries focused on specific functionality
- Use descriptive names for step libraries
- Include parameter documentation
- Make steps atomic and reusable
Test Case Organization
- Use meaningful names and descriptions
- Apply appropriate tags for filtering
- Group related test cases in suites
- Include cleanup steps to maintain test isolation
Environment Management
- Use separate environment files for different stages
- Keep sensitive data in environment variables
- Document required environment variables
- Use consistent naming conventions
File Naming Conventions
- Use kebab-case for file names
- Include descriptive names: user-login.yml, form-validation.yml
- Group related files in appropriate directories
- Use consistent naming patterns
Troubleshooting
Common Issues
Step Library Not Found
- Verify the step library file exists in the steps/ directory
- Check spelling and file extension
Environment Variable Not Resolved
- Ensure the variable is defined in the appropriate .env file
- Check variable name spelling and case sensitivity
Test Case Validation Errors
- Run validation before test execution
- Check required fields are present
- Verify file paths and references
Gemini Execution Failures
- Ensure Gemini CLI is installed and configured
- Check Playwright MCP server is running
- Verify network connectivity and permissions
Debug Mode
Enable detailed logging by setting environment variables:
DEBUG=true node playwright-mcp-yaml-tester --test-case test-cases/user-login.yml
Advanced Features
Conditional Steps
steps:
- "If login form is visible, then fill credentials"
- "Otherwise, verify user is already logged in"
Dynamic Content Handling
steps:
- "Wait for dynamic content to load"
- "Handle loading states gracefully"
- "Verify content appears as expected"
Screenshot Capture
steps:
- "Take screenshot before critical action"
- "Perform action"
- "Take screenshot after action for comparison"
Integration with CI/CD
The framework is designed for integration with continuous integration pipelines:
# Example CI script
npm install
node playwright-mcp-yaml-validator --all
NODE_ENV=ci playwright-mcp-yaml-tester --test-suite test-suites/smoke-tests.yml
Extending the Framework
Adding New Step Libraries
- Create a new .yml file in the steps/ directory
- Follow the step library syntax
- Test with validation utility
- Include in test cases as needed
Custom Test Categories
Use tags to create custom test categories:
tags:
- regression
- api
- ui
- performance
- security
Environment-Specific Configurations
Create environment-specific test suites:
# test-suites/staging-full.yml
name: "Staging Full Test Suite"
environment: "staging"
test-cases:
- "test-cases/user-login.yml"
- "test-cases/api-integration.yml"
Security Considerations
API Key Management
- Never commit API keys to version control
- Use environment variables or secure vaults
- Rotate keys regularly
Test Data
- Use dedicated test accounts
- Avoid using production data
- Implement data cleanup procedures
Network Security
- Configure firewall rules for MCP server
- Use HTTPS for all test endpoints
- Implement rate limiting if needed
This framework provides a robust, maintainable approach to test automation with clear separation of concerns and reusable components.
