@msalaam/xray-qe-toolkit
v1.4.0
Full QE workflow toolkit for Xray Cloud integration — test management, Playwright integration, CI pipeline scaffolding, and browser-based review gates for API regression projects.
@msalaam/xray-qe-toolkit
Full QE workflow toolkit for Xray Cloud integration — test management, Postman generation, CI pipeline scaffolding, and browser-based review gates.
Table of Contents
- Overview
- Prerequisites
- Installation
- AI Setup (Optional)
- Quick Start
- Playwright Quick Start
- CLI Commands
- Configuration
- File Reference
- Multi-Company Test Format
- Jira/Xray Project Setup
- QE Workflow
- Building Regression Packs
- Working with Existing Xray Tests
- Idempotent Push
- Programmatic API
- Troubleshooting
Overview
@msalaam/xray-qe-toolkit (XQT) is a modular, CLI-driven toolkit for Xray Cloud and JIRA-based QE workflows. It provides:
- `tests.json` as the single source-of-truth for test logic
- Browser-based QE review gate (`edit-json`) for human governance before any push
- Idempotent push — updates existing tests, creates new ones, skips duplicates
- Postman collection generation from test definitions
- CI pipeline template for Azure DevOps (Newman + Xray import)
- Playwright integration — import Playwright JSON results with automatic test mapping
- OpenAPI contract enforcement — `compare-openapi` diffs a live spec against a QA snapshot and fails the pipeline on breaking changes; `update-snapshot` promotes a new baseline deliberately
- Modular architecture — every function is importable for programmatic use
- AI-ready scaffolds — optional AI assistance for test generation (manual workflow fully supported)
QE Review Gate Philosophy
The knowledge/ folder is your single source of truth — OpenAPI specs, requirements docs, JIRA tickets, and business logic live here. The toolkit uses these sources to generate test cases with AI assistance.
Nothing is pushed to Xray until a QE manually reviews and approves via the edit-json command. This ensures quality governance and prevents untested automation from reaching Xray Cloud.
init → populate knowledge/ → gen-tests (AI) → edit-json (QE gate) → push-tests → gen-postman (AI) → CI (run + import)

Generated tests are scaffolds — always review and enhance before pushing to Xray.
Prerequisites
- Node.js >= 18.0.0
- npm >= 8.0.0
- Xray Cloud API Key (from Xray > Settings > API Keys)
- JIRA Cloud API Token (from https://id.atlassian.com/manage-profile/security/api-tokens)
- Both credentials must belong to the same JIRA user (Xray impersonation requirement)
Installation
From npm (public)
# Install as dev dependency
npm install --save-dev @msalaam/xray-qe-toolkit

Verify installation
npx xqt --version
npx xqt --help

AI Setup (Optional)
AI is completely optional. The toolkit works fully without AI — you can create and edit tests manually using the edit-json UI editor.
Current AI Status
🚧 All AI features are scaffolds — ready for implementation but not yet connected to AI providers.
What's included:
- ✅ `--ai` flags in `gen-tests` and `gen-postman` commands
- ✅ `knowledge/` folder scanner for API specs, requirements, tickets
- ✅ MCP (Model Context Protocol) server scaffold in `mcp/server.js`
- ✅ VS Code chat participant extension scaffold in `.vscode/xray-qe-participant/`
- ❌ Not included: actual AI provider connections (Azure OpenAI, GitHub Copilot, etc.)
To Enable AI Features (Future)
Choose one of three integration paths:
Option 1: MCP Server (for GitHub Copilot CLI/Desktop)
1. Implement AI provider in `mcp/server.js` (Azure OpenAI, Anthropic, etc.)
2. Configure MCP client to connect to toolkit server
3. Start server: `npx xqt mcp-server`
4. Use `gen-tests --ai` with AI provider connected
Requires: MCP client setup, AI provider API keys (Azure OpenAI, etc.)
Option 2: VS Code Chat Participant (for GitHub Copilot in VS Code)
1. Implement AI provider in `.vscode/xray-qe-participant/extension.js`
2. Install extension: copy folder to `~/.vscode/extensions/`
3. Reload VS Code
4. Use `@xray-qe` in chat with commands like `/generate`, `/postman`
Requires: GitHub Copilot subscription, VS Code extension development
Option 3: Direct CLI Integration
1. Modify `commands/genTests.js` to call your preferred AI service directly
2. Add provider credentials to `.env` (e.g., `AZURE_OPENAI_KEY`)
3. Use `gen-tests --ai` with integrated provider
Requires: AI provider API keys, custom implementation
Manual Workflow (No AI Required)
This workflow works TODAY without any AI setup:
# 1. Create tests manually using the UI editor
npx xqt init
npx xqt edit-json # Create tests from scratch or edit templates
# 2. Push to Xray
npx xqt push-tests
# 3. Generate Postman collection (no AI)
npx xqt gen-postman # Uses tests.json + xray-mapping.json directly
# 4. Run in CI
newman run collection.postman.json

All AI features gracefully degrade — without AI, commands provide helpful guidance for manual workflows.
Quick Start
With AI (Future — Requires Setup)
# 1. Initialize project with knowledge/ folder and starter files
npx xqt init
# 2. Configure credentials
cp .env.example .env
# Edit .env with your Xray + JIRA credentials
# 3. Add your API specs and requirements to knowledge/ folder
# See knowledge/README.md for supported file types
# 4. Set up AI provider (see "AI Setup" section above)
# ... implement AI connection in MCP server or VS Code extension ...
# 5. Generate test cases from knowledge sources (AI-assisted)
npx xqt gen-tests --ai
# 6. QE review gate — review and refine generated tests
npx xqt edit-json
# 7. Push approved tests to Xray Cloud
npx xqt push-tests
# 8. Generate Postman collection (AI-enhanced assertions)
npx xqt gen-postman --ai
# 9. Generate CI pipeline
npx xqt gen-pipeline
# 10. Run tests in CI
newman run collection.postman.json --reporters cli,junit
npx xqt import-results --file results.xml --testExecKey QE-123

Without AI (Works Today)
# 1. Initialize project
npx xqt init
# 2. Configure credentials
cp .env.example .env
# Edit .env with your Xray + JIRA credentials
# 3. Create tests manually using UI editor
npx xqt edit-json
# - Click "Add Test" to create new tests
# - Use knowledge/ docs as reference (manual review)
# - Save when ready
# 4. Push to Xray Cloud
npx xqt push-tests
# 5. Generate Postman collection (from tests.json)
npx xqt gen-postman
# 6. Generate CI pipeline
npx xqt gen-pipeline
# 7. Run tests in CI
newman run collection.postman.json --reporters cli,junit
npx xqt import-results --file results.xml --testExecKey QE-123

CLI Commands
All commands support these global options:
| Option | Description |
|---------------|------------------------------------|
| --verbose | Enable debug output |
| --env <path>| Custom path to .env file |
| --version | Show version number |
| --help | Show help for any command |
xqt init
Scaffold a new project with starter templates.
npx xqt init

Creates:
- `knowledge/` folder with subdirectories (`api-specs/`, `requirements/`, `tickets/`)
- `knowledge/README.md` — guide for organizing documentation
- `XQT-GUIDE.md` — getting started guide for this toolkit (never named `README.md` to avoid overwriting yours)
- `tests.json` — starter test definitions (marked as scaffolds)
- `xray-mapping.json` — empty mapping file
- `.env.example` — environment variable template
- `.xrayrc` — project-level config
Existing files are never overwritten — the command skips them with a warning.
xqt gen-tests
Generate test cases from knowledge/ folder documentation (AI-assisted or manual guidance).
# AI-assisted generation (requires AI provider setup — see "AI Setup" section)
npx xqt gen-tests --ai
# Focus on a specific OpenAPI spec
npx xqt gen-tests --ai --spec knowledge/api-specs/users-api.yaml
# Use a custom knowledge folder
npx xqt gen-tests --ai --knowledge ./docs
# Fetch and analyze a JIRA ticket
npx xqt gen-tests --ai --ticket APIEE-123
# Generate from a prompt
npx xqt gen-tests --ai --prompt "Generate tests for user authentication flows"

| Option | Description | Required |
|---------------------|--------------------------------------------|----------|
| --ai | Enable AI-assisted generation | Yes |
| --spec <path> | OpenAPI/Swagger spec file | No |
| --knowledge <path>| Custom knowledge folder path | No |
| --ticket <key> | JIRA ticket key to fetch and analyze | No |
| --prompt <text> | Natural language prompt | No |
Output: Appends generated tests to tests.json (or creates it if it doesn't exist)
🚧 AI Provider Required: The --ai flag currently shows a scaffold message. To enable actual AI generation, implement an AI provider connection (see AI Setup).
Without AI: Run gen-tests without --ai for manual creation guidance, or use edit-json to create tests directly in the UI editor.
xqt edit-json
Launch the browser-based QE review gate editor.
npx xqt edit-json
npx xqt edit-json --port 3000

| Option | Description | Default |
|-----------------|------------------------------------------|---------|
| --port <n> | Port for local editor server | Random |
Features:
- Add, edit, delete tests in a visual editor
- Tag tests: `regression`, `smoke`, `edge`, `critical`, `integration`, `e2e`, `security`, `performance`
- Toggle skip/push per test
- View Xray mapping status (which tests are already pushed)
- Real-time validation
- "Save & Exit" writes `tests.json` and shuts down the server
Important: This is the QE governance gate. Nothing is pushed to Xray until the QE saves and explicitly runs push-tests.
xqt push-tests
Push or update tests in Xray Cloud from tests.json.
# Create new Test Execution and link all tests
npx xqt push-tests
# Link tests to an existing Test Execution
npx xqt push-tests --testExecKey QE-123
# Push tests without creating/linking any execution
npx xqt push-tests --skip-exec

| Option | Description |
|---------------------|----------------------------------------------------|
| --testExecKey <key> | Link to an existing Test Execution instead of creating new |
| --skip-exec | Don't create or link any Test Execution |
Behavior:
- Tests with
"skip": trueare excluded - Tests already in
xray-mapping.jsonare updated (summary, description, labels, priority, steps) - New tests are created with type set to "Automated"
- Mapping is saved incrementally (crash-safe)
- 300ms rate-limit delay between API calls
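The behaviour above amounts to an idempotent loop, which can be sketched as follows. This is an illustration of the described semantics, not the toolkit's source; `createTest`, `updateTest`, and `saveMapping` are assumed stand-ins for its internals:

```javascript
// Sketch of the idempotent push loop described above (API calls are stand-ins).
async function pushTests(tests, mapping, api) {
  for (const t of tests) {
    if (t.skip === true) continue; // "skip": true → excluded
    if (mapping[t.test_id]) {
      // Already in xray-mapping.json → update the existing Xray test.
      await api.updateTest(mapping[t.test_id].id, t.xray);
    } else {
      // New test → create it and record its JIRA key/id.
      const issue = await api.createTest(t.xray);
      mapping[t.test_id] = { key: issue.key, id: issue.id };
    }
    await api.saveMapping(mapping); // incremental save (crash-safe)
    await new Promise((r) => setTimeout(r, 300)); // 300ms rate-limit delay
  }
  return mapping;
}
```

Because the mapping is saved after every test, a crash mid-push loses at most the in-flight test; rerunning resumes as updates rather than duplicates.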
xqt gen-postman
Generate a Postman Collection v2.1 JSON from tests.json (works with or without AI).
# Generate collection with JIRA keys embedded (run after push-tests)
npx xqt gen-postman
# Use a custom base URL
npx xqt gen-postman --base-url https://api.example.com
# AI-enhanced generation with better assertions
npx xqt gen-postman --ai
# Schema-driven generation from OpenAPI spec (no AI needed)
npx xqt gen-postman --spec knowledge/api-specs/api.yaml
# Use custom knowledge folder
npx xqt gen-postman --knowledge ./docs

| Option | Description | Default |
|---------------------|---------------------------------|-----------------|
| --base-url <url> | Base URL for API requests | {{baseUrl}} |
| --ai | Enable AI-enhanced assertions | No |
| --spec <path> | OpenAPI spec for schema-driven generation | No |
| --knowledge <path>| Custom knowledge folder path | knowledge/ |
Output: collection.postman.json in project root
Behavior:
- Only generates for tests with `type: "api"` (or unset, for backward compatibility)
- Embeds JIRA keys from `xray-mapping.json` when available (e.g., `[APIEE-6933] Test Summary`)
- Falls back to `test_id` if not yet pushed to Xray
- Each test becomes a folder, each step becomes a request with:
- Inferred HTTP method and endpoint from step data
- Pre-request scripts with step context
- Test scripts with assertions inferred from expected results
- SCAFFOLD markers for manual enhancement
Important: Generated collections are starting points — review and enhance assertions, environment variables, and edge cases before use.
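The key-embedding fallback can be sketched in a few lines. This is a minimal illustration of the naming rule, not the generator's code; `folderName` is a hypothetical helper:

```javascript
// Derive a Postman folder name for a test: JIRA key when mapped, test_id otherwise.
function folderName(test, mapping) {
  const entry = mapping[test.test_id];
  return entry
    ? `[${entry.key}] ${test.xray.summary}` // pushed → e.g. "[APIEE-6933] Test Summary"
    : `${test.test_id} ${test.xray.summary}`; // not yet pushed → fall back to test_id
}
```

This is why running `gen-postman` after `push-tests` gives collections whose folders link directly back to JIRA issues.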
xqt create-execution
Create a standalone Test Execution issue in JIRA.
npx xqt create-execution --summary "Sprint 24 Regression" --description "Full API regression"
npx xqt create-execution --summary "Feature XYZ" --issue QE-123| Option | Description | Required |
|---------------------|--------------------------------------------|----------|
| --summary <text> | Test Execution summary/title | Yes |
| --description <text> | Description | No |
| --issue <key> | Parent issue key to link the execution to | No |
xqt import-results
Import test results into Xray Cloud. Supports JUnit XML and Playwright JSON formats. Designed for CI — no human interaction.
Format Comparison
| Feature | JUnit XML | Playwright JSON |
|---------|-----------|----------------|
| Update existing tests | ❌ No - always creates new tests | ✅ Yes - via annotations |
| Test key mapping | ❌ Not supported | ✅ test.info().annotations |
| Attachments/Evidence | ❌ Not supported | ✅ Supported |
| Best for | Newman, generic test runners | Playwright tests |
| Recommendation | ⚠️ Use only for tools without test keys | ✅ Recommended for Playwright |
JUnit XML (Newman, generic test runners)
⚠️ Important: JUnit XML cannot update existing Xray tests - it will always create new test cases. Only use this for Newman or test runners that don't support test keys.
npx xqt import-results --file results.xml --testExecKey QE-123

Playwright JSON (Recommended)
✅ Updates existing tests via annotations. Use this format to link results to your existing Xray test cases.
# Generate JSON report in Playwright
npx playwright test --reporter=json > playwright-results.json
# Import to Xray (updates existing tests via annotations)
npx xqt import-results --file playwright-results.json --testExecKey QE-123
# Create new Test Execution automatically
npx xqt import-results --file playwright-results.json --summary "Regression Suite"

Required test code:
import { test, expect } from '@playwright/test';
test('Verify API endpoint', async ({ request }) => {
// THIS LINE links to existing Xray test
test.info().annotations.push({ type: 'xray', description: 'PROJ-123' });
// Your test code...
});

| Option | Description | Required |
|-----------------------------|---------------------------------------------------|----------|
| --file <path> | Path to results file (.xml or .json) | Yes |
| --testExecKey <key> | Test Execution to link results to | No* |
| --summary <text> | Summary for new Test Execution (if no testExecKey) | No |
| --description <text> | Description for new Test Execution | No |
* If --testExecKey is omitted, a new Test Execution will be created automatically.
Mapping Playwright Tests to Xray:
To link Playwright test results to existing Xray test cases, you must use annotations:
// ✅ CORRECT - Updates existing test PROJ-123
test('User login flow', async ({ page }) => {
test.info().annotations.push({ type: 'xray', description: 'PROJ-123' });
// ... test code
});
// ✅ Alternative - Include test key in title
test('[PROJ-456] User registration validates email', async ({ page }) => {
// ... test code
});
// ❌ WRONG - Creates new test every time
test('User login flow', async ({ page }) => {
// Missing annotation - will create duplicate test!
// ... test code
});

Without annotations: A new test will be created in Xray with the test title as the summary (not recommended for existing tests).
xqt gen-pipeline
Generate an Azure Pipelines YAML template.
npx xqt gen-pipeline
npx xqt gen-pipeline --output ci/azure-pipelines.yml

| Option | Description | Default |
|---------------------|----------------------|------------------------|
| --output <path> | Output file path | azure-pipelines.yml |
The generated pipeline:
- Installs Node.js and dependencies
- Runs Newman against `collection.postman.json`
- Calls `import-results` with environment variables
- Publishes JUnit test results
Note: edit-json and QE review logic are NOT in the CI pipeline. Those happen pre-commit.
xqt mcp-server
Start a Model Context Protocol server for GitHub Copilot agent-mode integration.
# Start MCP server in stdio mode (for agent integration)
npx xqt mcp-server
# Start in HTTP mode for testing (optional)
npx xqt mcp-server --port 3100

| Option | Description | Default |
|---------------------|----------------------|------------------------|
| --port <n> | HTTP port (testing) | stdio mode |
MCP Tools Exposed:
- `generate_test_cases` — Generate tests from knowledge/ sources
- `generate_postman` — Generate Postman collection with AI assertions
- `analyze_knowledge` — List and summarize knowledge sources
- `push_tests` — Push test cases to Xray Cloud
- `analyze_spec` — Parse OpenAPI spec and extract endpoints
Status: 🚧 Scaffold ready — AI provider connection not yet implemented. See AI Setup for implementation guidance.
Learn more: Model Context Protocol
Alternative: .vscode/xray-qe-participant/ contains a VS Code chat participant scaffold for GitHub Copilot integration.
xqt compare-openapi
Compare an OpenAPI spec against an approved QA snapshot and fail the pipeline if breaking changes are detected.
# Basic comparison — exits 1 on breaking changes
npx xqt compare-openapi \
--current ../api-repo/openapi.yaml \
--snapshot ./openapi.snapshot.yaml
# Save a JSON diff report as a pipeline artifact
npx xqt compare-openapi \
--current ../api-repo/openapi.yaml \
--snapshot ./openapi.snapshot.yaml \
--report openapi-diff-report.json

| Option | Description | Required |
|----------------------|------------------------------------------------------------------|----------|
| --current <path> | Path to the current (live) OpenAPI spec | Yes |
| --snapshot <path> | Path to the approved QA baseline snapshot | Yes |
| --report <path> | Write full diff results to a JSON file (useful as CI artifact) | No |
Behaviour:
- Snapshot = approved QA contract baseline
- Current = proposed/live spec
- Exits `0` if no breaking changes; logs a count of non-breaking differences
- Exits `1` on any breaking change and prints the full diff
- If `--report` is specified and breaking changes are found, a JSON report is written
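To make the exit-code semantics concrete, here is a sketch of one way a diff could be classified over two parsed path maps. The classification rules (removed path or method = breaking, added path = non-breaking) are illustrative assumptions, not the toolkit's actual diff logic:

```javascript
// Sketch of a breaking-change classifier over parsed OpenAPI path maps.
// Rules here are illustrative assumptions, not the toolkit's diff engine.
function diffPaths(snapshot, current) {
  const breaking = [];
  const nonBreaking = [];
  for (const p of Object.keys(snapshot)) {
    if (!(p in current)) {
      breaking.push(`removed path ${p}`);
      continue;
    }
    for (const m of Object.keys(snapshot[p])) {
      if (!(m in current[p])) breaking.push(`removed ${m.toUpperCase()} ${p}`);
    }
  }
  for (const p of Object.keys(current)) {
    if (!(p in snapshot)) nonBreaking.push(`added path ${p}`);
  }
  // Mirrors the documented behaviour: exit 1 on any breaking change, else 0.
  return { breaking, nonBreaking, exitCode: breaking.length ? 1 : 0 };
}
```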
Example Azure Pipelines step (from your test repo pipeline):
steps:
- checkout: self
- checkout: git://Project/portfolio-api
path: api-repo
- script: |
npx xqt compare-openapi \
--current api-repo/openapi.yaml \
--snapshot openapi.snapshot.yaml \
--report openapi-diff-report.json
displayName: Compare OpenAPI Contracts
- publish: openapi-diff-report.json
artifact: openapi-diff
condition: failed()

Governance model:
- Dev repo is never modified by QE
- QE test repo pipeline checks out the API repo and enforces the contract
- Breaking change → pipeline fails → dev must fix spec or raise a contract change review
- QE then runs `update-snapshot` to promote the new baseline
xqt update-snapshot
Overwrite the QA snapshot baseline with the current spec. Always a deliberate, manual action — never called automatically.
npx xqt update-snapshot \
--current ../api-repo/openapi.yaml \
--snapshot ./openapi.snapshot.yaml

| Option | Description | Required |
|----------------------|-----------------------------------------------------|----------|
| --current <path> | Path to the current (live) OpenAPI spec to promote | Yes |
| --snapshot <path> | Path to the snapshot file to overwrite | Yes |
Workflow:
- Dev raises a contract change review
- QE approves the new contract
- QE runs `update-snapshot` locally
- QE raises a PR in the test repo with the updated snapshot
- PR is reviewed and merged — new baseline is established
# After merging the approved contract change:
npx xqt update-snapshot \
--current ../api-repo/openapi.yaml \
--snapshot ./openapi.snapshot.yaml
git add openapi.snapshot.yaml
git commit -m "chore: promote OpenAPI snapshot to v2.5.0"
git push

Configuration
Environment Variables (.env)
Create a .env file in your project root (or copy .env.example):
| Variable | Required | Description |
|-------------------|----------|------------------------------------------------|
| XRAY_ID | Yes | Xray Cloud API Client ID |
| XRAY_SECRET | Yes | Xray Cloud API Client Secret |
| JIRA_PROJECT_KEY | Yes | JIRA project key (e.g., APIEE, QE) |
| JIRA_URL | Yes | JIRA instance URL (e.g., https://your-domain.atlassian.net) |
| JIRA_API_TOKEN | Yes | JIRA API token |
| JIRA_EMAIL | Yes | JIRA user email (must match Xray API Key owner) |
| XRAY_GRAPHQL_URL | No | Region-specific GraphQL endpoint (default: US) |
Region-specific GraphQL endpoints:
| Region | URL |
|--------|-----|
| US (default) | https://us.xray.cloud.getxray.app/api/v2/graphql |
| EU | https://eu.xray.cloud.getxray.app/api/v2/graphql |
| AU | https://au.xray.cloud.getxray.app/api/v2/graphql |
Project Config (.xrayrc)
Optional JSON file for non-sensitive project settings:
{
"testsPath": "tests.json",
"mappingPath": "xray-mapping.json",
"collectionPath": "collection.postman.json"
}

File Reference
tests.json
Source-of-truth for test definitions. Created by init, edited by edit-json, consumed by push-tests and gen-postman.
{
"testExecution": {
"summary": "Sprint 24 - Automated Regression Suite",
"description": "API regression tests for Sprint 24 release"
},
"tests": [
{
"test_id": "TC-API-GET-001",
"skip": false,
"tags": ["regression", "smoke"],
"xray": {
"summary": "Verify GET /api/resource returns 200",
"description": "Test that the API returns expected data.",
"priority": "High",
"labels": ["API", "GET", "Regression"],
"steps": [
{
"action": "Send GET request to /api/resource/123",
"data": "Method: GET, Headers: Authorization: Bearer {token}",
"expected_result": "200 OK with resource object"
}
]
}
}
]
}

Fields:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| test_id | string | Yes | Unique identifier (alphanumeric, hyphens, underscores) |
| skip | boolean | No | If true, excluded from push-tests |
| tags | string[] | No | QE tags: regression, smoke, edge, critical, etc. |
| xray.summary | string | Yes | JIRA issue title |
| xray.description | string | No | JIRA issue description |
| xray.priority | string | No | Highest, High, Medium, Low, Lowest |
| xray.labels | string[] | No | JIRA labels |
| xray.steps[].action | string | Yes | What action to perform |
| xray.steps[].data | string | No | Input data / parameters |
| xray.steps[].expected_result | string | Yes | Expected outcome |
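The required fields in the table above can be checked with a small validator before pushing. This is an illustrative sketch, not part of the toolkit; the function name and error messages are assumptions:

```javascript
// Minimal validator for one tests.json entry, per the field table above.
function validateTest(t) {
  const errors = [];
  if (!t.test_id || !/^[A-Za-z0-9_-]+$/.test(t.test_id)) {
    errors.push('test_id must be alphanumeric with hyphens/underscores');
  }
  if (!t.xray || !t.xray.summary) errors.push('xray.summary is required');
  for (const [i, s] of (t.xray?.steps || []).entries()) {
    if (!s.action) errors.push(`steps[${i}].action is required`);
    if (!s.expected_result) errors.push(`steps[${i}].expected_result is required`);
  }
  return errors; // empty array → entry is valid
}
```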
xray-mapping.json
Maps test_id → JIRA issue { key, id }. Generated by push-tests, used for idempotent updates.
{
"TC-API-GET-001": { "key": "APIEE-6933", "id": "1865623" },
"TC-API-POST-001": { "key": "APIEE-6934", "id": "1865627" },
"_testexecution": { "key": "APIEE-6941", "id": "1865637" }
}

Multi-Company Test Format
This toolkit pushes tests into one JIRA project per run (via JIRA_PROJECT_KEY). For multiple companies, use consistent naming and keep per-company configs and files.
Recommended conventions:
- `test_id`: Prefix with company + domain + sequence, e.g. `ACME-BILLING-PAYMENTS-001`
- `xray.summary`: Start with company or product tag, e.g. `[ACME] Payments - create invoice`
- `xray.labels`: Add `company:<slug>`, `system:<slug>`, `team:<slug>` for filtering
- `testExecution.summary`: Include company + release/sprint, e.g. `ACME - Sprint 12 Regression`
Example snippet:
{
"testExecution": {
"summary": "ACME - Sprint 12 Regression"
},
"tests": [
{
"test_id": "ACME-BILLING-PAYMENTS-001",
"tags": ["regression", "smoke"],
"xray": {
"summary": "[ACME] Payments - create invoice",
"labels": ["company:acme", "system:billing", "team:payments"],
"steps": [
{
"action": "Send POST /payments/invoices",
"expected_result": "201 Created"
}
]
}
}
]
}

Per-company setup options:
- Separate folders (recommended): run `npx xqt init` once per company and keep `tests.json`, `xray-mapping.json`, `.env`, and `.xrayrc` isolated.
- Shared repo: use a company-specific `.env` and `.xrayrc` (swap before running). Example `.xrayrc`:
{
"testsPath": "tests.acme.json",
"mappingPath": "xray-mapping.acme.json",
"collectionPath": "collection.acme.postman.json"
}

Jira/Xray Project Setup
Use this checklist to align a new team's board and Xray configuration with the toolkit:
- Create a JIRA project per company or business unit (Software or Service project).
- Enable Xray for the project and confirm issue types: Test and Test Execution (optional: Test Plan).
- Configure screens/fields to include Summary, Description, Priority, Labels, and Test Steps.
- Set permissions so the API user can create/edit Test and Test Execution issues.
- Define components/labels that match your naming conventions (company, system, team).
- Create a board with a filter like `project = KEY AND issuetype in (Test, "Test Execution")` and use components/labels for swimlanes.
QE Workflow
┌─────────────────────────────────────────────────────────────────────────┐
│ QE WORKFLOW (LOCAL) │
│ │
│ 1. npx xqt init ← scaffold project + knowledge/ folder │
│ 2. Configure .env ← credentials │
│ 3. Populate knowledge/ ← add API specs, requirements, tickets │
│ • knowledge/api-specs/ (OpenAPI, Swagger) │
│ • knowledge/requirements/ (BRDs, logic docs) │
│ • knowledge/tickets/ (JIRA exports, Confluence) │
│ 4. npx xqt gen-tests --ai ← AI generates test cases from knowledge│
│ 5. npx xqt edit-json ← QE REVIEW GATE (browser UI) │
│ • Review AI-generated tests │
│ • Add/edit/delete tests │
│ • Tag tests (regression, critical, etc.) │
│ • Mark skip/push per test │
│ • Save & Exit │
│ 6. npx xqt push-tests ← push to Xray Cloud │
│ 7. npx xqt gen-postman --ai ← generate Postman collection │
│ 8. git commit & push ← CI picks up from here │
│ │
├─────────────────────────────────────────────────────────────────────────┤
│ CI PIPELINE (AUTOMATED) │
│ │
│ 9. npm ci ← install deps │
│ 10. newman run collection.postman.json --reporters junit │
│ 11. npx xqt import-results --file results.xml --testExecKey QE-123│
│ │
├─────────────────────────────────────────────────────────────────────────┤
│ CONTRACT ENFORCEMENT (TEST REPO PIPELINE) │
│ │
│ 12. Checkout API repo │
│ 13. npx xqt compare-openapi │
│ --current api-repo/openapi.yaml │
│ --snapshot openapi.snapshot.yaml │
│ → Fails pipeline on breaking changes │
│ │
│ (When QE approves a contract change) │
│ 14. npx xqt update-snapshot │
│ --current api-repo/openapi.yaml │
│ --snapshot openapi.snapshot.yaml │
│ → Commit updated snapshot + raise PR │
│ │
└─────────────────────────────────────────────────────────────────────────┘

Playwright Quick Start
Complete Setup for Updating Existing Xray Tests
This is the recommended workflow for teams using Playwright with existing Xray test cases.
Step 1: Install Playwright (in your test repo)
npm install --save-dev @playwright/test

Step 2: Configure Playwright
Create playwright.config.ts in your test repo:
import { defineConfig } from '@playwright/test';
export default defineConfig({
reporter: [
['html'], // For local viewing
['json', { outputFile: 'playwright-results.json' }], // For Xray import
],
use: {
baseURL: process.env.API_BASE_URL || 'https://your-api.com',
extraHTTPHeaders: {
'Authorization': `Bearer ${process.env.API_TOKEN}`,
'X-API-Key': process.env.API_KEY || '',
},
},
});

Step 3: Write Tests with Xray Annotations
⚠️ Critical: Every test MUST have an annotation to update existing Xray tests.
import { test, expect } from '@playwright/test';
test.describe('Regression Tests', () => {
test('Verify API returns 200 for valid request', async ({ request }) => {
// THIS LINE links to your existing Xray test
test.info().annotations.push({ type: 'xray', description: 'APIEE-7131' });
const response = await request.post('/api/verify', {
data: { userId: '123', action: 'validate' }
});
// Attach response as evidence for Xray
test.info().attach('response-evidence', {
body: JSON.stringify({
status: response.status(),
headers: response.headers(),
body: await response.json()
}, null, 2),
contentType: 'application/json'
});
expect(response.status()).toBe(200);
});
test('Verify API returns 400 for invalid data', async ({ request }) => {
test.info().annotations.push({ type: 'xray', description: 'APIEE-7132' });
// ... test implementation
});
});

Step 4: Map All Your Tests
Based on your xray-mapping.json, add annotations to each test:
// If your xray-mapping.json shows:
// "TC-001": { "key": "APIEE-7131", "id": "1879092" }
// "TC-002": { "key": "APIEE-7132", "id": "1879095" }
test('Test Case 1', async ({ request }) => {
test.info().annotations.push({ type: 'xray', description: 'APIEE-7131' });
// ...
});
test('Test Case 2', async ({ request }) => {
test.info().annotations.push({ type: 'xray', description: 'APIEE-7132' });
// ...
});

Step 5: Run Locally
# Run tests
npx playwright test
# View HTML report
npx playwright show-report
# Upload to Xray (updates existing tests)
npx xqt import-results --file playwright-results.json --testExecKey APIEE-6811

What happens:
- ✅ Tests WITH annotations (`test.info().annotations.push(...)`) → updates existing Xray tests
- ⏭️ Tests WITHOUT annotations → automatically skipped (won't create duplicates)
- 📊 Summary shows: Passed, Failed, Skipped counts
- 🔗 Direct link to view results in Xray
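The annotation matching above can be sketched as a walk over the Playwright JSON report. The report shape used here (suites → specs → tests → annotations) is an assumption based on Playwright's JSON reporter, and `collectXrayKeys` is a hypothetical helper, not the toolkit's importer:

```javascript
// Sketch: walk a Playwright JSON report and split tests into
// xray-annotated (importable) vs unannotated (skipped on import).
function collectXrayKeys(report) {
  const mapped = [];
  const unmapped = [];
  const walk = (suite) => {
    for (const spec of suite.specs || []) {
      for (const t of spec.tests || []) {
        const ann = (t.annotations || []).find((a) => a.type === 'xray');
        if (ann) mapped.push({ title: spec.title, key: ann.description });
        else unmapped.push(spec.title); // would be skipped on import
      }
    }
    (suite.suites || []).forEach(walk); // recurse into nested suites
  };
  (report.suites || []).forEach(walk);
  return { mapped, unmapped };
}
```

Inspecting your own `playwright-results.json` this way is a quick pre-flight check that every test you expect to update in Xray actually carries an annotation.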
Verbose mode (see exactly what's being uploaded):
npx xqt import-results \
--file playwright-results.json \
--testExecKey APIEE-6811 \
--verbose

This saves playwright-results-xray-debug.json for inspection.
Step 6: Configure CI/CD
Azure Pipelines:
steps:
- task: NodeTool@0
inputs:
versionSpec: '18.x'
- script: npm ci
displayName: 'Install dependencies'
- script: npx playwright test
displayName: 'Run Playwright tests'
continueOnError: true
- script: |
npx xqt import-results \
--file playwright-results.json \
--testExecKey APIEE-6811
displayName: 'Upload results to Xray'
env:
XRAY_ID: $(XRAY_ID)
XRAY_SECRET: $(XRAY_SECRET)
JIRA_PROJECT_KEY: $(JIRA_PROJECT_KEY)
JIRA_URL: $(JIRA_URL)
JIRA_API_TOKEN: $(JIRA_API_TOKEN)
JIRA_EMAIL: $(JIRA_EMAIL)

GitHub Actions:
steps:
- uses: actions/setup-node@v3
with:
node-version: '18'
- run: npm ci
- run: npx playwright test
- run: |
npx xqt import-results \
--file playwright-results.json \
--testExecKey APIEE-6811
env:
XRAY_ID: ${{ secrets.XRAY_ID }}
XRAY_SECRET: ${{ secrets.XRAY_SECRET }}
JIRA_PROJECT_KEY: ${{ secrets.JIRA_PROJECT_KEY }}
JIRA_URL: ${{ secrets.JIRA_URL }}
JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
JIRA_EMAIL: ${{ secrets.JIRA_EMAIL }}

Benefits
✅ Updates existing tests - No duplicate test creation
✅ Automatic mapping - Via annotations in test code
✅ Evidence attachments - Screenshots, responses, traces
✅ Detailed reporting - Retries, worker info, error details
✅ CI/CD ready - Standard workflow for automation
✅ Single source of truth - Test code + Xray together
Building Regression Packs
Regression packs are curated test suites that verify your system still works after changes. The toolkit makes it easy to build and maintain regression packs using tags, AI-generated tests, and idempotent push.
1. Organize Knowledge Sources by Domain
knowledge/
├── api-specs/
│ ├── auth-api.yaml ← Authentication domain
│ ├── users-api.yaml ← User management
│ └── payments-api.yaml ← Payment processing
├── requirements/
│ ├── auth-flows.md
│ ├── user-roles.md
│ └── payment-validation.md
└── tickets/
├── APIEE-123.json ← Login epic
├── APIEE-456.json ← Payment refactor epic
└── confluence-sso.html

2. Generate Tests by Domain
# Generate auth tests from auth spec
npx xqt gen-tests --ai --spec knowledge/api-specs/auth-api.yaml
# Generate payment tests
npx xqt gen-tests --ai --spec knowledge/api-specs/payments-api.yaml
# Generate from a specific ticket
npx xqt gen-tests --ai --ticket APIEE-123

3. Tag Tests for Pack Categorization
Use edit-json to assign tags:
| Tag | Purpose |
|-----|---------|
| regression | Full regression pack — all core functionality |
| smoke | Smoke test pack — critical paths only |
| critical | Critical business flows (subset of regression) |
| edge | Edge case and error handling tests |
| integration | Multi-system integration tests |
| security | Security and auth tests |
| performance | Performance/load tests |
Example workflow:
1. Generate tests: `npx xqt gen-tests --ai`
2. Open editor: `npx xqt edit-json`
3. Add `regression` tag to all tests
4. Add `smoke` tag to critical path tests
5. Add `critical` tag to business-critical tests
6. Save and push to Xray
4. Filter and Run Specific Packs
In tests.json:
```json
{
  "tests": [
    {
      "test_id": "001",
      "type": "api",
      "tags": ["regression", "smoke", "critical"]
    },
    {
      "test_id": "002",
      "type": "api",
      "tags": ["regression", "edge"]
    },
    {
      "test_id": "003",
      "type": "api",
      "tags": ["regression"],
      "skip": true
    }
  ]
}
```

Filtering in `edit-json`:
- Use the dropdown to filter by tag (e.g., show only `smoke` tests)
- Mark tests as `skip: true` to exclude them from push/generation
CI pipeline filtering:
- Generate smoke pack: filter `tests.json` to `tags.includes("smoke")` before `gen-postman`
- Generate regression pack: filter to `tags.includes("regression")` and `skip !== true`
- Schedule different packs on different cadences (smoke nightly, regression weekly)
5. Maintain Packs Over Sprints
Idempotent push keeps Xray in sync as your pack evolves:
| Sprint Change | Action | Result |
|---------------|--------|--------|
| API endpoint added | gen-tests --ai --spec new-api.yaml → edit-json → push-tests | New tests created in Xray |
| Test assertion updated | Edit in edit-json → push-tests | Existing test updated in Xray |
| Test deprecated | Mark skip: true in edit-json → push-tests | Test excluded from future runs |
| Requirements changed | Update knowledge/requirements/ → gen-tests --ai | Regenerate affected tests |
Best practices:
- ✅ Regenerate tests when specs change (toolkit updates existing tests)
- ✅ Use meaningful `test_id` values (`AUTH-LOGIN-001` instead of `001`)
- ✅ Commit `tests.json` and `xray-mapping.json` to source control
- ✅ Review AI-generated tests before pushing — they're scaffolds, not production-ready
- ✅ Keep `knowledge/` up to date with your latest specs and docs
6. Example: Sprint Regression Pack Workflow
```
# Sprint start: Generate tests from updated specs
npx xqt gen-tests --ai

# QE reviews and tags tests
npx xqt edit-json
# → Tag new tests with "regression"
# → Mark experimental tests as "skip"
# → Verify all critical paths have "smoke" tag

# Push to Xray (creates new, updates existing)
npx xqt push-tests

# Generate Postman collection for CI
npx xqt gen-postman --ai

# Commit regression pack to repo
git add tests.json xray-mapping.json collection.postman.json
git commit -m "Sprint 24 regression pack"

# CI runs nightly
newman run collection.postman.json --folder "[smoke]"
newman run collection.postman.json  # Full regression weekly
```

Working with Existing Xray Tests
If your team already has test cases in Xray Cloud that were created manually or by another tool, you can set up this toolkit to manage and update those existing tests.
Step 1: Query Existing Tests from Xray
Use the Xray GraphQL API to fetch your existing tests. Here's a script to generate the mapping file:
fetch-existing-tests.js:
```js
import { authenticate, loadConfig } from "@msalaam/xray-qe-toolkit";
import fs from "fs";

const cfg = loadConfig();
const token = await authenticate(cfg);

// GraphQL query to fetch all tests in your project
const query = `
  query {
    getTests(jql: "project = ${cfg.jiraProjectKey} AND issuetype = Test", limit: 1000) {
      total
      results {
        issueId
        jira(fields: ["key", "summary", "description", "priority", "labels"])
      }
    }
  }
`;

const response = await fetch(cfg.xrayGraphQLUrl || "https://us.xray.cloud.getxray.app/api/v2/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  },
  body: JSON.stringify({ query }),
});

const data = await response.json();
const tests = data.data.getTests.results;

// Generate xray-mapping.json
const mapping = {};
tests.forEach((test) => {
  const testId = test.jira.key; // Use JIRA key as test_id initially
  mapping[testId] = {
    key: test.jira.key,
    id: test.issueId,
  };
});
fs.writeFileSync("xray-mapping.json", JSON.stringify(mapping, null, 2));
console.log(`✓ Created xray-mapping.json with ${tests.length} tests`);

// Generate tests.json scaffold
const testsJson = {
  testExecution: {
    summary: "Existing Tests - Managed by Toolkit",
    description: "Imported from existing Xray tests",
  },
  tests: tests.map((test) => ({
    test_id: test.jira.key,
    skip: false,
    tags: [],
    xray: {
      summary: test.jira.summary,
      description: test.jira.description || "",
      priority: test.jira.priority?.name || "Medium",
      labels: test.jira.labels || [],
      steps: [
        // You'll need to fetch test steps separately via another API call
        {
          action: "PLACEHOLDER - Edit this in npx xqt edit-json",
          data: "",
          expected_result: "PLACEHOLDER",
        },
      ],
    },
  })),
};
fs.writeFileSync("tests.json", JSON.stringify(testsJson, null, 2));
console.log(`✓ Created tests.json with ${tests.length} test scaffolds`);

console.log("\nNext steps:");
console.log("1. Run 'npx xqt edit-json' to review and complete test steps");
console.log("2. Run 'npx xqt push-tests' to update tests in Xray");
```

Step 2: Run the Script
```
# Save the script above as fetch-existing-tests.js
node fetch-existing-tests.js
```

This generates:
- `xray-mapping.json` — maps your existing JIRA test keys to their IDs
- `tests.json` — scaffold with summaries, descriptions, priorities, labels
Step 3: Complete Test Steps
The script can't fetch test steps automatically (requires additional API calls). Complete them in the editor:
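If you'd rather script this step as well, each test's steps can likely be pulled with Xray's GraphQL `getTest` query. This is a hedged sketch: the `steps { action data result }` field names are assumptions based on Xray Cloud's published GraphQL schema (verify against your instance), and `toToolkitSteps` is a hypothetical mapper, not a toolkit export:

```javascript
// Hedged sketch: build a per-test GraphQL query for steps. The field
// names (steps { action data result }) are assumed from Xray Cloud's
// GraphQL schema; verify them against your Xray version.
const stepsQuery = (issueId) => `
  query {
    getTest(issueId: "${issueId}") {
      steps { action data result }
    }
  }
`;

// Hypothetical mapper: convert Xray's step fields into the
// { action, data, expected_result } shape tests.json uses.
function toToolkitSteps(xraySteps) {
  return (xraySteps || []).map((s) => ({
    action: s.action || "",
    data: s.data || "",
    expected_result: s.result || "",
  }));
}

console.log(
  toToolkitSteps([{ action: "POST /login with valid creds", result: "200 OK" }])
);
```

You would POST each query to the same GraphQL endpoint used in the script above (reusing the bearer token), then write the mapped steps into the matching entry in `tests.json` before opening the editor.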
```
npx xqt edit-json
# Edit each test to add proper steps
# Save when done
```

Step 4: Update Existing Tests
Now you can update your existing Xray tests:
```
npx xqt push-tests
```

Because the tests are in `xray-mapping.json`, they'll be updated (not duplicated).
Alternative: Manual Mapping Creation
If you have a small number of tests, create the mapping manually:
xray-mapping.json:
```json
{
  "APIEE-6933": { "key": "APIEE-6933", "id": "1865623" },
  "APIEE-6934": { "key": "APIEE-6934", "id": "1865627" },
  "APIEE-6935": { "key": "APIEE-6935", "id": "1865628" }
}
```

tests.json:
```json
{
  "tests": [
    {
      "test_id": "APIEE-6933",
      "xray": {
        "summary": "Test User Login",
        "steps": [
          { "action": "...", "expected_result": "..." }
        ]
      }
    }
  ]
}
```

Then run `npx xqt push-tests` to update.
Future Enhancement: Pull Command
A pull-tests command to automatically fetch and sync existing tests is planned:
```
# Future feature (not yet implemented)
npx xqt pull-tests --project APIEE
npx xqt pull-tests --jql "project = APIEE AND labels = regression"
```

This would automatically generate both `tests.json` and `xray-mapping.json` from Xray.
Want this feature? Open an issue on GitHub or contribute via PR.
Idempotent Push
push-tests checks xray-mapping.json before each operation:
| Scenario | Action |
|----------|--------|
| test_id not in mapping | Create new JIRA Test issue + steps |
| test_id in mapping | Update existing issue fields + replace steps |
| skip: true | Skip entirely |
This means you can safely run push-tests multiple times without creating duplicates.
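The decision table above can be sketched as a pure function (illustrative only; the toolkit's actual implementation may differ):

```javascript
// Illustrative sketch of the push-tests decision table: given a test
// definition and the current mapping, decide what the push would do.
function decideAction(test, mapping) {
  if (test.skip === true) return "skip";
  return mapping[test.test_id] ? "update" : "create";
}

const mapping = { "APIEE-6933": { key: "APIEE-6933", id: "1865623" } };

console.log(decideAction({ test_id: "APIEE-6933" }, mapping));             // update
console.log(decideAction({ test_id: "AUTH-LOGIN-001" }, mapping));         // create
console.log(decideAction({ test_id: "APIEE-6933", skip: true }, mapping)); // skip
```

Because the decision depends only on the mapping and the test definition, running the same push twice yields the same actions, which is what makes repeated runs safe.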
Programmatic API
All library functions are importable for custom scripts:
```js
import {
  loadConfig,
  validateConfig,
  authenticate,
  createIssue,
  buildAndPush,
  generatePostmanCollection,
  logger,
} from "@msalaam/xray-qe-toolkit";

const cfg = loadConfig();
validateConfig(cfg);
const token = await authenticate(cfg);
// ... use any exported function
```

Troubleshooting
"disallowed to impersonate" / "no valid active user exists"
Your JIRA_EMAIL doesn't match the Xray API Key owner.
Fix:
- Ensure `JIRA_EMAIL` matches the email of the user who created the Xray API Key
- Verify the user has an active Xray license
- Regenerate the Xray API Key with the same user as `JIRA_API_TOKEN`
"issueId provided is not valid" (transient)
Xray's GraphQL API needs time to index newly created JIRA issues. The toolkit automatically retries with exponential backoff (2s → 4s → 8s → 16s → 32s).
If it still fails after 5 retries, wait a minute and try again.
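The retry pattern looks roughly like this (a sketch of the behavior described above, not the toolkit's actual source):

```javascript
// Sketch of retry with exponential backoff: up to 5 retries with delays
// doubling from 2 s (2s, 4s, 8s, 16s, 32s), as described above.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(fn, retries = 5, baseMs = 2000) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      await sleep(baseMs * 2 ** attempt); // 2s, 4s, 8s, 16s, 32s
    }
  }
}

// Demo with a function that fails twice before succeeding
// (short delays so the demo finishes quickly).
let attempts = 0;
const flaky = async () => {
  attempts++;
  if (attempts < 3) throw new Error("issueId provided is not valid");
  return "created";
};

withRetry(flaky, 5, 10).then((r) => console.log(r, `after ${attempts} attempts`));
```

The same wrapper is handy in your own scripts (like the fetch-existing-tests example above) when calling the GraphQL API right after creating issues.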
Rate limiting / 429 errors
The toolkit includes a 300ms delay between API calls. If you still hit rate limits, wait and retry. For very large test suites (100+), consider splitting across multiple runs.
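Splitting a run can be a few lines in a wrapper script; `chunk` here is a hypothetical helper, not a toolkit option:

```javascript
// Hypothetical helper: split a large tests array into batches so each
// push-tests run stays well under rate limits.
function chunk(items, size = 50) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const suite = Array.from({ length: 120 }, (_, i) => ({ test_id: String(i) }));
console.log(chunk(suite).map((batch) => batch.length)); // [ 50, 50, 20 ]
```

Write each batch to its own `tests.json`, run `push-tests`, and pause between batches.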
Browser doesn't open for edit-json
In headless environments, copy the URL printed in the terminal and open it manually.
License
See LICENSE.
