# ultimate-playwright-performance
A comprehensive performance testing utility for Playwright that captures Web Vitals, action timings, resource metrics, and API calls with configurable SLA thresholds.
## Installation

```bash
npm install ultimate-playwright-performance
```

## Quick Start

### Option 1: Direct Usage with Default Thresholds (Simplest)
```js
import { PerformanceHelper } from 'ultimate-playwright-performance';
import { test, expect } from '@playwright/test';

test.describe('Performance Tests', () => {
  let perf;

  test.beforeEach(async ({ page }) => {
    // Uses default thresholds - no config needed!
    perf = new PerformanceHelper(page);
  });

  test('measure page performance', async ({ page }) => {
    // Start capture BEFORE navigation
    await perf.startCapture();

    // Navigate and perform actions
    await page.goto('https://example.com');

    // Track specific actions
    perf.startAction('Login');
    // ... perform login steps
    perf.endAction('Login');

    // Generate reports in afterEach
  });

  test.afterEach(async ({ page }) => {
    await perf.getMetrics();
    perf.printActionTimings();
    perf.printApiSummary();
    await perf.printSummary();

    const timestamp = new Date().toISOString().split('T')[0];
    await perf.generateReport(`test-${timestamp}.json`);
    await perf.generateReport(`test-${timestamp}.html`, 'html');
  });
});
```

### Option 2: Using the Fixture Pattern (No Config Required)
```js
import { test as base, expect } from '@playwright/test';
import { createPerformanceFixture } from 'ultimate-playwright-performance';

// Uses default thresholds - works out of the box!
const test = base.extend({
  perf: createPerformanceFixture(),
});

test.describe('Performance Tests', () => {
  test('measure page performance', async ({ page, perf }) => {
    // Performance helper is automatically available and initialized
    // Auto-capture starts when page loads
    // Reports are automatically generated after test completes!
    await page.goto('https://example.com');

    // Track specific actions
    perf.startAction('Login');
    // ... perform login steps
    perf.endAction('Login');

    // No afterEach needed - reports are generated automatically!
  });
});
```

### Option 3: Using the Default Fixture (No Config, Zero Setup!)
```js
import { test, expect } from 'ultimate-playwright-performance/fixture';

test.describe('Performance Tests', () => {
  test('measure page performance', async ({ page, performance }) => {
    // Performance helper with default thresholds
    // Auto-capture and auto-reporting enabled by default!
    await page.goto('https://example.com');

    performance.startAction('Login');
    // ... perform login steps
    performance.endAction('Login');

    // Reports (JSON, HTML, CSV) are automatically generated after test!
  });
});
```

## Automatic Report Generation
**Good news:** Reports are automatically generated after each test completes! No `afterEach` hook needed! 🎉
By default, the fixture automatically:
- ✅ Captures all performance metrics
- ✅ Prints summaries to console (action timings, API summary, full summary)
- ✅ Generates reports in all formats (JSON, HTML, CSV)
- ✅ Names files based on test title and timestamp
Example:

```js
const test = base.extend({
  perf: createPerformanceFixture(),
});

test('my performance test', async ({ perf }) => {
  perf.startAction('Login');
  // ... perform actions
  perf.endAction('Login');

  // Reports are automatically generated after test completes!
  // No afterEach hook needed!
});
```

### Disable Automatic Reporting
If you want to generate reports manually (e.g., in a custom `afterEach` hook):

```js
const test = base.extend({
  perf: createPerformanceFixture({
    autoReport: false, // Disable automatic reporting
  }),
});

test('my test', async ({ perf }) => {
  // ... test code
});

test.afterEach(async ({ perf }, testInfo) => {
  // Generate reports manually
  await perf.getMetrics();
  perf.setTestName(testInfo.title);
  await perf.generateReport('custom-report.json', ['json', 'html', 'csv']);
});
```

### Customize Report Formats
Choose which formats to generate automatically:
```js
const test = base.extend({
  perf: createPerformanceFixture({
    reporters: ['csv', 'html'], // Only generate CSV and HTML (skip JSON)
  }),
});
```

## Report Formats
### CSV Reports

Generate CSV reports in playwright-performance format with test names, action names, start times, end times, and durations:

```js
// Set test name (required for CSV reporting)
perf.setTestName('My Test Name');

// Generate CSV report
await perf.generateReport('performance-results.csv', 'csv');

// Or generate multiple formats at once
await perf.generateReport('test-results.json', ['csv', 'html', 'json']);
```

**CSV Format:**

```csv
Test Name,Action Name,Start Time,End Time,Duration (ms)
My Test Name,Login,2024-01-15T10:00:00.000Z,2024-01-15T10:00:02.500Z,2500
My Test Name,Search,2024-01-15T10:00:03.000Z,2024-01-15T10:00:03.800Z,800
```

**Automatic CSV Reporting:**
```js
// CSV reports are automatically generated when using the fixture
// Test name is automatically set from testInfo.title
const test = base.extend({
  perf: createPerformanceFixture(), // CSV included by default
});
```

**Manual CSV Reporting** (if `autoReport` is disabled):
```js
test.afterEach(async ({ perf }, testInfo) => {
  await perf.getMetrics();

  // Set test name from testInfo
  perf.setTestName(testInfo.title);

  const timestamp = new Date().toISOString().split('T')[0];
  const testName = testInfo.title.replace(/[^a-zA-Z0-9]/g, '-').toLowerCase();

  // Generate CSV, HTML, and JSON reports
  await perf.generateReport(`${testName}-${timestamp}.json`, ['csv', 'html', 'json']);
});
```

### HTML Reports
Generate beautiful HTML reports with detailed metrics, charts, and visual indicators:

```js
await perf.generateReport('report.html', 'html');
```

### JSON Reports

Generate machine-readable JSON reports for CI/CD integration:

```js
await perf.generateReport('report.json', 'json');
```

### Multiple Formats

Generate multiple report formats in a single call:

```js
// Generate all formats
await perf.generateReport('test-results.json', ['json', 'html', 'csv']);

// Or capture the individual output paths
const [jsonPath, htmlPath, csvPath] = await perf.generateReport('results.json', ['json', 'html', 'csv']);
```

### Consolidated Reports
Generate consolidated performance reports from multiple test runs:

```bash
# Via npm script (if configured in your package.json)
npm run report:consolidated

# Direct execution
node node_modules/ultimate-playwright-performance/bin/generate-consolidated-report.js

# With custom pattern
node node_modules/ultimate-playwright-performance/bin/generate-consolidated-report.js --pattern "*-2024-12-*.json"

# With custom output
node node_modules/ultimate-playwright-performance/bin/generate-consolidated-report.js --pattern "district-coordinator-*.json" --output "district-coordinator-summary.html"
```

Or use the CLI command (if installed globally or via npx):

```bash
npx ultimate-playwright-performance generate-consolidated-report
```

The script will:

- Look for all JSON performance reports in `.artifacts/performance-reports/`
- Aggregate metrics across multiple test runs
- Generate a single HTML report with side-by-side comparisons
- Show trends and averages, and identify performance regressions
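The heart of that aggregation step can be pictured as grouping action timings by name across runs and averaging them. The sketch below is an illustration only, not the package's implementation; the `actions: [{ name, duration }]` report shape is an assumption made for the example and is not the package's actual JSON schema:

```js
// Simplified sketch of cross-run aggregation (hypothetical report shape).
function aggregateRuns(reports) {
  const byAction = new Map();
  for (const report of reports) {
    for (const { name, duration } of report.actions) {
      if (!byAction.has(name)) byAction.set(name, []);
      byAction.get(name).push(duration);
    }
  }
  // Average and max out each action's duration across all runs.
  return [...byAction.entries()].map(([name, durations]) => ({
    name,
    runs: durations.length,
    avgMs: durations.reduce((a, b) => a + b, 0) / durations.length,
    maxMs: Math.max(...durations),
  }));
}

// Two runs of the same test
const summary = aggregateRuns([
  { actions: [{ name: 'Login', duration: 2500 }] },
  { actions: [{ name: 'Login', duration: 2100 }] },
]);
console.log(summary); // → [{ name: 'Login', runs: 2, avgMs: 2300, maxMs: 2500 }]
```

A per-action average plus maximum is enough to spot a regression: a run whose duration sits well above the historical average for that action stands out immediately in the comparison table.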
## Configuration

### ⚡ Default Thresholds (Works Out of the Box!)
**Good news:** The package works immediately with zero configuration! Default thresholds are built in and based on Core Web Vitals recommendations:
- LCP (Largest Contentful Paint): Good < 2.5s, Needs Improvement < 4.0s
- FCP (First Contentful Paint): Good < 1.8s, Needs Improvement < 3.0s
- FID (First Input Delay): Good < 100ms, Needs Improvement < 300ms
- CLS (Cumulative Layout Shift): Good < 0.1, Needs Improvement < 0.25
- TTFB (Time to First Byte): Good < 800ms, Needs Improvement < 1.8s
- INP (Interaction to Next Paint): Good < 200ms, Needs Improvement < 500ms
- Action Timing: Good < 1s, Needs Improvement < 3s
You can start using it right away:
```js
// Direct usage - defaults work!
const perf = new PerformanceHelper(page);

// Fixture pattern - defaults work!
const test = base.extend({
  perf: createPerformanceFixture(),
});
```

### Customizing Thresholds
If you need different thresholds, you have several options:
#### Option 1: Inline Custom Thresholds
Pass thresholds directly when creating the helper:
```js
// Direct usage with custom thresholds
const perf = new PerformanceHelper(page, {
  LCP: { good: 2000, needsImprovement: 3500 },
  FCP: { good: 1500, needsImprovement: 2800 },
  actionTiming: { good: 800, needsImprovement: 2500 },
});

// Fixture with custom thresholds
const test = base.extend({
  perf: createPerformanceFixture({
    thresholds: {
      LCP: { good: 2000, needsImprovement: 3500 },
      actionTiming: { good: 800, needsImprovement: 2500 },
    },
  }),
});
```

#### Option 2: Using Package Default Thresholds
Import and use the default thresholds from the package:
```js
import { defaultThresholds } from 'ultimate-playwright-performance';

// Use defaults directly
const perf = new PerformanceHelper(page, defaultThresholds);

// Or override specific values
const perf = new PerformanceHelper(page, {
  ...defaultThresholds,
  actionTiming: { good: 800, needsImprovement: 2500 },
});
```

#### Option 3: Environment-Based Config File (Recommended for Teams)
Create a config file in your project (e.g., `tests/config/performance-thresholds.js`):

```js
// tests/config/performance-thresholds.js
export const performanceThresholds = {
  default: {
    LCP: { good: 2500, needsImprovement: 4000 },
    FCP: { good: 1800, needsImprovement: 3000 },
    FID: { good: 100, needsImprovement: 300 },
    CLS: { good: 0.1, needsImprovement: 0.25 },
    TTFB: { good: 800, needsImprovement: 1800 },
    INP: { good: 200, needsImprovement: 500 },
    actionTiming: { good: 1000, needsImprovement: 3000 },
  },
  production: {
    // Stricter thresholds for production
    LCP: { good: 2000, needsImprovement: 3500 },
    // ... other metrics
  },
  staging: {
    // Relaxed thresholds for staging
    LCP: { good: 3000, needsImprovement: 5000 },
    // ... other metrics
  },
  development: {
    // Most relaxed for local dev
    LCP: { good: 4000, needsImprovement: 6000 },
    // ... other metrics
  },
};

export function getThresholdsForEnvironment(env = null) {
  const environment = env || process.env.environment || process.env.TEST_ENV || 'default';
  return performanceThresholds[environment] || performanceThresholds.default;
}
```

Copy the example config template:
```bash
# Copy the example config from the package
cp node_modules/ultimate-playwright-performance/src/config/example-thresholds.js tests/config/performance-thresholds.js
```

Use it in your tests:
```js
import { test as base } from '@playwright/test';
import { createPerformanceFixture } from 'ultimate-playwright-performance';
import { getThresholdsForEnvironment } from './config/performance-thresholds.js';

// Get thresholds based on environment variable
const environment = process.env.environment || 'test';
const thresholds = getThresholdsForEnvironment(environment);

const test = base.extend({
  perf: createPerformanceFixture({ thresholds }),
});

// Or set via environment variable automatically:
// TEST_ENV=production npm test
// environment=staging npm test
```

#### Config File Structure
Your config file should export an object with this structure:
```js
export const performanceThresholds = {
  // Environment name
  environmentName: {
    // Core Web Vitals
    LCP: { good: 2500, needsImprovement: 4000 },  // milliseconds
    FCP: { good: 1800, needsImprovement: 3000 },  // milliseconds
    FID: { good: 100, needsImprovement: 300 },    // milliseconds
    CLS: { good: 0.1, needsImprovement: 0.25 },   // score (0-1)
    TTFB: { good: 800, needsImprovement: 1800 },  // milliseconds
    INP: { good: 200, needsImprovement: 500 },    // milliseconds

    // Custom action timings
    actionTiming: { good: 1000, needsImprovement: 3000 }, // milliseconds
  },
};
```

**Notes:**
- `good`: threshold for "good" performance (green status)
- `needsImprovement`: threshold for "needs improvement" (yellow status)
- Values above `needsImprovement` are considered "poor" (red status)
- You can override only the metrics you need; the others fall back to the defaults
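In other words, each metric value is compared against the two cutoffs in order. The package's actual implementation is not shown here; the following is only a hypothetical sketch of that classification rule:

```js
// Hypothetical sketch of the good / needs-improvement / poor classification
// (illustration only, not the package's actual implementation).
function classifyMetric(value, threshold) {
  if (value < threshold.good) return 'good';                        // green
  if (value < threshold.needsImprovement) return 'needs-improvement'; // yellow
  return 'poor';                                                    // red
}

// Example: an LCP of 3200ms against the default LCP thresholds
console.log(classifyMetric(3200, { good: 2500, needsImprovement: 4000 }));
// prints "needs-improvement"
```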
### Where to Put Your Config File

You can put your config file anywhere in your project. Common locations:

- `tests/config/performance-thresholds.js` (recommended)
- `config/performance-thresholds.js`
- `playwright.config/performance-thresholds.js`
Just import it where you need it in your test files.
## Features
- ✅ Core Web Vitals: LCP, FID, CLS, FCP, TTFB, INP
- ✅ Action Timing: Track custom user actions
- ✅ Resource Metrics: Track all page resources
- ✅ API Monitoring: Automatic API request/response tracking
- ✅ HTML Reports: Beautiful, detailed performance reports
- ✅ JSON Reports: Machine-readable reports for CI/CD
- ✅ CSV Reports: playwright-performance compatible format with test names, actions, and timings
- ✅ Configurable Thresholds: Custom SLA thresholds per metric
- ✅ Configurable Reporters: Choose which report formats to generate (CSV, HTML, JSON, or all)
- ✅ Consolidated Reports: Combine multiple test runs with CLI tool
## License
ISC
