
@reportportal/testcafe-reporter-agent-js-testcafe

v5.0.1

Agent for integrating TestCafe with ReportPortal.

@reportportal/agent-js-testcafe

Agent to integrate TestCafe with ReportPortal.

Installation

Install the agent in your project:

npm install --save-dev testcafe-reporter-agent-js-testcafe@npm:@reportportal/testcafe-reporter-agent-js-testcafe

Note: This package is published under the @reportportal scope, so the command above installs it under the testcafe-reporter-agent-js-testcafe alias, which lets TestCafe detect it as a reporter (see the related issue in the TestCafe repository).
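
For reference, after running the command above your package.json devDependencies should contain an npm alias entry along these lines (the exact version range depends on the version npm resolves at install time):

{
  "devDependencies": {
    "testcafe-reporter-agent-js-testcafe": "npm:@reportportal/testcafe-reporter-agent-js-testcafe@^5.0.1"
  }
}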

Configuration

1. Create an rp.json file with your ReportPortal configuration:

{
    "token": "00000000-0000-0000-0000-000000000000",
    "endpoint": "https://your.reportportal.server/api/v1",
    "project": "YourReportPortalProjectName",
    "launch": "YourLauncherName",
    "attributes": [
        {
            "key": "YourKey",
            "value": "YourValue"
        },
        {
            "value": "YourValue"
        }
    ],
    "description": "Your launch description",
    "rerun": true,
    "rerunOf": "launchUuid of already existed launch",
    "mode": "DEFAULT",
    "skippedIssue": true,
    "debug": false
}

| Parameter    | Description |
| ------------ | ----------- |
| token        | User's ReportPortal token used to send requests. It can be found on the user's profile page. |
| endpoint     | URL of your server. For example 'https://server:8080/api/v1'. |
| launch       | Name of the launch at creation. |
| project      | Name of the project in which the launches will be created. |
| rerun        | Default: false. Enable rerun. |
| rerunOf      | UUID of the launch you want to rerun. If not specified, ReportPortal will update the latest launch with the same name. |
| mode         | Launch mode. Allowable values: DEFAULT (by default) or DEBUG. |
| skippedIssue | Default: true. ReportPortal can mark skipped tests as not requiring investigation on the server side. true: skipped tests are considered issues and marked as 'To Investigate' in ReportPortal; false: skipped tests are not marked as 'To Investigate'. |
| debug        | Enables logging of the client-javascript library. Useful for debugging. |

2.1 Create a .testcaferc.json TestCafe configuration file and add agent-js-testcafe to the reporter property:

{
  "browsers": "chrome",
  "src": "./tests/**/*.js",
  "screenshots": {
    "path": "./screenshots/"
  },
  "reporter": [
    {
      "name": "list"
    },
    {
      "name": "agent-js-testcafe"
    }
  ],
  "takeScreenshotsOnFails": true
}

Run tests via the testcafe command.

2.2 Alternatively, if you are using the TestCafe programmatic API, you can create a testcafe.js file and pass the reporter with the provided config manually:

const createTestCafe = require('testcafe');
const { createReporter } = require('testcafe-reporter-agent-js-testcafe/build/createReporter');
const rpConfig = require('./rp.json');

async function start() {
  const testcafe = await createTestCafe('localhost');
  const runner = testcafe.createRunner();

  await runner.reporter(createReporter(rpConfig)).run(); // or just set 'agent-js-testcafe'

  await testcafe.close();
}

start();

Run tests via node testcafe.js.

Note: TestCafe options from .testcaferc.json can be overridden programmatically in testcafe.js.
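
For example, here is a minimal sketch of that pattern; the src path and browser used below are placeholders rather than anything prescribed by the package:

const createTestCafe = require('testcafe');
const { createReporter } = require('testcafe-reporter-agent-js-testcafe/build/createReporter');
const rpConfig = require('./rp.json');

async function start() {
  const testcafe = await createTestCafe('localhost');

  try {
    const failedCount = await testcafe
      .createRunner()
      .src(['./tests/smoke/**/*.js']) // overrides "src" from .testcaferc.json (placeholder path)
      .browsers(['chrome:headless'])  // overrides "browsers" from .testcaferc.json
      .reporter(createReporter(rpConfig))
      .run();

    console.log(`Failed tests: ${failedCount}`);
  } finally {
    await testcafe.close();
  }
}

start();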

Reporting

This reporter provides a Reporting API that can be used directly in tests to send additional data to the report.

There are two ways to add additional data to the tests.

Using the TestCafe meta method:

1. For fixture:

fixture`Getting Started`.page`http://devexpress.github.io/testcafe/example`
  .meta({
    description: 'Suite description',
    attributes: [{ key: 'page', value: 'testCafeExample' }, { value: 'sample' }],
  });

2. For test:

const { Selector } = require('testcafe'); // Selector must be imported in the test file

test('My first test', async (page) => {
  await page
    .typeText('#developer-name', 'John Smith')
    .click('#submit-button')
    .expect(Selector('#article-header').innerText)
    .eql('Thank you, John Smith!');
}).meta({
  description: 'Test form behavior',
  attributes: [{ key: 'test', value: 'form' }],
});

Only the attributes and description properties are supported.

Using ReportingApi:

To start using the ReportingApi in tests, import it from the installed package:

const { ReportingApi } = require('testcafe-reporter-agent-js-testcafe/build/reportingApi');

Reporting API methods

The API provides methods for attaching data (logs, attributes, testCaseId, status).

addAttributes

Add attributes (tags) to the current test. Should be called inside the corresponding test or fixture.

ReportingApi.addAttributes(attributes: Array<Attribute>);

required: attributes

Example:

test('should have the correct attributes', async (t) => {
  ReportingApi.addAttributes([
    {
      key: 'testKey',
      value: 'testValue',
    },
    {
      value: 'testValueTwo',
    },
  ]);
  await t.expect(true).eql(true);
});

setTestCaseId

Set the test case ID for the current test. Should be called inside the corresponding test or fixture.

ReportingApi.setTestCaseId(id: string);

required: id

If testCaseId is not specified, it will be generated automatically.

Example:

test('should have the correct testCaseId', async (t) => {
  ReportingApi.setTestCaseId('itemTestCaseId');
  await t.expect(true).eql(true);
});

log

Send a log to ReportPortal for the current test. Should be called inside the corresponding test or fixture.

ReportingApi.log(level: LOG_LEVELS, message: string, file?: Attachment);

required: level, message

where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL

Example:

const fs = require('fs');
const path = require('path');

test('should contain logs with attachments', async (page) => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.log('INFO', 'info log with attachment', attachment);

  await page.expect(true).eql(true);
});

info, debug, warn, error, trace, fatal

Send a log with the corresponding level to ReportPortal for the current test. Should be called inside the corresponding test or fixture.

ReportingApi.info(message: string, file?: Attachment);
ReportingApi.debug(message: string, file?: Attachment);
ReportingApi.warn(message: string, file?: Attachment);
ReportingApi.error(message: string, file?: Attachment);
ReportingApi.trace(message: string, file?: Attachment);
ReportingApi.fatal(message: string, file?: Attachment);

required: message

Example:

test('should contain logs with attachments', async (page) => {
    ReportingApi.info('Log message');
    ReportingApi.debug('Log message');
    ReportingApi.warn('Log message');
    ReportingApi.error('Log message');
    ReportingApi.trace('Log message');
    ReportingApi.fatal('Log message');
    
    await page.expect(true).eql(true);
});

launchLog

Send a log to ReportPortal for the current launch. Can be called inside any test or fixture.

ReportingApi.launchLog(level: LOG_LEVELS, message: string, file?: Attachment);

required: level, message

where level can be one of the following: TRACE, DEBUG, WARN, INFO, ERROR, FATAL

Example:

const fs = require('fs');
const path = require('path');

test('should contain logs with attachments', async (page) => {
  const fileName = 'test.jpg';
  const fileContent = fs.readFileSync(path.resolve(__dirname, './attachments', fileName));
  const attachment = {
    name: fileName,
    type: 'image/jpg',
    content: fileContent.toString('base64'),
  };
  ReportingApi.launchLog('INFO', 'info log with attachment', attachment);

  await page.expect(true).eql(true);
});

launchInfo, launchDebug, launchWarn, launchError, launchTrace, launchFatal

Send a log with the corresponding level to ReportPortal for the current launch. Can be called inside any test or fixture.

ReportingApi.launchInfo(message: string, file?: Attachment);
ReportingApi.launchDebug(message: string, file?: Attachment);
ReportingApi.launchWarn(message: string, file?: Attachment);
ReportingApi.launchError(message: string, file?: Attachment);
ReportingApi.launchTrace(message: string, file?: Attachment);
ReportingApi.launchFatal(message: string, file?: Attachment);

required: message

Example:

test('should contain logs with attachments', async (page) => {
    ReportingApi.launchInfo('Log message');
    ReportingApi.launchDebug('Log message');
    ReportingApi.launchWarn('Log message');
    ReportingApi.launchError('Log message');
    ReportingApi.launchTrace('Log message');
    ReportingApi.launchFatal('Log message');
    
    await page.expect(true).eql(true);
});

setStatus

Assign the corresponding status to the current test item.

ReportingApi.setStatus(status: string);

required: status

where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled, info, warn

Example:

test('should have status FAILED', async (page) => {
    ReportingApi.setStatus('failed');
    
    await page.expect(true).eql(true);
});

setStatusFailed, setStatusPassed, setStatusSkipped, setStatusStopped, setStatusInterrupted, setStatusCancelled, setStatusInfo, setStatusWarn

Assign the corresponding status to the current test item.

ReportingApi.setStatusFailed();
ReportingApi.setStatusPassed();
ReportingApi.setStatusSkipped();
ReportingApi.setStatusStopped();
ReportingApi.setStatusInterrupted();
ReportingApi.setStatusCancelled();
ReportingApi.setStatusInfo();
ReportingApi.setStatusWarn();

Example:

test('should call ReportingApi to set statuses', async (page) => {
    ReportingApi.setStatusFailed();
    ReportingApi.setStatusPassed();
    ReportingApi.setStatusSkipped();
    ReportingApi.setStatusStopped();
    ReportingApi.setStatusInterrupted();
    ReportingApi.setStatusCancelled();
    ReportingApi.setStatusInfo();
    ReportingApi.setStatusWarn();
});

setLaunchStatus

Assign the corresponding status to the current launch.

ReportingApi.setLaunchStatus(status: string);

required: status

where status must be one of the following: passed, failed, stopped, skipped, interrupted, cancelled, info, warn

Example:

test('launch should have status FAILED', async (page) => {
    ReportingApi.setLaunchStatus('failed');
    
    await page.expect(true).eql(true);
});

setLaunchStatusFailed, setLaunchStatusPassed, setLaunchStatusSkipped, setLaunchStatusStopped, setLaunchStatusInterrupted, setLaunchStatusCancelled, setLaunchStatusInfo, setLaunchStatusWarn

Assign the corresponding status to the current launch.

ReportingApi.setLaunchStatusFailed();
ReportingApi.setLaunchStatusPassed();
ReportingApi.setLaunchStatusSkipped();
ReportingApi.setLaunchStatusStopped();
ReportingApi.setLaunchStatusInterrupted();
ReportingApi.setLaunchStatusCancelled();
ReportingApi.setLaunchStatusInfo();
ReportingApi.setLaunchStatusWarn();

Example:

test('should call ReportingApi to set launch statuses', async (page) => {
    ReportingApi.setLaunchStatusFailed();
    ReportingApi.setLaunchStatusPassed();
    ReportingApi.setLaunchStatusSkipped();
    ReportingApi.setLaunchStatusStopped();
    ReportingApi.setLaunchStatusInterrupted();
    ReportingApi.setLaunchStatusCancelled();
    ReportingApi.setLaunchStatusInfo();
    ReportingApi.setLaunchStatusWarn();
});

Integration with Sauce Labs

To integrate with Sauce Labs, just add the following attributes to the test case:

[{
 "key": "SLID",
 "value": "# of the job in Sauce Labs"
}, {
 "key": "SLDC",
 "value": "EU (your job region in Sauce Labs)"
}]
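
For instance, a minimal sketch that attaches these attributes from inside a test via ReportingApi.addAttributes; the job number and region values below are placeholders, and the page URL is just the TestCafe example page used earlier in this readme:

const { ReportingApi } = require('testcafe-reporter-agent-js-testcafe/build/reportingApi');

fixture`Sauce Labs integration`.page`http://devexpress.github.io/testcafe/example`;

test('reports the Sauce Labs job for this test', async (t) => {
  // Placeholder values: use the real Sauce Labs job number and data center of your run.
  ReportingApi.addAttributes([
    { key: 'SLID', value: '1234567890' },
    { key: 'SLDC', value: 'EU' },
  ]);

  await t.expect(true).eql(true);
});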