apify-test-tools
v0.5.5
Apify Test Tools
Getting Started
- Install the package: npm i -D apify-test-tools
- Make sure your vitest version is at least 3.2.0, because this package uses annotate.
- Make sure that target and module in your tsconfig.json's compilerOptions are set to ES2022.
- Create the test directories: mkdir -p test/platform/core
  - Core (hourly) tests should go to test/platform/core.
  - Daily tests should go to test/platform.
- Set up GitHub workflows (TODO)
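The compilerOptions requirement above might look like this in tsconfig.json (a minimal sketch; a real project will set more options):

```json
{
    "compilerOptions": {
        "target": "ES2022",
        "module": "ES2022"
    }
}
```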
File structure:
google-maps
├── actors
└── src
└── test
├── unit
└── platform
├── core <- Core tests need to be inside core directory
│ └── core.test.ts
├── some.test.ts <- Other tests can be defined anywhere inside platform directory
    └── some-other.test.ts

GitHub workflows
There should be 4 GH workflow files in .github/workflows.
platform-tests-core.yaml
name: Platform tests - Core
on:
  schedule:
    # Runs at the start of every hour
    - cron: '0 * * * *'
  workflow_dispatch:
jobs:
  platformTestsCore:
    uses: apify-store/github-actions-source/.github/workflows/platform-tests.yaml@new_master
    with:
      subtest: core
    secrets: inherit

platform-tests-daily.yaml
name: Platform tests - Daily
on:
  schedule:
    # Runs at 00:00 UTC every day
    - cron: '0 0 * * *'
  workflow_dispatch:
jobs:
  platformTestsDaily:
    uses: apify-store/github-actions-source/.github/workflows/platform-tests.yaml@new_master
    secrets: inherit

pr-build-devel-test.yaml
name: PR Test
on:
  pull_request:
    branches: [ master ]
jobs:
  buildDevelAndTest:
    uses: apify-store/github-actions-source/.github/workflows/pr-build-test.yaml@new_master
    secrets: inherit

release-latest.yaml
name: Release latest
on:
  push:
    branches: [ master ]
jobs:
  buildLatest:
    uses: apify-store/github-actions-source/.github/workflows/push-build-latest.yaml@new_master
    secrets: inherit

Differences in writing tests
Test structure
To run the tests concurrently, the actor run previously had to be started outside of it and then awaited inside it. This is no longer needed: everything can now live inside the test function, which is called testActor.
Before:
({ it, xit, run, expect, expectAsync, input, describe }: TestSpecInputs) => {
    describe('test', () => {
        {
            const runPromise = run({ actorId, input });
            it('actor test 1', async () => {
                const runResult = await runPromise;
                // your checks
            });
        }
        {
            const runPromise = run({ actorId, input });
            it('actor test 2', async () => {
                const runResult = await runPromise;
                // your checks
            });
        }
    });
})

After:
import { describe, testActor } from 'apify-test-tools';

describe('test', () => {
    testActor(actorId, 'actor test 1', async ({ expect, run }) => {
        const runResult = await run({ input });
        // your checks
    });
    testActor(actorId, 'actor test 2', async ({ expect, run }) => {
        const runResult = await run({ input });
        // your checks
    });
})

testActor extends expect with a couple of custom matchers (e.g. toFinishWith) and provides a run function that calls the correct actor, based on its first parameter.
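The way run is tied to the actor can be pictured like this (an illustrative sketch with hypothetical names, not the library's source):

```typescript
// Illustrative only: testActor pre-binds the actor id, so the `run` passed
// into the test body needs just the input. All names here are hypothetical.
type CallActor = (actorId: string, input: unknown) => Promise<unknown>;

function makeRun(callActor: CallActor, actorId: string) {
    // The returned `run` closes over actorId, mirroring how testActor
    // picks the correct actor from its first parameter.
    return ({ input }: { input: unknown }) => callActor(actorId, input);
}
```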
Validating basic run attributes
Before:
await expectAsync(runResult).toHaveStatus('SUCCEEDED');
await expectAsync(runResult).withLog((log) => {
    expect(log).not.toContain('ReferenceError');
    expect(log).not.toContain('TypeError');
});
await expectAsync(runResult).withStatistics((stats) => {
    expect(stats.requestsRetries)
        .withContext(runResult.format('Request retries'))
        .toBeLessThan(3);
    expect(stats.crawlerRuntimeMillis)
        .withContext(runResult.format('Run time'))
        .toBeWithinRange(600, 600_000);
});
await expectAsync(runResult).withDataset(({ dataset }) => {
    expect(dataset.items?.length)
        .withContext(runResult.format('Dataset cleanItemCount'))
        .toBe(100);
});

After:
await expect(runResult).toFinishWith({
    datasetItemCount: 100,
})

You can also specify a range:
await expect(runResult).toFinishWith({
    datasetItemCount: { min: 80, max: 120 },
})

Here is a full example of what you can validate with toFinishWith:
await expect(runResult).toFinishWith({
    // These are the defaults
    status: 'SUCCEEDED',
    duration: {
        min: 600, // 0.6 sec
        max: 600_000, // 10 min
    },
    failedRequests: 0,
    requestsRetries: { max: 3 },
    forbiddenLogs: [
        'ReferenceError',
        'TypeError',
    ],
    // only datasetItemCount is required
    datasetItemCount: { min: 80, max: 120 },
    // optional
    chargedEventCounts: {
        'actor-start': 1,
        'place-scraped': 9,
    },
})

Custom validations
Before:
expect(place.title)
    .withContext(runResult.format(`London Eye's title`))
    .toEqual('lastminute.com London Eye')

After:

expect(place.title, `London Eye's title`).toEqual('lastminute.com London Eye')

Custom validation functions
You can now create your own functions wrapping common validation logic, e.g. in test/platform/utils.ts, and import them in your test files.
import { ExpectStatic } from 'apify-test-tools';

export const validateItem = (expect: ExpectStatic, item: any) => {
    expect(item.title, 'Item title').toBeString();
};
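Using such a helper from a test file might look like this (a hedged sketch: the file path, actorId, and the dataset shape on the run result are assumptions based on the examples above):

```typescript
// Hypothetical test/platform/some.test.ts; names and paths are illustrative.
import { describe, testActor } from 'apify-test-tools';
import { validateItem } from './utils';

describe('items', () => {
    testActor(actorId, 'every item has a valid title', async ({ expect, run }) => {
        const runResult = await run({ input });
        // Assumed shape: dataset items exposed on the run result,
        // as in the withDataset example above.
        for (const item of runResult.dataset?.items ?? []) {
            validateItem(expect, item);
        }
    });
});
```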