
apify-test-tools

v0.5.5


Apify Test Tools

Contributing

Getting Started

  1. Install the package: npm i -D apify-test-tools
    • because it uses annotate, your vitest version must be at least 3.2.0
    • make sure that target and module in your tsconfig.json's compilerOptions are set to ES2022
  2. Create the test directories: mkdir -p test/platform/core
    • core (hourly) tests go in test/platform/core
    • daily tests go in test/platform
  3. Set up the GitHub workflows (TODO)
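The tsconfig requirement from step 1 might look like this (a minimal sketch; your actual tsconfig.json will likely contain more options):

```json
{
    "compilerOptions": {
        "target": "ES2022",
        "module": "ES2022"
    }
}
```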

File structure:

google-maps
├── actors
├── src
└── test
    ├── unit
    └── platform
        ├── core                  <- Core tests must be inside the core directory
        │   └── core.test.ts
        ├── some.test.ts          <- Other tests can live anywhere inside the platform directory
        └── some-other.test.ts

GitHub workflows

There should be four GitHub workflow files in .github/workflows.

platform-tests-core.yaml

name: Platform tests - Core

on:
    schedule:
        # Runs at the start of every hour
        - cron: '0 * * * *'
    workflow_dispatch:

jobs:
    platformTestsCore:
        uses: apify-store/github-actions-source/.github/workflows/platform-tests.yaml@new_master
        with:
            subtest: core
        secrets: inherit

platform-tests-daily.yaml

name: Platform tests - Daily

on:
    schedule:
        # Runs at 00:00 UTC every day
        - cron: '0 0 * * *'
    workflow_dispatch:

jobs:
    platformTestsDaily:
        uses: apify-store/github-actions-source/.github/workflows/platform-tests.yaml@new_master
        secrets: inherit

pr-build-devel-test.yaml

name: PR Test

on:
    pull_request:
        branches: [ master ]

jobs:
    buildDevelAndTest:
        uses: apify-store/github-actions-source/.github/workflows/pr-build-test.yaml@new_master
        secrets: inherit

release-latest.yaml

name: Release latest

on:
    push:
        branches: [ master ]

jobs:
    buildLatest:
        uses: apify-store/github-actions-source/.github/workflows/push-build-latest.yaml@new_master
        secrets: inherit

Differences in writing tests

Test structure

To run tests concurrently, the actor run previously had to be started outside of it and only awaited inside it. This is no longer needed: everything can live inside it, here called testActor.

Before:

({ it, xit, run, expect, expectAsync, input, describe }: TestSpecInputs) => {
    describe('test', () => {
        {
            const runPromise = run({ actorId, input });
            it('actor test 1', async () => {
                const runResult = await runPromise;

                // your checks
            });
        }

        {
            const runPromise = run({ actorId, input });
            it('actor test 2', async () => {
                const runResult = await runPromise;

                // your checks
            });
        }
    });
})

After:

import { describe, testActor } from 'apify-test-tools';

describe('test', () => {
    testActor(actorId, 'actor test 1', async ({ expect, run }) => {
        const runResult = await run({ input });

        // your checks
    });

    testActor(actorId, 'actor test 2', async ({ expect, run }) => {
        const runResult = await run({ input });

        // your checks
    });
});

testActor extends expect with a couple of custom matchers (e.g. toFinishWith) and provides a run function that calls the correct actor based on its first parameter.
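For intuition, the way a run function can be pre-bound to an actor id might look roughly like this (a pure-TypeScript sketch; bindActor and fakeRun are made-up names, not part of the apify-test-tools API):

```typescript
// Illustrative sketch only: `bindActor` and `fakeRun` are hypothetical
// names, not part of the apify-test-tools API.
type RunFn = (opts: { actorId: string; input: unknown }) => Promise<string>;

// Returns a `run` that no longer needs the actor id, mirroring how
// testActor's callback receives a pre-bound `run({ input })`.
function bindActor(actorId: string, run: RunFn) {
    return (opts: { input: unknown }) => run({ actorId, input: opts.input });
}

// Fake run function, standing in for a real actor call.
const fakeRun: RunFn = async ({ actorId }) => `run-of-${actorId}`;

const run = bindActor('google-maps', fakeRun);
run({ input: {} }).then((result) => console.log(result)); // prints "run-of-google-maps"
```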


Validating basic run attributes

Before:

await expectAsync(runResult).toHaveStatus('SUCCEEDED');

await expectAsync(runResult).withLog((log) => {
    expect(log).not.toContain('ReferenceError');
    expect(log).not.toContain('TypeError');
});

await expectAsync(runResult).withStatistics((stats) => {
    expect(stats.requestsRetries)
        .withContext(runResult.format('Request retries'))
        .toBeLessThan(3);
    expect(stats.crawlerRuntimeMillis)
        .withContext(runResult.format('Run time'))
        .toBeWithinRange(600, 600_000);
});

await expectAsync(runResult).withDataset(({ dataset }) => {
    expect(dataset.items?.length)
        .withContext(runResult.format('Dataset cleanItemCount'))
        .toBe(100);
});

After:

await expect(runResult).toFinishWith({
    datasetItemCount: 100,
});

You can also specify a range:

await expect(runResult).toFinishWith({
    datasetItemCount: { min: 80, max: 120 },
});
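To illustrate the semantics of these specs: a plain number means an exact match (the Before example used .toBe(100)), while { min, max } means an inclusive range. A small pure-TypeScript sketch of that check (RangeSpec and inRange are hypothetical names, not the library's internals):

```typescript
// Illustrative sketch only: `RangeSpec` and `inRange` are hypothetical
// names, not part of the apify-test-tools API.
type RangeSpec = number | { min?: number; max?: number };

// A plain number must match exactly; { min, max } is an inclusive range,
// with either bound optional.
function inRange(actual: number, spec: RangeSpec): boolean {
    if (typeof spec === 'number') return actual === spec;
    if (spec.min !== undefined && actual < spec.min) return false;
    if (spec.max !== undefined && actual > spec.max) return false;
    return true;
}

console.log(inRange(100, { min: 80, max: 120 })); // true
console.log(inRange(5, { max: 3 }));              // false
```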

Here is a full example of what you can validate with toFinishWith:

await expect(runResult).toFinishWith({
    // These are the defaults
    status: 'SUCCEEDED',
    duration: {
        min: 600, // 0.6 sec
        max: 600_000, // 10 min
    },
    failedRequests: 0,
    requestsRetries: { max: 3 },
    forbiddenLogs: [
        'ReferenceError',
        'TypeError',
    ],

    // only datasetItemCount is required
    datasetItemCount: { min: 80, max: 120 },

    // optional
    chargedEventCounts: {
        'actor-start': 1,
        'place-scraped': 9,
    },
});

Custom validations

Before:

expect(place.title)
    .withContext(runResult.format(`London Eye's title`))
    .toEqual('lastminute.com London Eye');

After:

expect(place.title, `London Eye's title`).toEqual('lastminute.com London Eye')

Custom validation functions

You can now create your own functions that wrap common validation logic, e.g. in test/platform/utils.ts, and import them in test files.

import { ExpectStatic } from 'apify-test-tools';

export const validateItem = (expect: ExpectStatic, item: any) => {
    expect(item.title, 'Item title').toBeString();
};