
crawlee-one

v3.0.2


Production-ready web scraping in a single function call. Built on Crawlee. Data transforms, caching, privacy compliance, and error tracking -- out of the box.


CrawleeOne


Production-ready web scraping. Out of the box.

CrawleeOne wraps Crawlee with everything production scrapers need -- data transforms, privacy compliance, error tracking, caching, and more -- in a single function call. Write the extraction logic. CrawleeOne handles the rest.

Works seamlessly with Apify, but the storage backend is pluggable -- you're not locked in.

npm install crawlee-one

Quick start

import { crawleeOne } from 'crawlee-one';

await crawleeOne({
  type: 'cheerio',
  routes: {
    mainPage: {
      match: /example\.com\/home/i,
      handler: async (ctx) => {
        const { $, pushData, pushRequests } = ctx;
        await pushData([{ title: $('h1').text() }], {
          privacyMask: { author: true },
        });
        await pushRequests([{ url: 'https://example.com/page/2' }]);
      },
    },
    otherPage: {
      match: (url, ctx) => url.startsWith('/') && ctx.$('.author').length > 0,
      handler: async (ctx) => {
        /* ... */
      },
    },
  },
});

That's it. No Actor.main() boilerplate, no manual router setup, no input wiring. CrawleeOne handles initialization, routing, input resolution, error handling, and teardown.

Why CrawleeOne?

One function. Full crawler.

Replace 100+ lines of Actor + Router + input boilerplate with a single crawleeOne() call.

Switch strategies, not code.

Go from cheerio to playwright by changing one prop. Your route handlers stay the same.

Reshape output without touching scraper code.

Users filter, transform, rename, and limit results via input config -- no code changes needed.

{
  "outputPickFields": ["name", "email"],
  "outputRenameFields": { "photo": "media.photos[0].url" },
  "outputMaxEntries": 500,
  "outputFilter": "(entry) => entry.rating > 4.0"
}
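To illustrate the semantics of those four options (this is a conceptual sketch, not CrawleeOne's actual implementation; the helper name is hypothetical and, unlike the real `outputRenameFields`, it only handles flat keys, not nested paths like `media.photos[0].url`):

```typescript
// Illustrative sketch of the output-transform pipeline: filter,
// pick, rename, and limit applied to an array of scraped entries.
type Entry = Record<string, unknown>;

function applyOutputOptions(
  entries: Entry[],
  opts: {
    pickFields?: string[];
    renameFields?: Record<string, string>; // old name -> new name (flat keys only)
    maxEntries?: number;
    filter?: (entry: Entry) => boolean;
  }
): Entry[] {
  let out = entries;
  if (opts.filter) out = out.filter(opts.filter);
  if (opts.pickFields) {
    out = out.map((e) =>
      Object.fromEntries(
        opts.pickFields!.filter((k) => k in e).map((k) => [k, e[k]])
      )
    );
  }
  if (opts.renameFields) {
    out = out.map((e) => {
      const copy: Entry = { ...e };
      for (const [from, to] of Object.entries(opts.renameFields!)) {
        if (from in copy) {
          copy[to] = copy[from];
          delete copy[from];
        }
      }
      return copy;
    });
  }
  if (opts.maxEntries != null) out = out.slice(0, opts.maxEntries);
  return out;
}
```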

Fully typed out of the box.

Route handlers and context objects are typed based on your crawler type. TypeScript knows whether you have ctx.page or ctx.$ -- no extra setup.

Privacy compliance, built in.

Mark fields as personal data. CrawleeOne redacts them automatically when includePersonalData is off.
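The redaction behaves roughly like the sketch below. This is not the library's code; the function name and the `<redacted>` placeholder are assumptions for illustration, though `privacyMask` and `includePersonalData` are the option names from the examples above:

```typescript
// Conceptual sketch of privacy masking: fields flagged in the mask
// are replaced with a placeholder unless personal data is allowed.
type Entry = Record<string, unknown>;

function redactPersonalData(
  entries: Entry[],
  privacyMask: Record<string, boolean>,
  includePersonalData: boolean
): Entry[] {
  if (includePersonalData) return entries; // user opted in, keep everything
  return entries.map((e) => {
    const copy: Entry = { ...e };
    for (const field of Object.keys(privacyMask)) {
      if (privacyMask[field] && field in copy) copy[field] = '<redacted>';
    }
    return copy;
  });
}
```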

Incremental scraping.

Only process entries you haven't seen before. Built-in cache with KeyValueStore tracks what's been scraped across runs.
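The idea can be sketched as follows; an in-memory `Set` stands in for the persistent `KeyValueStore`-backed cache, and the function name is hypothetical:

```typescript
// Sketch of incremental scraping: keep only entries whose key has
// not been seen before, and record new keys for subsequent runs.
function filterNewEntries<T>(
  entries: T[],
  seen: Set<string>,
  keyOf: (entry: T) => string
): T[] {
  const fresh = entries.filter((e) => !seen.has(keyOf(e)));
  for (const e of fresh) seen.add(keyOf(e)); // remember for later runs
  return fresh;
}
```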

Errors captured, not lost.

Failed requests are saved to a dataset automatically. Plug in Sentry with one line, or implement your own telemetry.

Match routes by URL or content.

Regex, functions, or both. CrawleeOne auto-routes unlabeled requests to the right handler.
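Conceptually, dispatching a URL against regex and predicate matchers works like the sketch below. This is not CrawleeOne's internals; the real predicate matchers also receive a context argument, which is omitted here for brevity:

```typescript
// Sketch of match-based auto-routing: a matcher is either a RegExp
// tested against the URL or a predicate function on the URL.
type Matcher = RegExp | ((url: string) => boolean);

function resolveRoute(
  url: string,
  routes: Record<string, Matcher>
): string | undefined {
  for (const [label, match] of Object.entries(routes)) {
    const hit = match instanceof RegExp ? match.test(url) : match(url);
    if (hit) return label; // first matching route wins in this sketch
  }
  return undefined; // no route matched
}
```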

See all features

Before and after

With CrawleeOne:

await crawleeOne({
  type: 'cheerio',
  routes: {
    mainPage: {
      match: /example\.com\/home/i,
      handler: async (ctx) => {
        const data = [
          /* ... */
        ];
        await ctx.pushData(data, { privacyMask: { author: true } });
        await ctx.pushRequests([{ url: 'https://...' }]);
      },
    },
  },
});

Without CrawleeOne (vanilla Crawlee + Apify):

import { Actor } from 'apify';
import { CheerioCrawler, createCheerioRouter } from 'crawlee';

await Actor.main(async () => {
  const rawInput = await Actor.getInput();
  const input = {
    ...rawInput,
    ...(await fetchInput(rawInput.inputFromUrl)),
    ...(await runFunc(rawInput.inputFromFunc)),
  };

  const router = createCheerioRouter();

  router.addHandler('mainPage', async (ctx) => {
    await onBeforeHandler(ctx);
    const data = [
      /* ... */
    ];
    const finalData = await transformAndFilterData(data, ctx, input);
    const dataset = await Actor.openDataset(input.datasetId);
    await dataset.pushData(finalData);
    const reqs = ['https://...'].map((url) => ({ url }));
    const finalReqs = await transformAndFilterReqs(reqs, ctx, input);
    const queue = await Actor.openRequestQueue(input.requestQueueId);
    await queue.addRequests(finalReqs);
    await onAfterHandler(ctx);
  });

  router.addDefaultHandler(async (ctx) => {
    await onBeforeHandler(ctx);
    const url = ctx.request.loadedUrl || ctx.request.url;
    if (url.match(/example\.com\/home/i)) {
      const req = { url, userData: { label: 'mainPage' } };
      const finalReqs = await transformAndFilterReqs([req], ctx, input);
      const queue = await Actor.openRequestQueue(input.requestQueueId);
      await queue.addRequests(finalReqs);
    }
    await onAfterHandler(ctx);
  });

  const crawler = new CheerioCrawler({ ...input, requestHandler: router });
  await crawler.run(['https://...']);
});

And that's far from everything -- the vanilla version still doesn't include data transforms, privacy masking, error tracking, caching, or input validation.

Common use cases

CrawleeOne scrapers support these out of the box, all configurable via input:

| Use case           | What it does                                                    |
| ------------------ | --------------------------------------------------------------- |
| Import URLs        | Load URLs from databases, datasets, or custom functions.        |
| Data transforms    | Rename, select, limit, and reshape output without code changes. |
| Request filtering  | Control what gets scraped to save time and money.               |
| Caching            | Incremental scraping -- only process new entries.               |
| Privacy compliance | Redact personal data with a single toggle.                      |
| Error capture      | Centralized error tracking across scrapers.                     |

See all 12 use cases

Getting started

Installation

npm install crawlee-one

For scraper developers

  1. Read the getting started guide for a full walkthrough of crawleeOne() and its options.
  2. See example projects for real-world usage.
  3. Managing multiple crawlers in one project? Use codegen to generate typed helper functions from a config file.

For end users

Scrapers built with CrawleeOne are configurable by end users (via the Apify platform). Transform, filter, limit, and reshape scraped data and requests -- all through input fields, no code changes needed.

User guide

Apify actor input page

Documentation

| Document                 | Description                                               |
| ------------------------ | --------------------------------------------------------- |
| Getting started          | Developer guide with full crawleeOne() options reference. |
| Features                 | Complete feature catalog with code examples.              |
| Use cases                | All 12 use cases with links to detailed guides.           |
| Input reference          | All available input fields.                               |
| Deploying to Apify       | Step-by-step Apify deployment guide.                      |
| Codegen                  | Generate typed crawler definitions from config.           |
| Integrations             | Custom telemetry and storage backends.                    |
| User guide               | Guide for end users of CrawleeOne scrapers.               |
| API reference            | Auto-generated TypeScript API docs.                       |
| Crawlee & Apify overview | Background on how Crawlee and Apify work.                 |

Example projects

Contributing

Found a bug or have a feature request? Please open an issue.

When contributing code, please fork the repo and submit a pull request. See CONTRIBUTING.md for dev setup and guidelines.

Development

Want to build, test, or hack on CrawleeOne? The development guide covers prerequisites, all npm scripts, project structure, architecture, and testing strategy.

Supporting CrawleeOne

CrawleeOne is a labour of love. If you find it useful, you can support the project on Buy Me a Coffee.