
trawlr

v1.0.3


A job scheduler and analysis tool for web scraping (and other) tasks.


Datasources

Currently the following datasources are implemented:

  • facebook posts and reactions: scrapes Facebook posts, comments and reactions (like, heart, etc.)
  • gab (nazi-twitter): crawls posts for a user
  • google dorking: finds interesting files and downloads them
  • json to csv: converts a JSON array into CSV (see the sketch after this list)
  • mail: sends mails and files; mostly useful in pipelines
  • masscan: UDP-based port scanner (requires Docker)
  • motiondetection: runs motion analysis on a directory of video files
  • onionlist: downloads the Tor catalogue from onionlist.org
  • onions.danwin1210.de: downloads the Tor catalogue from danwin1210.de and creates a screenshot of each website in the result
  • tiktok: gets video metadata per hashtag, downloads the videos and analyses the text using EasyOCR
  • url: generic HTTP scraper
  • urlscreenshotter: scrapes a comma-separated list of URLs and creates a screenshot of each
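
For a concrete picture of what the json to csv datasource does, here is a minimal Node.js sketch of that transformation. It is an illustration, not trawlr's actual implementation, and the job_id.json / job_id.csv file names simply follow the naming convention described in the next section.

// Minimal sketch of the json-to-csv transformation; function and file
// names are illustrative, not trawlr's actual API.
const fs = require('fs');

function jsonArrayToCsv(records) {
  // Take the union of keys across all records so every column is covered.
  const columns = [...new Set(records.flatMap((r) => Object.keys(r)))];
  const escape = (v) => `"${String(v ?? '').replace(/"/g, '""')}"`;
  const header = columns.map(escape).join(',');
  const rows = records.map((r) => columns.map((c) => escape(r[c])).join(','));
  return [header, ...rows].join('\n');
}

const records = JSON.parse(fs.readFileSync('job_id.json', 'utf8'));
fs.writeFileSync('job_id.csv', jsonArrayToCsv(records));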

Create your own datasource

  • copy the template dir in ./jobs
  • define the fields needed to start the job in fields.js (a sketch follows this list)
  • a job can output one or multiple files
  • don't output directories; use archives instead
  • use job_id.ext (e.g. job_id.json) as the filename
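
The README doesn't show the shape of fields.js, so here is a hedged sketch of what a field-definition module for a custom datasource might look like. The property names (name, label, type, required) are assumptions for illustration, not trawlr's confirmed schema.

// Hypothetical fields.js for a custom datasource. The shape of each
// field object is an assumption, not trawlr's documented API.
module.exports = [
  {
    name: 'start_url',               // key the worker reads when the job starts (assumed)
    label: 'Start URL',              // label shown in the job-creation GUI (assumed)
    type: 'text',
    required: true,
  },
  {
    name: 'max_pages',
    label: 'Maximum pages to crawl',
    type: 'number',
    required: false,
  },
];

Presumably a declaration like this is what lets the GUI render a job-creation form without knowing anything else about the datasource.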

Features

  • simple configuration of actions/datasources, including from 3rd-party modules/repos
  • job scheduling and monitoring
  • sqlite, csv and json browser
  • separation of datasets/artifacts (one archive per crawl)
  • scalable number of workers (also on other machines)

Architecture

Frontend and API

  • GUI to create and schedule jobs
  • Displays pending, running and done jobs
  • Displays csv and sqlite datasets

Worker(s)

  • Can be distributed (workers and C&C in different locations/servers)
  • Jobs are managed through json files (and can be distributed with an adapter like PouchDB); a sketch of such a job file follows this list
  • Multithreaded
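
Because jobs are managed through plain json files, a worker only needs a shared or replicated directory to pick up work. Below is a hedged sketch of creating such a job file; every field name and the jobs/pending path are assumptions for illustration, since the README does not document the schema.

// Hypothetical job file for a worker to pick up. Every field name and
// the jobs/pending path are assumptions; trawlr's real schema may differ.
const fs = require('fs');

const job = {
  job_id: '2d9f1c',
  datasource: 'url',       // which datasource in ./jobs to run
  fields: {                // values for the fields declared in fields.js
    start_url: 'https://example.com',
    max_pages: 10,
  },
  status: 'pending',       // e.g. pending -> running -> done
};

fs.writeFileSync(`jobs/pending/${job.job_id}.json`, JSON.stringify(job, null, 2));

Since the job is just a file, replicating the directory (for example with a PouchDB adapter, as mentioned above) is enough to hand work to workers on other machines.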

Install & run

Using NPM

npm i
npm run all