
html-scrapper

v0.0.3

Published

A general-purpose scraper written using cheerio, request, async, etc. It can scrape data from HTML pages based on a schema definition. A schema definition includes a CSS selector and a function that accepts the selected element and returns the data. It is simple.

Downloads

12

Readme

# html-scrapper

A general-purpose web scraper written in server-side JavaScript, designed for ease of use.

# Example

An example of scraping GitHub's explore page.


```js
var scrapper = require('html-scrapper'); // within this repo's examples: require('../')

var Source = scrapper.Source;
var Extractor = scrapper.Extractor;
var Fn = scrapper.Fn;

// Fetch the page with an HTTP GET request.
var github = new Source('get', 'https://github.com/explore');

// Each matched <li> becomes one object. Values default to trimmed text;
// a nested { $rule, $fn } entry applies an extraction function instead.
var dataSchema = {
    trendingRepos: [{
        $rule: '#explore-trending .explore-collection > ul > li',
        name: '.repo-name',
        forks: ':nth-child(1)',
        stars: {
            $rule: ':nth-child(2)',
            $fn: Fn.asInt
        }
    }]
};

var extractor = new Extractor(dataSchema);
github.read(function (err, res) {
    if (err) throw err;
    var data = extractor.extract(res.body);
    console.log(data);
});

/*
Returns:

{ trendingRepos:
   [ { name: 'calmh/syncthing', forks: '77', stars: 2019 },
     { name: 'quilljs/quill', forks: '47', stars: 1312 },
     { name: 'filamentgroup/tablesaw', forks: '31', stars: 1128 },
     { name: 'atom/atom', forks: '142', stars: 1035 },
     { name: 'dennis714/RE-for-beginners', forks: '67', stars: 1072 },
     { name: 'mdo/wtf-forms', forks: '43', stars: 912 } ] }
*/
```

# Usage

## Collection Fn

Fn contains some useful data-extraction functions that can be used as $fn.

Available functions:

  1. text: trimmed text content
  2. link: the href attribute
  3. data: data-name=value attributes are returned as { name: value }
  4. classes: the class attribute
  5. asInt: the text is parsed as an integer, with all commas removed first
  6. asFloat: same as asInt, but parses the text as a float
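
Beyond the built-ins, $fn can be any custom function. Per the package description it receives the selected element and returns the extracted value; the sketch below assumes that element is a cheerio selection with the usual .text() method (the package is cheerio-based), which the docs do not spell out:

```js
var scrapper = require('html-scrapper');

var Extractor = scrapper.Extractor;
var Fn = scrapper.Fn;

var schema = {
    posts: [{
        $rule: '.post',
        title: '.title',                              // defaults to trimmed text
        url: { $rule: 'a', $fn: Fn.link },            // href attribute
        comments: { $rule: '.count', $fn: Fn.asInt },
        // custom $fn: receives the selected element, returns the value
        author: {
            $rule: '.author',
            $fn: function (el) {
                return el.text().trim().toLowerCase();
            }
        }
    }]
};

var extractor = new Extractor(schema);
// extractor.extract(html) then yields { posts: [ { title, url, comments, author }, ... ] }
```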

## Class Browser

A simple Browser class implementation. It uses the request module for HTTP requests and stores session data on its instance. Only the get method is implemented right now.
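
A minimal sketch of fetching a page with Browser and handing it to an Extractor. Only the existence of get is documented; the signature below (a URL plus a node-style callback, mirroring Source's read) is an assumption:

```js
var scrapper = require('html-scrapper');

var Browser = scrapper.Browser;
var Extractor = scrapper.Extractor;

var browser = new Browser();    // session data is stored on this instance
var extractor = new Extractor({ heading: 'h1' });

// Assumed signature: get(url, cb) with cb(err, res), as in Source's read.
browser.get('https://example.com/', function (err, res) {
    if (err) throw err;
    console.log(extractor.extract(res.body));
});
```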

## Class Crawler

A simple web Crawler class. It uses the following libraries:

  • job-manage: the backbone of Crawler. JobManager is an asynchronous queue-manager library; it is used to collect page URLs, scrape each page, manage concurrency, and start, pause, and resume the crawl.
  • BufferedSink: used to write the scraped data.

It needs the following options passed to its constructor (a usage sketch follows below):

  • loadPageList: a function with signature function(pageLoaderData, cb). It is used to collect the URLs of pages that need to be scraped.
    • pageLoaderData: a plain object this function can use to store arbitrary state between calls. If bundle.$endReached is set to true, loadPageList will not be invoked again.
    • cb: a callback with signature function(err, [pageData, ...]).
  • scrapePage: a function function(pageData, cb).
    • pageData: a single item from the collection passed to loadPageList's callback.
    • cb: a callback function.
  • bs: a BufferedSink instance used to write the data to the output medium. See examples/blogspot for a simple implementation that appends data to a JSON file.
  • pageListFilter: an optional function. If present, all output from loadPageList is passed through it. An empty filtered result is not a problem: Crawler handles it by repeatedly calling loadPageList until it gets some data or loadPageList itself returns an empty result.
  • onError: a function function(err) called on error. Errors do not stop the Crawler.
  • onFinish: a function called once all tasks are complete.
  • concurrency: the number of parallel requests to process during scraping.

See examples/blogspot/ for an example crawler that scrapes all the posts from a Blogspot blog and dumps them into a JSON file.
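
Below is a rough sketch of wiring these options together. The options-object constructor, the BufferedSink export and its arguments, and the start() call are assumptions inferred from the notes above (the docs mention start, pause, and resume), not confirmed API:

```js
var scrapper = require('html-scrapper');

var Crawler = scrapper.Crawler;
var BufferedSink = scrapper.BufferedSink;    // assumed export; see examples/blogspot

var crawler = new Crawler({
    // Collect one batch of page URLs per call, remembering progress
    // on pageLoaderData between calls.
    loadPageList: function (pageLoaderData, cb) {
        var page = (pageLoaderData.page || 0) + 1;
        pageLoaderData.page = page;
        if (page > 3) {
            pageLoaderData.$endReached = true;    // stop further loadPageList calls
            return cb(null, []);
        }
        cb(null, ['https://example.com/posts?page=' + page]);
    },
    // Scrape a single item produced by loadPageList (here, a URL).
    scrapePage: function (pageData, cb) {
        // fetch and extract here, then hand the result back through cb
        cb(null, { url: pageData });
    },
    bs: new BufferedSink('out.json'),    // constructor arguments are a guess
    onError: function (err) { console.error(err); },    // errors do not stop the crawl
    onFinish: function () { console.log('done'); },
    concurrency: 2
});

crawler.start();
```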

## Class Source

Deprecated. Use the Browser class, which is simpler (and, of course, has fewer features), instead. Extend Browser to meet custom use cases.

## Class Extractor

Documentation is not yet complete; see the source code for undocumented features.
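
In the meantime, Extractor can be used standalone on any HTML string, as in the example at the top. A minimal sketch follows; the top-level scalar selector for title is an assumption extrapolated from the schema conventions above, not a documented feature:

```js
var scrapper = require('html-scrapper');

var Extractor = scrapper.Extractor;

var html =
    '<h1>Hello</h1>' +
    '<ul><li><a href="/a">A</a></li><li><a href="/b">B</a></li></ul>';

var extractor = new Extractor({
    title: 'h1',                                 // assumed: scalar rule at the top level
    links: [{ $rule: 'ul > li', label: 'a' }]
});

console.log(extractor.extract(html));
// e.g. { title: 'Hello', links: [ { label: 'A' }, { label: 'B' } ] }
```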