
multiple-urls-images-downloader

v1.0.5

Downloader

Badly named npm package.

Given a list of URLs, this module will collect all the images on each URL and store them in separate PDF files.

The individual steps are:

  • asynchronously fetch the HTML content of each URL
  • extract image URLs with the given locator function (via after-load)
  • asynchronously collect all images and merge them into a PDF
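The extraction step can be sketched standalone. The package itself hands your locator a cheerio `$` object (via after-load); this regex-based version is only an illustrative stand-in, not the package's actual implementation:

```javascript
// Illustrative stand-in for the locator step: pull src attributes out of
// <img> tags with a naive regex. The real package provides a cheerio $
// object instead (see the config example below).
const extractImageSrcs = html =>
  [...html.matchAll(/<img[^>]*\bsrc="([^"]+)"/g)].map(m => m[1]);

const html = '<img src="img/photos/a.jpg"><p></p><img src="img/photos/b.jpg">';
console.log(extractImageSrcs(html));
// → [ 'img/photos/a.jpg', 'img/photos/b.jpg' ]
```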

Use case

I wanted to collect images of houses from several real estate websites, as inspiration.

Libraries used

  • after-load (fetches the HTML and provides the cheerio $ object)
  • pdfmake (generates the PDF files)

Install

npm i -S multiple-urls-images-downloader

How to use

NOTE: You always need to provide the Roboto fonts for the PDF generation (required by pdfmake). You can also provide additional custom fonts if you prefer.

const muid = require('multiple-urls-images-downloader');

const config = {
  // Mandatory list of URLs to inspect
  urls: ['url1', 'url2'],

  // Destination dir where to store the PDF files
  // Defaults to './documents'
  dir: './my_dir',

  // Function deriving each PDF's title from its URL
  // Defaults to the URL with "/", ":" and "." removed
  getTitle: url => url,

  // List of fonts
  fonts: {
    // Mandatory
    Roboto: {
      normal: './fonts/Roboto-Regular.ttf',
      bold: './fonts/Roboto-Medium.ttf',
      italics: './fonts/Roboto-Italic.ttf',
      bolditalics: './fonts/Roboto-MediumItalic.ttf',
    },
    // Optional
    customFont: {
      normal: 'path_to_font.ttf',
      bold: 'path_to_font.ttf',
      italics: 'path_to_font.ttf',
      bolditalics: 'path_to_font.ttf',
    },
  },

  // Mandatory
  // Locator function: muid passes the HTML string and the cheerio $ object
  // ($ is provided by after-load)
  getImagesHref: (html, $) => {
    const images = [];
    $('img[src^="img/photos"]').each(function() {
      images.push($(this).attr('src'));
    });
    return images;
  },
};

muid(config);
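The default title behavior described in the config comment can be illustrated in isolation. `defaultTitle` below is a hypothetical helper of my own naming, not part of the package API; it simply mirrors the documented default of stripping "/", ":" and "." from the URL:

```javascript
// Hypothetical helper mirroring the documented getTitle default:
// the PDF title is the URL with "/", ":" and "." stripped out.
// Not part of the package API.
const defaultTitle = url => url.replace(/[/:.]/g, '');

console.log(defaultTitle('https://example.com/houses.html'));
// → httpsexamplecomhouseshtml
```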