
@t6e/sitemap-generator

v0.2.0


Sitemap Generator

Easily create XML sitemaps for your website.

Generates a sitemap by crawling your site. Uses streams to efficiently write the sitemap to disk and runs asynchronously to avoid blocking the event loop. Creates multiple sitemaps if the entry threshold is reached. Respects robots.txt and robots meta tags.
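As a rough illustration of the streaming approach described above (a sketch, not the package's internal code), sitemap entries can be written to any writable target as they are discovered instead of being buffered in memory:

```javascript
// Illustrative sketch: write sitemap entries to a writable stream
// one at a time, so memory use stays flat even for large sites.
import { Writable } from "node:stream";

function writeSitemap(urls, stream) {
  stream.write('<?xml version="1.0" encoding="UTF-8"?>\n');
  stream.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n');
  for (const url of urls) {
    stream.write(`  <url><loc>${url}</loc></url>\n`);
  }
  stream.write("</urlset>\n");
}

// Collect output in memory for demonstration; in practice the target
// would be something like fs.createWriteStream("./sitemap.xml").
const chunks = [];
const sink = new Writable({
  write(chunk, _enc, cb) {
    chunks.push(chunk.toString());
    cb();
  },
});
writeSitemap(["http://example.com/", "http://example.com/about"], sink);
const xml = chunks.join("");
```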

About This Fork

This is a maintained and modernized fork of the original sitemap-generator by Lars Graubner. The original project has not been maintained since ca. 2021. This fork includes:

  • Migration to ESM (ES modules)
  • Migration to TypeScript with full type definitions
  • Replacement of deprecated simplecrawler with modern crawlee
  • Updated dependencies and security fixes
  • Modern development setup with Vitest, ESLint 9, and Prettier

All credit for the original concept and implementation goes to Lars Graubner. This fork maintains the same MIT license.

Install

This package is available on npm.

npm install @t6e/sitemap-generator

Note: this module requires Node.js >= 20.0.0 and is not meant to be used in the browser.

Usage

import SitemapGenerator from "@t6e/sitemap-generator";

// Create generator
const generator = SitemapGenerator("http://example.com", {
  stripQuerystring: false,
});

// Register event listeners
generator.on("done", () => {
  console.log("Sitemap created!");
});

generator.on("add", (url) => {
  console.log("Added:", url);
});

// Start the crawler
generator.start();

The crawler fetches HTML pages and the other file types Google parses. If a robots.txt file is present, its rules are applied to each URL to decide whether it should be added to the sitemap. The crawler will not follow links from a page whose robots meta tag contains nofollow, and will skip a page entirely if the noindex directive is present.
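The meta-robots rules described above can be sketched as follows. This is an illustration of the behavior, not the package's actual implementation, and the simple regex assumes the common attribute order (name before content):

```javascript
// Illustrative sketch: extract robots directives from a page's HTML.
// nofollow suppresses link extraction; noindex drops the page from
// the sitemap entirely.
function robotsMetaRules(html) {
  const match = html.match(
    /<meta\s+name=["']robots["']\s+content=["']([^"']*)["']/i
  );
  const directives = match
    ? match[1].toLowerCase().split(",").map((d) => d.trim())
    : [];
  return {
    index: !directives.includes("noindex"),
    follow: !directives.includes("nofollow"),
  };
}

const page =
  '<html><head><meta name="robots" content="noindex, nofollow"></head></html>';
const rules = robotsMetaRules(page);
// A page without a robots meta tag is indexed and followed by default.
```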

API

The generator offers straightforward methods to start the crawler and manage URLs.

start()

Starts the crawler asynchronously and writes the sitemap to disk.

await generator.start();

getCrawler()

Returns the underlying Crawlee crawler instance. This can be useful for advanced configuration.

const crawler = generator.getCrawler();

getSitemap()

Returns the sitemap instance (SitemapRotator). This can be useful to add static URLs to the sitemap:

const sitemap = generator.getSitemap();
sitemap.addURL("/my/static/url");

queueURL(url)

Adds a URL to the crawler's queue. Useful for pages the crawler can't discover on its own.

await generator.queueURL("http://example.com/hidden-page");

on(event, handler) / off(event, handler)

Register or unregister event listeners. See the Events section below.

generator.on("add", (url) => console.log(url));

Options

Configure the sitemap generator by passing an options object as the second argument.

const generator = SitemapGenerator("http://example.com", {
  maxDepth: 0,
  filepath: "./sitemap.xml",
  maxEntriesPerFile: 50000,
  stripQuerystring: true,
  userAgent: "Node/SitemapGenerator",
  timeout: 30000,
});

filepath

Type: string
Default: ./sitemap.xml

Filepath for the new sitemap. If multiple sitemaps are created, _part1, _part2, etc. are appended to each filename. If you don't want to write files at all, you can pass null as the filepath.
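The `_partN` naming scheme can be sketched like this (assumed behavior based on the description above, not lifted from the package's source):

```javascript
// Illustrative sketch: derive a part filename from the configured
// filepath by inserting "_partN" before the extension.
import path from "node:path";

function partFilepath(filepath, part) {
  const ext = path.extname(filepath);          // ".xml"
  const base = filepath.slice(0, -ext.length); // "./sitemap"
  return `${base}_part${part}${ext}`;          // "./sitemap_part1.xml"
}
```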

maxEntriesPerFile

Type: number
Default: 50000

Google limits the maximum number of URLs in one sitemap to 50,000. If this limit is reached, the sitemap-generator creates multiple sitemaps and a sitemap index file.
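The rotation described above can be sketched as chunking URLs into files of at most `maxEntriesPerFile` entries, plus a sitemap index that references each part. The filenames and exact structure here are assumptions for illustration, not the package's verbatim output:

```javascript
// Illustrative sketch: build a sitemap index for the part files that
// result from splitting totalUrls across files of maxEntriesPerFile.
function buildSitemapIndex(totalUrls, maxEntriesPerFile, baseUrl) {
  const files = Math.ceil(totalUrls / maxEntriesPerFile);
  const entries = [];
  for (let i = 1; i <= files; i++) {
    entries.push(
      `  <sitemap><loc>${baseUrl}/sitemap_part${i}.xml</loc></sitemap>`
    );
  }
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    ...entries,
    "</sitemapindex>",
  ].join("\n");
}

// 120,000 URLs at 50,000 per file yields three part files in the index.
const index = buildSitemapIndex(120000, 50000, "http://example.com");
```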

stripQuerystring

Type: boolean
Default: true

Whether to strip query strings from URLs before adding them to the sitemap.
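The effect of this option can be sketched with the WHATWG URL API (an illustration of the assumed behavior, not the package's internal code):

```javascript
// Illustrative sketch: optionally drop the query string from a URL
// before it is added to the sitemap.
function normalize(url, stripQuerystring) {
  const u = new URL(url);
  if (stripQuerystring) u.search = "";
  return u.toString();
}
```

Stripping query strings collapses variants like `/page?ref=a` and `/page?ref=b` into a single sitemap entry.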

maxDepth

Type: number
Default: 0 (i.e., unlimited)

Maximum crawl depth. Set to 0 for unlimited depth, or specify a number to limit how many levels deep the crawler will go.

userAgent

Type: string
Default: Node/SitemapGenerator

The user agent string to use when crawling.

respectRobotsTxt

Type: boolean
Default: true

Whether to respect robots.txt rules.

ignoreInvalidSSL

Type: boolean
Default: false

Whether to ignore invalid SSL certificates.

timeout

Type: number
Default: 30000 (i.e., 30 seconds)

Request timeout in milliseconds.

ignoreAMP

Type: boolean
Default: true

Whether to ignore AMP (Accelerated Mobile Pages) versions of pages.

ignore(url)

Type: function
Default: null

A custom function to determine if a URL should be ignored. Return true to ignore the URL.

Example:

const generator = SitemapGenerator("http://example.com", {
  ignore: (url) => {
    // Ignore URLs containing "admin"
    return url.includes("/admin/");
  },
});

Events

The Sitemap Generator emits several events which can be listened to.

add

Triggered when the crawler successfully adds a URL to the sitemap.

generator.on("add", (url) => {
  console.log("Added:", url);
});

done

Triggered when the crawler finishes and the sitemap is created.

generator.on("done", () => {
  console.log("Sitemap generation complete!");
});

error

Triggered when there's an error fetching a URL. Passes an object with the HTTP status code, a message, and the URL.

generator.on("error", (error) => {
  console.log(error);
  // => { code: 404, message: 'Not Found', url: 'http://example.com/missing' }
});

ignore

Triggered when a URL is ignored (due to robots.txt, noindex meta tag, AMP detection, or custom ignore function). Passes the ignored URL.

generator.on("ignore", (url) => {
  console.log("Ignored:", url);
});