
@just-every/crawl

v1.0.8

Published

Fast, token-efficient web content extraction - fetch web pages and convert to clean Markdown

Downloads

1,791

Readme

@just-every/crawl

Fast, token-efficient web content extraction - fetch web pages and convert to clean Markdown. Perfect for LLMs to read data from web pages quickly and efficiently.

Features

  • 🚀 Fast HTML to Markdown conversion using Mozilla's Readability
  • 📄 Clean, readable markdown output optimized for LLMs
  • 🤖 Respects robots.txt (configurable)
  • 💾 Built-in SHA-256 based caching with graceful fallbacks
  • 🔄 Multi-page crawling with intelligent link extraction
  • ⚡ Concurrent fetching with configurable limits (default: 3)
  • 🔗 Automatic relative to absolute URL conversion in markdown
  • 🛡️ Robust error handling and timeout management
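The SHA-256 based cache mentioned above presumably keys cached pages by a hash of the URL. A minimal sketch of that idea (the function name, file extension, and layout here are assumptions, not the package's actual internals):

```typescript
import { createHash } from "node:crypto";
import { join } from "node:path";

// Hypothetical sketch: derive a deterministic cache file path from a URL
// by hashing it with SHA-256, the way a URL-keyed cache might.
function cachePathFor(url: string, cacheDir = ".cache"): string {
    const digest = createHash("sha256").update(url).digest("hex");
    return join(cacheDir, `${digest}.md`);
}
```

Because the digest is deterministic, repeated fetches of the same URL map to the same cache entry, and the hex digest avoids filesystem-unsafe characters in URLs.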

Installation

npm install @just-every/crawl

Usage

Command Line

# Fetch a single page
npx web-crawl https://example.com

# Crawl up to 3 pages with up to 5 concurrent requests
npx web-crawl https://example.com --pages 3 --concurrency 5

# Output as JSON
npx web-crawl https://example.com --output json

# Custom user agent and timeout
npx web-crawl https://example.com --user-agent "MyBot/1.0" --timeout 60000

# Disable robots.txt checking
npx web-crawl https://example.com --no-robots

Programmatic API

import { fetch, fetchMarkdown } from '@just-every/crawl';

// Fetch and convert a single URL to markdown
const markdown = await fetchMarkdown('https://example.com');
console.log(markdown);

// Fetch multiple pages with options
const results = await fetch('https://example.com', {
    pages: 3,              // Maximum pages to crawl (default: 1)
    maxConcurrency: 5,     // Max concurrent requests (default: 3)
    respectRobots: true,   // Respect robots.txt (default: true)
    sameOriginOnly: true,  // Only crawl same origin (default: true)
    userAgent: 'MyBot/1.0',
    cacheDir: '.cache',
    timeout: 30000         // Request timeout in ms (default: 30000)
});

// Process results
results.forEach(result => {
    if (result.error) {
        console.error(`Error for ${result.url}: ${result.error}`);
    } else {
        console.log(`# ${result.title}`);
        console.log(result.markdown);
    }
});
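The `maxConcurrency` option caps how many pages are fetched at once (default 3). A minimal sketch of such a concurrency limit, independent of the package's actual internals:

```typescript
// Hypothetical sketch of a concurrency limiter: run async tasks with at
// most `limit` in flight at any time, preserving result order.
async function runWithLimit<T>(
    tasks: Array<() => Promise<T>>,
    limit = 3, // mirrors the package's default maxConcurrency
): Promise<T[]> {
    const results: T[] = new Array(tasks.length);
    let next = 0;
    // Each worker repeatedly claims the next unclaimed task index.
    async function worker(): Promise<void> {
        while (next < tasks.length) {
            const i = next++;
            results[i] = await tasks[i]();
        }
    }
    await Promise.all(
        Array.from({ length: Math.min(limit, tasks.length) }, worker),
    );
    return results;
}
```

This worker-pool pattern keeps exactly `limit` requests active until the queue drains, which is why raising `--concurrency` speeds up multi-page crawls at the cost of more simultaneous load on the target site.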

CLI Options

Options:
  -p, --pages <number>      Maximum pages to crawl (default: 1)
  -c, --concurrency <n>     Max concurrent requests (default: 3)
  --no-robots               Ignore robots.txt
  --all-origins             Allow cross-origin crawling
  -u, --user-agent <agent>  Custom user agent
  --cache-dir <path>        Cache directory (default: ".cache")
  -t, --timeout <ms>        Request timeout in milliseconds (default: 30000)
  -o, --output <format>     Output format: json, markdown, or both (default: "markdown")
  -h, --help                Display help

API Reference

fetch(url, options)

Fetches a URL (and, when pages is greater than 1, pages linked from it) and returns an array of crawl results.

Parameters:

  • url (string): The URL to fetch
  • options (CrawlOptions): Optional crawling configuration

Returns: Promise<CrawlResult[]>

fetchMarkdown(url, options)

Fetches a single URL and returns only the markdown content.

Parameters:

  • url (string): The URL to fetch
  • options (CrawlOptions): Optional crawling configuration

Returns: Promise<string>

Types

interface CrawlOptions {
    pages?: number;           // Maximum pages to crawl (default: 1)
    maxConcurrency?: number;  // Max concurrent requests (default: 3)
    respectRobots?: boolean;  // Respect robots.txt (default: true)
    sameOriginOnly?: boolean; // Only crawl same origin (default: true)
    userAgent?: string;       // Custom user agent
    cacheDir?: string;        // Cache directory (default: ".cache")
    timeout?: number;         // Request timeout in ms (default: 30000)
}

interface CrawlResult {
    url: string;             // The URL that was crawled
    markdown: string;        // Converted markdown content
    title?: string;          // Page title
    links?: string[];        // Links extracted from markdown content
    error?: string;          // Error message if failed
}
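Since a failed page is reported via the optional `error` field rather than a thrown exception, callers typically split results into successes and failures. A small helper sketch built only on the `CrawlResult` shape above (the helper name is an assumption):

```typescript
// CrawlResult as declared by the package's types.
interface CrawlResult {
    url: string;
    markdown: string;
    title?: string;
    links?: string[];
    error?: string;
}

// Hypothetical helper: separate successful crawls from failed ones.
function partitionResults(
    results: CrawlResult[],
): { ok: CrawlResult[]; failed: CrawlResult[] } {
    return {
        ok: results.filter((r) => !r.error),
        failed: results.filter((r) => r.error),
    };
}
```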

License

MIT