
n8n-nodes-crw

v0.2.0

Published

n8n community node for CRW — the open-source web scraper built for AI agents

Readme

n8n-nodes-crw

n8n community node for CRW — the open-source web scraper built for AI agents.

Scrape, crawl, and extract web data directly in your n8n workflows. Works with both self-hosted CRW and fastcrw.com cloud.

Installation

Via n8n UI

  1. Go to Settings > Community Nodes
  2. Click Install a community node
  3. Enter n8n-nodes-crw
  4. Click Install

Via Environment Variable

```shell
# Docker
docker run -e EXTRA_COMMUNITY_PACKAGES=n8n-nodes-crw n8nio/n8n
```

```yaml
# docker-compose
environment:
  - EXTRA_COMMUNITY_PACKAGES=n8n-nodes-crw
```
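For docker-compose users, that `environment` fragment slots into a minimal service definition along these lines (the service name, volume, and port mapping are illustrative, not prescribed by this package):

```yaml
# Minimal docker-compose sketch; adjust names, ports, and volumes to your deployment.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"                  # n8n's default UI port
    environment:
      - EXTRA_COMMUNITY_PACKAGES=n8n-nodes-crw
    volumes:
      - n8n_data:/home/node/.n8n     # persist credentials and workflows across restarts
volumes:
  n8n_data:
```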

Setup — Pick One

Option A: Cloud (fastcrw.com) — Quickest Start

Sign up at fastcrw.com and get 500 free credits. Then add credentials in n8n:

| Field | Value |
|---|---|
| Base URL | `https://fastcrw.com/api` (default) |
| API Key | `crw_live_...` from fastcrw.com |

Option B: Self-hosted with binary (free, no limits)

```shell
curl -fsSL https://raw.githubusercontent.com/us/crw/main/install.sh | bash
crw  # starts on http://localhost:3000
```

| Field | Value |
|---|---|
| Base URL | `http://localhost:3000` |
| API Key | (leave empty) |

Option C: Self-hosted with Docker

```shell
docker run -d -p 3000:3000 ghcr.io/us/crw:latest
```

Same credentials as Option B.

Operations

Scrape

Scrape a single URL and return its content in one or more formats.

  • URL — The page to scrape
  • Output Formats — markdown, html, rawHtml, plainText, links, json
  • Only Main Content — Strip nav/footer/sidebar (default: true)
  • Additional Options — JS rendering, CSS selectors, XPath, custom headers, proxy, stealth mode, JSON schema for LLM extraction
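Under the hood, the node issues HTTP requests to the configured Base URL. Purely as a sketch of the shape of such a request — the `/v1/scrape` path and the JSON field names below are assumptions inferred from the option names above, not a documented CRW endpoint — a direct call against a self-hosted instance might look like:

```shell
# Hypothetical request: endpoint path and field names are assumptions,
# not documented CRW API. Consult your server's API reference before use.
curl -s -X POST http://localhost:3000/v1/scrape \
  -H 'Content-Type: application/json' \
  -d '{
        "url": "https://example.com",
        "formats": ["markdown", "links"],
        "onlyMainContent": true
      }'
```

In normal use you would not call the API yourself; the node builds these requests from the fields you fill in.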

Crawl

Crawl a website starting from a URL. Returns content from multiple pages.

  • URL — Starting URL
  • Max Depth — How many links deep to follow (default: 2)
  • Max Pages — Maximum pages to crawl (default: 100)
  • Wait for Completion — Poll until done, or return job ID immediately
  • Poll Interval / Max Wait Time — Control polling behavior

Each crawled page is returned as a separate n8n item for downstream processing.
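When Wait for Completion is off, the operation returns a job ID and you poll for results yourself (or hand the ID to the Check Crawl Status operation below). As an illustration of that pattern only — the `/v1/crawl/$JOB_ID` path and the `completed` status value are assumptions, not documented CRW API — a manual polling loop could look like:

```shell
# Hypothetical polling loop; endpoint path and status values are assumptions.
JOB_ID="abc123"   # job ID returned by the Crawl operation
while true; do
  STATUS=$(curl -s "http://localhost:3000/v1/crawl/$JOB_ID" | python3 -c \
    'import json, sys; print(json.load(sys.stdin).get("status", ""))')
  [ "$STATUS" = "completed" ] && break
  sleep 5         # poll interval
done
```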

Check Crawl Status

Check the status of a crawl job by its ID. Returns status, progress, and page data.

Cancel Crawl

Cancel a running crawl job by its ID.

Map

Discover all URLs on a website. Each discovered URL is returned as a separate n8n item.

  • URL — The site to map
  • Max Depth — How deep to discover links (default: 2)
  • Use Sitemap — Whether to use the site's sitemap (default: true)

AI Agent Tool

This node sets `usableAsTool: true`, so it can be called as a tool by n8n's AI Agent node. To allow community packages to be used as tools, set the environment variable:

```shell
N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```
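Combining the two environment variables from this README, a single-container start that installs the node and enables tool usage could look like this (port mapping illustrative):

```shell
# Install the community node and allow the AI Agent node to call it as a tool
docker run -d -p 5678:5678 \
  -e EXTRA_COMMUNITY_PACKAGES=n8n-nodes-crw \
  -e N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true \
  n8nio/n8n
```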

Example Workflows

Scrape and save to Google Sheets

[CRW: Scrape] → [Google Sheets: Append Row]

Crawl site for RAG pipeline

[CRW: Crawl] → [Embeddings: OpenAI] → [Pinecone: Upsert]

Map and batch scrape

[CRW: Map] → [CRW: Scrape] → [OpenAI: Summarize] → [Slack: Post]

License

MIT — this node wrapper is MIT licensed. The CRW server itself is AGPL-3.0.