ScraperAPI Official n8n Node

This is an n8n community node that lets you use ScraperAPI in your n8n workflows.

ScraperAPI is a solution to help you unlock and scrape any website, no matter the scale or difficulty. It handles proxies, browsers, and CAPTCHAs so you can focus on extracting the data you need.

n8n is a fair-code licensed workflow automation platform.

Table of Contents

  • Installation
  • Credentials
  • Usage
  • Resources
  • Parameters
  • Documentation
  • More ScraperAPI Integrations

Installation

Follow the installation guide in the n8n community nodes documentation.

From the npm registry:

  1. Go to Settings > Community Nodes.
  2. Select Install.
  3. Enter n8n-nodes-scraperapi-official in the package name field.
  4. Agree to the risks of using community nodes: select I understand the risks of installing unverified code from a public source.
  5. Select Install. n8n installs the node and returns you to the Community Nodes list in Settings.

Credentials

Getting Your API Key

  1. Sign up for a ScraperAPI account on the ScraperAPI Dashboard.
  2. Once logged in, navigate to your dashboard.
  3. Copy your API key from the dashboard.

Configuring Credentials in n8n

  1. In your n8n workflow, add a ScraperAPI node.
  2. Click the Credential to connect with field.
  3. Click Create New Credential.
  4. Enter your API key.
  5. Click Save.

The credentials will be automatically tested to ensure they work correctly.

For more information, see the ScraperAPI API Key Documentation.
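
If you want to sanity-check a key outside of n8n, the sketch below is one way to do it. It assumes Node 18+ (for the built-in fetch, run as an ES module) and ScraperAPI's documented account endpoint; treat both as assumptions rather than part of this node.

// Hedged sketch: verify a ScraperAPI key outside of n8n.
// Assumes Node 18+ with built-in fetch, run as an ES module, and
// ScraperAPI's account endpoint at https://api.scraperapi.com/account.
const apiKey = process.env.SCRAPERAPI_KEY ?? "";

const res = await fetch(
  `https://api.scraperapi.com/account?api_key=${encodeURIComponent(apiKey)}`
);
if (!res.ok) {
  throw new Error(`Key check failed: ${res.status} ${res.statusText}`);
}
console.log(await res.json()); // plan limits, concurrency, requests used, etc.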

Usage

The ScraperAPI node allows you to scrape any website by making a simple GET request. The node handles all the complexity of proxies, browser automation, and CAPTCHA solving.

Basic Usage

  1. Add a ScraperAPI node to your workflow.
  2. Select the resource you want to use, for example the API resource.
  3. Enter the required parameters, for example the URL you want to scrape.
  4. Configure any optional parameters (see Parameters below).
  5. Execute the workflow.

The node returns a JSON object with the following structure:

{
  "resource": "api",
  "response": {
    "body": "...",
    "headers": {...},
    "statusCode": 200,
    "statusMessage": "OK"
  }
}
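
Under the hood, this corresponds to a plain GET against ScraperAPI's HTTP endpoint. The following is a rough TypeScript illustration based on ScraperAPI's public API (api.scraperapi.com with api_key and url query parameters), not the node's actual implementation:

// Rough sketch of the kind of request the node performs. The endpoint and
// query parameter names come from ScraperAPI's public docs, not this node's
// source, so treat them as an assumption.
const apiKey = process.env.SCRAPERAPI_KEY ?? "";

const endpoint = new URL("https://api.scraperapi.com/");
endpoint.searchParams.set("api_key", apiKey);
endpoint.searchParams.set("url", "https://example.com");

const res = await fetch(endpoint);
const body = await res.text(); // the scraped page, HTML unless an output format is set
console.log(res.status, body.slice(0, 200));

In a downstream n8n node, you would typically read the scraped page from the response.body field shown above, for example with the expression {{ $json.response.body }}.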

Resources

API

The API resource allows you to scrape any website using ScraperAPI's endpoint. It supports:

  • JavaScript rendering for dynamic content
  • Geo-targeting with country codes
  • Device-specific user agents (desktop/mobile)
  • Premium and ultra-premium proxy options
  • Automatic parsing of structured data for select websites
  • Multiple output formats (markdown, text, CSV, JSON)

Parameters

Required Parameters

  • URL: The target URL to scrape (e.g., https://example.com)

Optional Parameters

  • Autoparse: Whether to enable automatic parsing for supported websites. When enabled, ScraperAPI automatically parses structured data from those sites (JSON format by default).

  • Country Code: Two-letter ISO country code (e.g., US, GB, DE) for geo-targeted scraping.

  • Desktop Device: Whether to scrape the page as a desktop device. Note: Cannot be combined with Mobile Device.

  • Mobile Device: Whether to scrape the page as a mobile device. Note: Cannot be combined with Desktop Device.

  • Output Format: Output parsing format for the scraped content. Available options:

    • Markdown: Returns content in Markdown format.
    • Text: Returns content as plain text.
    • CSV: Returns content in CSV format. Note: Only available for autoparse websites.
    • JSON: Returns content in JSON format. Note: Only available for autoparse websites.

    If not specified, the content will be returned as HTML.

  • Render: Enable JavaScript rendering for pages that require JavaScript to load content. Set to true only when needed, as it increases processing time.

  • Premium: Use premium residential/mobile proxies for higher success rates. This option costs more but provides better reliability. Note: Cannot be combined with Ultra Premium.

  • Ultra Premium: Activate advanced bypass mechanisms for the most difficult websites. This is the most powerful option for sites with advanced anti-bot protection. Note: Cannot be combined with Premium.
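
To make the mutual exclusions concrete, here is a hedged TypeScript sketch that assembles these options into a request URL. The query parameter names (render, country_code, premium, ultra_premium, autoparse, device_type, output_format) follow ScraperAPI's public documentation and are an assumption about how the node maps its fields:

// Sketch: build a ScraperAPI request URL from the node's optional parameters,
// enforcing the "cannot be combined" notes above. Parameter names follow
// ScraperAPI's public docs and are assumptions, not this node's source.
interface ScrapeOptions {
  url: string;
  autoparse?: boolean;
  countryCode?: string; // two-letter ISO code, e.g. "US"
  desktopDevice?: boolean;
  mobileDevice?: boolean;
  outputFormat?: "markdown" | "text" | "csv" | "json";
  render?: boolean;
  premium?: boolean;
  ultraPremium?: boolean;
}

function buildScrapeUrl(apiKey: string, opts: ScrapeOptions): URL {
  if (opts.desktopDevice && opts.mobileDevice) {
    throw new Error("Desktop Device and Mobile Device cannot be combined");
  }
  if (opts.premium && opts.ultraPremium) {
    throw new Error("Premium and Ultra Premium cannot be combined");
  }

  const u = new URL("https://api.scraperapi.com/");
  u.searchParams.set("api_key", apiKey);
  u.searchParams.set("url", opts.url);
  if (opts.autoparse) u.searchParams.set("autoparse", "true");
  if (opts.countryCode) u.searchParams.set("country_code", opts.countryCode);
  if (opts.desktopDevice) u.searchParams.set("device_type", "desktop");
  if (opts.mobileDevice) u.searchParams.set("device_type", "mobile");
  if (opts.outputFormat) u.searchParams.set("output_format", opts.outputFormat);
  if (opts.render) u.searchParams.set("render", "true");
  if (opts.premium) u.searchParams.set("premium", "true");
  if (opts.ultraPremium) u.searchParams.set("ultra_premium", "true");
  return u;
}

// Example: render JavaScript and geo-target the US.
// buildScrapeUrl("YOUR_KEY", { url: "https://example.com", render: true, countryCode: "US" });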

Documentation

For full details on the underlying API, see the ScraperAPI documentation.

Version History

  • 0.1.1: Initial release with API resource support.
  • 0.1.2: Added a Usage section to the documentation.
  • 0.1.3: Replaced the device_type options field with desktopDevice and mobileDevice boolean fields to support AI model auto-definition.

More ScraperAPI Integrations

MCP Server

ScraperAPI also provides an MCP (Model Context Protocol) server that enables AI models and agents to scrape websites.

Hosted MCP Server

ScraperAPI offers a hosted MCP server that you can use with n8n's MCP Client Tool.

Configuration Steps:

  1. Add an MCP Client Tool node to your workflow.
  2. Configure the following settings:
    • Endpoint: https://mcp.scraperapi.com/mcp
    • Server Transport: HTTP Streamable
    • Authentication: Bearer Auth
    • Credential for Bearer Auth: enter your ScraperAPI API key as a Bearer token
    • Tools to include: All (or select specific tools as needed)
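
To check connectivity to the hosted server outside of n8n, you can send the JSON-RPC initialize request that MCP clients open with. The sketch below follows the MCP Streamable HTTP transport conventions; the protocol version string is an assumption, and n8n's MCP Client Tool handles this whole handshake for you:

// Hedged sketch: probe the hosted MCP endpoint with an initialize request.
// Headers and body follow the MCP Streamable HTTP transport spec; real
// clients (including n8n's MCP Client Tool) manage the handshake, session
// IDs, and tool calls automatically.
const res = await fetch("https://mcp.scraperapi.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${process.env.SCRAPERAPI_KEY ?? ""}`,
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26", // assumption: a version the server accepts
      capabilities: {},
      clientInfo: { name: "probe", version: "0.0.0" },
    },
  }),
});
console.log(res.status, await res.text());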

Self-Hosted MCP Server

If you prefer to self-host the MCP server, you can find the implementation and setup instructions in the scraperapi-mcp repository.