

Spacecat Shared - Scrape Client

A JavaScript client for managing web scraping jobs, part of the SpaceCat Shared library. The ScrapeClient provides a comprehensive interface for creating, monitoring, and retrieving results from web scraping operations without needing to access the SpaceCat API service directly.

Installation

Install the package using npm:

npm install @adobe/spacecat-shared-scrape-client

Features

  • Create Scrape Jobs: Submit URLs for web scraping with customizable options
  • Job Monitoring: Track job status and progress
  • Result Retrieval: Get detailed results for completed scraping jobs
  • Date Range Queries: Find jobs within specific time periods
  • Base URL Filtering: Search jobs by domain or base URL
  • Processing Type Support: Different scraping strategies and configurations
  • Custom Headers: Add custom HTTP headers for scraping requests
  • Error Handling: Comprehensive validation and error reporting

Usage

Creating an Instance

Method 1: Direct Constructor

import { ScrapeClient } from '@adobe/spacecat-shared-scrape-client';

const config = {
  dataAccess: dataAccessClient,    // Data access layer
  sqs: sqsClient,                  // SQS client for job queuing
  env: environmentVariables,       // Environment configuration
  log: logger                      // Logging interface
};

const client = new ScrapeClient(config);

Method 2: From Helix Universal Context

import { ScrapeClient } from '@adobe/spacecat-shared-scrape-client';

// In a Helix Universal function, the runtime provides a context object
// that already carries dataAccess, sqs, env, and log; there is no need
// to assemble it by hand.

const client = ScrapeClient.createFrom(context);

Creating a Scrape Job

const jobData = {
  urls: ['https://example.com/page1', 'https://example.com/page2'],
  options: {},
  customHeaders: {
    // Custom HTTP headers (optional)
    'Authorization': 'Bearer token',
    'X-Custom-Header': 'value'
  },
  processingType: 'default', // Optional, defaults to 'DEFAULT'
  maxScrapeAge: 6, // Optional, in hours; avoids re-scraping URLs scraped within this window (0 means always scrape)
  auditData: {} // Optional, used for step audits
};

try {
  const job = await client.createScrapeJob(jobData);
  console.log('Job created:', job.id);
  console.log('Job status:', job.status);
} catch (error) {
  console.error('Failed to create job:', error.message);
}

Checking Job Status

const jobId = 'your-job-id';

try {
  const jobStatus = await client.getScrapeJobStatus(jobId);
  if (jobStatus) {
    console.log('Job Status:', jobStatus.status);
    console.log('URL Count:', jobStatus.urlCount);
    console.log('Success Count:', jobStatus.successCount);
    console.log('Failed Count:', jobStatus.failedCount);
    console.log('Duration:', jobStatus.duration);
  } else {
    console.log('Job not found');
  }
} catch (error) {
  console.error('Failed to get job status:', error.message);
}
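
If you need to block until a job finishes, a small polling loop can be layered on top of getScrapeJobStatus. This is only a sketch: the terminal statuses ('COMPLETE', 'FAILED') and the interval are assumptions, not part of the client API.

async function waitForJob(client, jobId, intervalMs = 5000) {
  // Poll until the job reaches a terminal status (assumed here to be
  // 'COMPLETE' or 'FAILED'; adjust to the statuses your jobs report)
  for (;;) {
    const job = await client.getScrapeJobStatus(jobId);
    if (!job) throw new Error(`Job ${jobId} not found`);
    if (job.status === 'COMPLETE' || job.status === 'FAILED') return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}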

Getting Job Results

const jobId = 'your-job-id';

try {
  const results = await client.getScrapeJobUrlResults(jobId);
  if (results) {
    results.forEach(result => {
      console.log(`URL: ${result.url}`);
      console.log(`Status: ${result.status}`);
      console.log(`Reason: ${result.reason}`);
      console.log(`Path: ${result.path}`);
    });
  } else {
    console.log('Job not found');
  }
} catch (error) {
  console.error('Failed to get job results:', error.message);
}
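
Since every result carries a status field, the results can be tallied for a quick overview. A minimal sketch (the example status values follow the URL Results Format section below):

// Count results per status, e.g. { SUCCESS: 8, FAILED: 2 }
const counts = results.reduce((acc, result) => {
  acc[result.status] = (acc[result.status] ?? 0) + 1;
  return acc;
}, {});
console.log(counts);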

Getting Successful Scrape Paths

const jobId = 'your-job-id';
try {
  const paths = await client.getScrapeResultPaths(jobId);
  if (paths === null) {
    console.log('Job not found');
  } else if (paths.size === 0) {
    console.log('No successful paths found for this job');
  } else {
    console.log(`Found ${paths.size} successful paths for job ${jobId}`);
    for (const [url, path] of paths) {
      console.log(`URL: ${url} -> Path: ${path}`);
    }
  }
} catch (error) {
  console.error('Failed to get successful paths:', error.message);
}

Finding Jobs by Date Range

const startDate = '2024-01-01T00:00:00Z';
const endDate = '2024-01-31T23:59:59Z';

try {
  const jobs = await client.getScrapeJobsByDateRange(startDate, endDate);
  console.log(`Found ${jobs.length} jobs in date range`);
  jobs.forEach(job => {
    console.log(`Job ${job.id}: ${job.status} - ${job.baseURL}`);
  });
} catch (error) {
  console.error('Failed to get jobs by date range:', error.message);
}
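
Because the boundaries are ISO 8601 strings, rolling windows are easy to construct. A sketch that queries the last 30 days:

// Derive an ISO date range covering the last 30 days
const end = new Date();
const start = new Date(end.getTime() - 30 * 24 * 60 * 60 * 1000);
const recentJobs = await client.getScrapeJobsByDateRange(
  start.toISOString(),
  end.toISOString(),
);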

Finding Jobs by Base URL

const baseURL = 'https://example.com';

try {
  // Get all jobs for a base URL
  const allJobs = await client.getScrapeJobsByBaseURL(baseURL);
  console.log(`Found ${allJobs.length} jobs for ${baseURL}`);

  // Get jobs for a specific processing type
  const specificJobs = await client.getScrapeJobsByBaseURL(baseURL, 'form');
  console.log(`Found ${specificJobs.length} jobs with 'form' processing`);
} catch (error) {
  console.error('Failed to get jobs by base URL:', error.message);
}

Job Response Format

When you retrieve a scrape job, it returns an object with the following structure:

{
  id: "job-id",
  baseURL: "https://example.com",
  processingType: "default",
  options: { /* scraping options */ },
  startedAt: "2024-01-01T10:00:00Z",
  endedAt: "2024-01-01T10:05:00Z",
  duration: 300000, // milliseconds
  status: "COMPLETE",
  urlCount: 10,
  successCount: 8,
  failedCount: 2,
  redirectCount: 0,
  customHeaders: { /* custom headers used */ }
}
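
The counts and timestamps lend themselves to quick derived metrics. A small sketch using only the fields shown above:

// Derive a summary from a job object (fields as documented above)
const summarize = (job) => ({
  durationSeconds: job.duration / 1000,  // duration is in milliseconds
  successRate: job.urlCount > 0 ? job.successCount / job.urlCount : 0, // e.g. 8 / 10
  finished: job.status === 'COMPLETE',
});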

URL Results Format

When you retrieve job results, each URL result has this structure:

{
  url: "https://example.com/page",
  status: "SUCCESS",
  reason: "in case there was an error, you will this this here",
  path: "/s3/path/to/scraped/content"
}

Path Results Format

When you retrieve successful scrape paths using getScrapeResultPaths(), the response is a JavaScript Map object that maps URLs to their corresponding result file paths. Only URLs with COMPLETE status are included:

Map(2) {
  'https://example.com/page1' => 'path/to/result1',
  'https://example.com/page2' => 'path/to/result2'
}
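
Since the return value is a standard JavaScript Map, the usual Map operations apply. For example (a sketch):

// Convert the Map to a plain object, or look up a single URL
const pathsByUrl = Object.fromEntries(paths); // { 'https://example.com/page1': 'path/to/result1', ... }
const pathForPage1 = paths.get('https://example.com/page1');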

Configuration

The client uses the SCRAPE_JOB_CONFIGURATION environment variable for default settings:

// Example configuration
{
  "maxUrlsPerJob": 5,
  "options": {
    "enableJavascript": true,
    "hideConsentBanner": true,
  }
}
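
Because this is an environment variable, the value is typically stored as a JSON string. A sketch of how it might be supplied through the constructor config from Method 1 above (dataAccessClient, sqsClient, and logger are placeholders, as before):

const env = {
  SCRAPE_JOB_CONFIGURATION: JSON.stringify({
    maxUrlsPerJob: 5,
    options: { enableJavascript: true, hideConsentBanner: true },
  }),
};

const client = new ScrapeClient({ dataAccess: dataAccessClient, sqs: sqsClient, env, log: logger });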

Testing

To run tests:

npm run test

Linting

Lint your code:

npm run lint

Fix linting issues:

npm run lint:fix

Cleaning

To remove node_modules and package-lock.json:

npm run clean

Dependencies

  • @adobe/helix-universal: Universal context support
  • @adobe/spacecat-shared-data-access: Data access layer
  • @adobe/spacecat-shared-utils: Utility functions

Additional Information

ScrapeClient Workflow Overview

When a new scrape job is created, the client performs the following steps:

  1. Creates a new job entry in the database with status PENDING.
  2. Splits the provided URLs into batches based on the maxUrlsPerMessage configuration (batch size is limited by SQS message size constraints; see the sketch after this list).
  3. For each batch, it creates a message in the SQS queue to the scrape-job-manager.
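
The batching in step 2 might look roughly like this sketch (maxUrlsPerMessage comes from configuration; the queue name and message shape are illustrative, not the actual implementation):

// Split the job's URLs into SQS-sized batches and enqueue one
// message per batch for the scrape-job-manager
const batches = [];
for (let i = 0; i < urls.length; i += maxUrlsPerMessage) {
  batches.push(urls.slice(i, i + maxUrlsPerMessage));
}
for (const batch of batches) {
  await sqs.sendMessage(scrapeJobManagerQueueUrl, { jobId: job.id, urls: batch });
}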

In the scrape-job-manager the following steps are performed:

  1. All existing ScrapeURLs are fetched for the base URL to avoid re-scraping recently scraped URLs (based on the maxScrapeAge parameter).
  2. For all URLs a new ScrapeURL entry is created with status PENDING.
  3. Each URL in the batch is checked against existing ScrapeURLs.
    • URLs with an existing ScrapeURL in status 'COMPLETE' or 'PENDING' are marked to be skipped, carrying the ID of the existing ScrapeURL and the isOriginal flag set to false.
    • URLs that still need to be scraped are marked with the isOriginal flag set to true. (The isOriginal flag avoids the sliding-window problem when re-scraping URLs; see the sketch after this list.)
    • All URLs are numbered based on their position in the original list so that job progress can be tracked.
  4. For each URL, a message is created in the SQS queue to the content-scraper.
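
The skip decision from step 3 might look roughly like this sketch (entity fields and helpers such as markSkipped and markToScrape are illustrative, not the actual implementation):

// Reuse a recent scrape where one exists; otherwise scrape the URL
const cutoff = Date.now() - maxScrapeAge * 60 * 60 * 1000; // maxScrapeAge is in hours
for (const url of batch) {
  const existing = existingScrapeUrls.find(
    (entry) => entry.url === url
      && ['COMPLETE', 'PENDING'].includes(entry.status)
      && new Date(entry.updatedAt).getTime() >= cutoff,
  );
  if (existing) {
    markSkipped(url, { existingId: existing.id, isOriginal: false }); // hypothetical helper
  } else {
    markToScrape(url, { isOriginal: true }); // hypothetical helper
  }
}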

In the content-scraper the following steps are performed:

  1. The content-scraper checks if an incoming URL message is marked to be skipped. If so, it just sends a message to the content-processor.
  2. If the URL is not marked to be skipped, the content-scraper scrapes the URL.
  3. The content-scraper creates a message in the SQS queue to the content-processor with the result of the scraping operation.

In the content-processor the following steps are performed:

  1. The content-processor processes the incoming message from the content-scraper.
  2. If the URL was skipped, it fetches the existing ScrapeURL entry and updates the new ScrapeURL entry with the same path and status.
  3. If the URL was scraped, it updates the ScrapeURL entry with the result of the scraping operation (status, path, reason).
  4. The content-processor updates the ScrapeJob entry with the new counts (success, failed, redirect).
  5. If all URLs of a job are processed (based on their numbering and the job's totalUrlCount), it finishes the job (sketched after this list):
    • performs a cleanup step that marks as FAILED any URLs still PENDING because they were never processed (e.g. due to timeouts).
    • updates the counts of the job again.
    • sets the job status to COMPLETE and sets the endedAt timestamp.
    • optionally sends an SQS message (e.g. to trigger the next audit step).
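
A sketch of that completion step (helper names like updateJobCounts are illustrative; the real entities live in the data access layer):

// Close out a job once the processed count reaches the job's totalUrlCount:
// fail any URLs that never came back, refresh counts, and mark the job done
async function finishJob(job, scrapeUrls) {
  for (const entry of scrapeUrls.filter((u) => u.status === 'PENDING')) {
    await entry.setStatus('FAILED'); // e.g. lost to a timeout (hypothetical setter)
  }
  await updateJobCounts(job);        // hypothetical count refresh
  job.status = 'COMPLETE';
  job.endedAt = new Date().toISOString();
  await job.save();                  // hypothetical persistence call
}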