webscrapefusion v0.2.0
WebScrapeFusion
WebScrapeFusion is a powerful and resilient web scraping library for Node.js, designed to simplify data extraction from both static and dynamic websites.
It unifies the speed of an HTTP client with the robustness of a headless browser, all under a single, elegant API.
Built for real-world scenarios, WebScrapeFusion includes out-of-the-box support for automatic pagination, retries, delays, and nested schema extraction, making your scrapers more powerful, maintainable, and production-ready.
✨ Features
- ⚡ Unified API – Write once, switch between a fast `http` engine (axios + cheerio) and a robust `browser` engine (Playwright) with a single option.
- 📄 Automatic Pagination – Scrape multi-page data with `.paginate()`. Just provide the "next" button selector.
- 🔄 Resilience Built-in – Automatic retries, randomized delays, and human-like behavior reduce the chance of blocking or bans.
- 🧩 Nested Schema Extraction – Extract clean, structured JSON from messy HTML using simple schema definitions.
- 📊 Configurable Logging – Choose between console logging, no logging, or plug in your own logger (Winston, Pino, etc.).
- 🌐 SPA & Modern Web Ready – Render React, Vue, Angular, and other JavaScript-heavy pages with Playwright.
- 🛡️ Custom User Agents – Rotate or define specific user agents to mimic real browsers.
📦 Installation
```bash
npm install webscrapefusion
```

⚠️ Note: WebScrapeFusion uses Playwright for its browser engine. On the first install, Playwright will download the required browser binaries.
🚀 Quick Start
Scrape the first quote from a static website.
```javascript
const WebScrapeFusion = require('webscrapefusion');

(async () => {
  const scraper = new WebScrapeFusion();
  const url = 'https://quotes.toscrape.com/';

  const schema = {
    text: '.quote .text',
    author: '.quote .author',
  };

  try {
    const data = await scraper.extract(url, schema);
    console.log(data);
    // {
    //   text: '“The world as we have created it is a process of our thinking...”',
    //   author: 'Albert Einstein'
    // }
  } catch (error) {
    console.error('Scraping failed:', error);
  }
})();
```

📖 API Guide
new WebScrapeFusion(options)
Initialize a new scraper instance.
Options:
- `motor` (`'http'` | `'browser'`) – Scraping engine. Default: `'http'`.
- `retries` (number) – Retries for failed requests. Default: `1`.
- `delay` (`{ min, max }`) – Random delay in ms between requests.
- `logger` (boolean | object) – Logging control (`false`, `true`, or a custom logger).
- `userAgent` (string) – Custom User-Agent. Default: a random popular browser agent.
- `headless` (boolean) – (Browser only) Run in headless mode. Default: `true`.
- `navigationTimeout` (number) – (Browser only) Navigation timeout in ms. Default: `60000`.
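As a sketch, the options above can be combined like this. The custom logger shape (a plain object with `info`/`warn`/`error` methods) and the User-Agent string are illustrative assumptions, not values prescribed by the library:

```javascript
// Hypothetical custom logger: any object exposing info/warn/error methods
// (assumed shape -- a logger such as Winston or Pino could be adapted this way).
const customLogger = {
  info: (msg) => console.log(`[scraper] ${msg}`),
  warn: (msg) => console.warn(`[scraper] ${msg}`),
  error: (msg) => console.error(`[scraper] ${msg}`),
};

// Options object combining the settings documented above.
const options = {
  motor: 'http',                  // fast engine for static pages
  retries: 2,                     // retry failed requests twice
  delay: { min: 500, max: 1500 }, // random per-request delay in ms
  logger: customLogger,           // route log output through our own logger
  userAgent: 'Mozilla/5.0 (compatible; MyScraper/1.0)', // illustrative UA
};

// Then: const scraper = new WebScrapeFusion(options);
```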
scraper.extract(url, schema)
Extracts structured data from a single URL.
- `url` (string) – Page to scrape.
- `schema` (object) – Schema definition.
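Based on the examples later in this README, a schema value can be a plain CSS-selector string or an object for finer control. A minimal sketch of the three shapes (the field names `title`, `link`, and `items` are illustrative):

```javascript
// Illustrative schema showing the three value shapes used in this README.
const schema = {
  // 1. String shorthand: text content of the first matching element.
  title: 'h1.title',

  // 2. Object form: read an attribute and post-process it.
  link: {
    selector: 'a.more',          // CSS selector to match
    attribute: 'href',           // read this attribute instead of text
    transform: (v) => v.trim(),  // clean up the raw value
  },

  // 3. Nested form: apply a sub-schema to every matching element.
  items: {
    selector: 'li.item',
    multiple: true,              // collect an array, one entry per match
    schema: { name: '.name' },
  },
};

// Then: const data = await scraper.extract(url, schema);
```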
scraper.paginate(options)
Scrapes multiple pages following a "next" button.
Options:
- `startUrl` (string) – First page to start scraping.
- `schema` (object) – Schema applied to each page.
- `nextButtonSelector` (string) – CSS selector for the "next" page link.
- `maxPages` (number) – Limit of pages. Default: `Infinity`.
scraper.close()
Closes the scraper.
Essential for browser mode to terminate the Playwright process.
💡 Advanced Examples
1. Automatic Pagination
```javascript
const scraper = new WebScrapeFusion({ motor: 'http' });

const quotes = await scraper.paginate({
  startUrl: 'http://quotes.toscrape.com/',
  nextButtonSelector: 'li.next a',
  maxPages: 3,
  schema: {
    quotes: {
      selector: '.quote',
      multiple: true,
      schema: {
        text: '.text',
        author: '.author',
      },
    },
  },
});

console.log(`Extracted ${quotes.length} quotes.`);
// Extracted 30 quotes.
```

2. Nested Schema Extraction
```javascript
const scraper = new WebScrapeFusion();
const url = 'https://books.toscrape.com/';

const schema = {
  books: {
    selector: 'article.product_pod',
    multiple: true,
    schema: {
      title: 'h3 a',
      price: '.price_color',
      rating: {
        selector: 'p.star-rating',
        attribute: 'class',
        transform: (cls) => cls.replace('star-rating ', ''),
      },
    },
  },
};

const data = await scraper.extract(url, schema);
console.log(data.books);
```

3. Resilient Scraper with Browser
```javascript
const scraper = new WebScrapeFusion({
  motor: 'browser',
  retries: 3,
  delay: { min: 2000, max: 5000 },
  logger: false,
});

const url = 'http://quotes.toscrape.com/js/';

try {
  const data = await scraper.extract(url, { /* schema */ });
  console.log('Data extracted successfully!');
} catch (error) {
  if (error.name === 'NavigationError') {
    console.error(`Navigation failed: ${error.url}`);
  } else {
    console.error('Unexpected error:', error);
  }
} finally {
  await scraper.close();
}
```

🤝 Contributing
Contributions are welcome!
- Fork the repo
- Create a branch: `git checkout -b my-feature`
- Make your changes and add tests
- Run the tests: `npm test`
- Submit a pull request
📜 License
This project is licensed under the MIT License.
See the LICENSE file for details.
Created by Fernando Martini (fmartini23) 🚀
