apex-scraper
v1.1.1
A stealth web scraper for crawling websites and extracting clean text content with page and word limits.
Apex Scraper
- Version 1.1.1
Apex Scraper is a web scraping tool for websites, applications, and data collection workflows. It automatically discovers all reachable pages of a target website and returns their extracted content in a structured format.
Features
- Cloudflare, Anti-Bot, and Rate-Limit bypass techniques
- High efficiency with low RAM usage
- Semi-open-source architecture
Limits & Plans
| Feature / Limit | Free Version | Pro Version |
| --------------- | ------------ | ----------- |
| Max Pages       | 30           | 1050        |
| Max Words       | 128,000      | 4,480,000   |
| Crawl Speed     | Normal       | 10× Faster  |
Limits shown above are enforced automatically by the scraper.
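As a rough illustration of what such a cap looks like (a sketch only, not the package's actual implementation), a word limit like `maxWord` can be enforced by truncating extracted text:

```javascript
// Sketch of a maxWord cap — illustrative only, not Apex Scraper's internals.
function capWords(text, maxWord) {
  const words = text.split(/\s+/).filter(Boolean);
  if (words.length <= maxWord) {
    return { text, wordCount: words.length, truncated: false };
  }
  return {
    text: words.slice(0, maxWord).join(' '),
    wordCount: maxWord,
    truncated: true
  };
}

console.log(capWords('one two three four five', 3));
// → { text: 'one two three', wordCount: 3, truncated: true }
```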
Support
If you need help, have questions, or want to report bugs, you can join the official support server:
- Discord Support Server: https://discord.gg/5AFHy8vwZ3
Community support, announcements, and Pro updates are shared there.
Documentation
Parameters
The `scrape` function accepts a single options object.
Required
- `link` (string) – Target website URL (must start with `https://`)
Optional
- `maxPage` (number) – Maximum number of pages to crawl
- `maxWord` (number) – Maximum number of words to extract
New in v1.1.0
- `screenshot` (boolean) – If `true`, captures a full-page screenshot (Base64)
- `extractImages` (boolean) – If `true`, extracts all images (img tags, background images, lazy-loaded images)
- `extractHtml` (boolean) – If `true`, extracts the raw HTML source of the page
- `waitDuration` (number) – Time to wait after page load before extraction (in milliseconds)
- `waitSelector` (string) – Waits until the specified CSS selector appears on the page
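Taken together, the required and optional parameters form a single options object. The values below are illustrative examples, not defaults:

```javascript
// Illustrative options object combining every documented parameter:
const options = {
  link: 'https://example.com', // required, must start with https://
  maxPage: 5,                  // crawl at most 5 pages
  maxWord: 10000,              // extract at most 10,000 words
  screenshot: true,            // return a Base64 full-page screenshot
  extractImages: true,         // collect img, background, and lazy-loaded images
  extractHtml: false,          // skip the raw HTML source
  waitDuration: 2000,          // wait 2 seconds after page load before extracting
  waitSelector: '#content'     // or wait until this CSS selector appears
};

// The whole object is then passed as the single argument:
// const result = await scrape(options);
```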
Usage
Apex Scraper is published as an ES Module (ESM) package. It works in both ESM and CommonJS projects.
ESM Usage (Recommended)
```js
import { scrape } from 'apex-scraper';

const result = await scrape({
  link: 'https://example.com',
  maxPage: 5,
  maxWord: 10000
});

console.log(result);
```

CommonJS Usage
If your project uses CommonJS (`require`), you can still use Apex Scraper via a dynamic `import()`:
```js
(async () => {
  const { scrape } = await import('apex-scraper');

  const result = await scrape({
    link: 'https://example.com',
    maxPage: 5,
    maxWord: 10000
  });

  console.log(result);
})();
```

Example Response
```json
{
  "duration": 17.691,
  "pages": [
    {
      "url": "https://example.com",
      "content": "Extracted page text...",
      "wordCount": 336,
      "screenshot": "<base64-string>",
      "images": [
        "https://example.com/image1.png",
        "https://example.com/image2.jpg"
      ],
      "html": "<html>...</html>"
    }
  ]
}
```

This response includes:
- `duration` – Total scrape duration (seconds)
- `pages` – Array of scraped pages, where each entry contains:
  - `url` – Page URL
  - `content` – Extracted visible text
  - `wordCount` – Word count for the page
  - `screenshot` (optional) – Full-page screenshot (Base64)
  - `images` (optional) – Extracted image URLs (img, background, lazy-load)
  - `html` (optional) – Raw HTML source of the page
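A result shaped like the example above can be post-processed with plain JavaScript. The object below simply mirrors the sample response; no package is needed:

```javascript
// A result object shaped like the example response above:
const result = {
  duration: 17.691,
  pages: [
    {
      url: 'https://example.com',
      content: 'Extracted page text...',
      wordCount: 336
    }
  ]
};

// Sum word counts across all scraped pages:
const totalWords = result.pages.reduce((sum, p) => sum + p.wordCount, 0);

// Join the visible text of every page into one document:
const allText = result.pages.map(p => p.content).join('\n\n');

console.log(`${result.pages.length} page(s), ${totalWords} words in ${result.duration}s`);
// → 1 page(s), 336 words in 17.691s
```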
