
scrappey

v1.2.4

Published


Downloads

38

Maintainers

demonmartin

Keywords

cloudflare anti bot bypass, cloudflare solver, scraper, scraping, cloudflare scraper, cloudflare turnstile solver, turnstile solver, data extraction, web scraping, website scraping, data scraping, scraping tool, API scraping, scraping solution, web data extraction, website data extraction, web scraping library, website scraping library, cloudflare bypass, scraping API, web scraping API, cloudflare protection, data scraping tool, scraping service, cloudflare challenge solver, web scraping solution, web scraping service, cloudflare scraping, cloudflare bot protection, scraping framework, scraping library, cloudflare bypass tool, cloudflare anti-bot, cloudflare protection bypass, cloudflare solver tool, web scraping tool, data extraction library, website scraping tool, cloudflare turnstile bypass, cloudflare anti-bot solver, turnstile solver tool, cloudflare scraping solution, website data scraper, cloudflare challenge bypass, web scraping framework, cloudflare challenge solver tool, web data scraping, data scraper, scraping data from websites, SEO, data mining, data harvesting, data crawling, web scraping software, website scraping tool, web scraping framework, data extraction tool, web data scraper, data scraping service, scraping automation, scraping tutorial, scraping code, scraping techniques, scraping best practices, scraping scripts, scraping examples, scraping challenges, scraping tricks, scraping tips, scraping strategies, scraping methods, cloudflare protection bypass, cloudflare security bypass, web scraping Python, web scraping JavaScript, web scraping PHP, web scraping Ruby, web scraping Java, web scraping C#, web scraping Node.js, web scraping BeautifulSoup, web scraping Selenium, web scraping Scrapy, web scraping Puppeteer, web scraping requests, web scraping headless browser, web scraping dynamic content, web scraping AJAX, web scraping pagination, web scraping authentication, web scraping cookies, web scraping session management, web scraping data parsing, web scraping data cleaning, web scraping data analysis, web scraping data visualization, web scraping legal issues, web scraping ethics, web scraping compliance, web scraping regulations, web scraping IP blocking, web scraping anti-scraping measures, web scraping proxy, web scraping CAPTCHA solving, web scraping IP rotation, web scraping rate limiting, web scraping data privacy, web scraping consent, web scraping terms of service, web scraping robots.txt, web scraping data storage, web scraping database integration, web scraping data integration, web scraping API integration, web scraping data export, web scraping data processing, web scraping data transformation, web scraping data enrichment, web scraping data validation, web scraping error handling, web scraping scalability, web scraping performance optimization, web scraping distributed scraping, web scraping cloud-based scraping, web scraping serverless scraping, akamai, datadome, perimetex, shape, kasada, queue-it, incapsula

Readme

🤖 Scrappey Wrapper - Data Extraction Made Easy

Introducing Scrappey, your comprehensive website scraping solution provided by Scrappey.com. With Scrappey's powerful and user-friendly API, you can effortlessly retrieve data from websites, including those protected by Cloudflare. Join Scrappey today and revolutionize your data extraction process. 🚀

Disclaimer: Please ensure that your web scraping activities comply with the website's terms of service and legal regulations. Scrappey is not responsible for any misuse or unethical use of the library. Use it responsibly and respect the website's policies.

Website: https://scrappey.com/
GitHub: https://github.com/


Installation

Use npm to install the Scrappey library. 💻

npm install scrappey

Usage

Require the Scrappey library in your code. 📦

const Scrappey = require('scrappey');

Create an instance of Scrappey by providing your Scrappey API key. 🔑

const apiKey = 'YOUR_API_KEY';
const scrappey = new Scrappey(apiKey);

Example

Here's an example of how to use Scrappey. 🚀

const Scrappey = require('scrappey');

// Replace 'YOUR_API_KEY' with your Scrappey API key
const apiKey = 'YOUR_API_KEY';

// Create an instance of Scrappey
const scrappey = new Scrappey(apiKey);

// Serialize an object into an application/x-www-form-urlencoded string
function getQueryString(object) {
    const queryString = Object.keys(object)
        .map(key => encodeURIComponent(key) + '=' + encodeURIComponent(object[key]))
        .join('&');
    return queryString;
}

async function run() {
    try {
        // Create a session
        const sessionRequest = await scrappey.createSession();
        const { session } = sessionRequest;

        console.log('Created Session:', session);

        // Make a GET request
        const getRequestResult = await scrappey.getRequest({
            url: 'https://reqres.in/api/users',
            session,
        });
        console.log('GET Request Result:', getRequestResult);

        // Make a POST request using FormData
        const postFormData = { username: 'user123', password: 'pass456' };
        const postRequestResultForm = await scrappey.postRequest({
            url: 'https://reqres.in/api/users',
            postData: getQueryString(postFormData),
            session
        });
        console.log('POST Request Result (FormData):', postRequestResultForm);

        // Make a POST request using JSON data
        const postJsonData = { email: '[email protected]', password: 'pass123' };
        const postRequestResultJson = await scrappey.postRequest({
            url: 'https://reqres.in/api/users',
            postData: JSON.stringify(postJsonData),
            customHeaders: {
                'Content-Type': 'application/json', // Optional, but recommended when sending JSON data
                // 'auth': 'token'
            },
            session,
            // proxyCountry: "UnitedStates"
            // & more!
        });
        console.log('POST Request Result (JSON):', postRequestResultJson);

        // Manually destroy the session (automatically destroys after 4 minutes)
        await scrappey.destroySession(session);
        console.log('Session destroyed.');
    } catch (error) {
        console.error(error);
    }
}

run();

Scrappey Wrapper Features:

  • Client-side error correction and handling
  • Easy session management with session creation and destruction
  • Support for GET and POST requests
  • Support for both FormData and JSON data formats
  • Customizable headers for requests
  • Robust and user-friendly
  • JSDocs supported
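Since the wrapper reports request failures by rejecting the returned promise, a retry layer is easy to add on top. The sketch below is illustrative and not part of the scrappey API; the `fetchWithRetry` name, attempt count, and backoff delay are all assumptions:

```javascript
// Hypothetical retry helper: retries a failing async request function
// with exponential backoff. Not part of the scrappey API itself.
async function fetchWithRetry(requestFn, retries = 3, delayMs = 500) {
    let lastError;
    for (let attempt = 0; attempt < retries; attempt++) {
        try {
            return await requestFn();
        } catch (error) {
            lastError = error;
            // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
            await new Promise(res => setTimeout(res, delayMs * 2 ** attempt));
        }
    }
    throw lastError;
}

// Usage (assumes `scrappey` and `session` from the example above):
// const result = await fetchWithRetry(() =>
//     scrappey.getRequest({ url: 'https://reqres.in/api/users', session })
// );
```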

For more information, please visit the official Scrappey documentation. 📚

License

This project is licensed under the MIT License.
