
@scraping-proxy/auth-cli

v0.5.0

CLI tool that captures browser authentication state (cookies, localStorage) for use with @scraping-proxy/client.

Some sites require a logged-in session that cannot be replicated via HTTP headers alone. This tool opens a real browser, lets you log in manually, then saves the resulting auth state to a local file. That file is passed to client.scrape() and injected into the Playwright context on the server — entirely in memory, never stored.
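The saved file follows Playwright's storageState format: a list of cookies plus per-origin localStorage entries. A minimal sketch of its shape, with illustrative values (the exact cookie names and tokens below are made up):

```typescript
// Shape of the state file, per Playwright's storageState format.
interface StorageState {
  cookies: Array<{
    name: string;
    value: string;
    domain: string;
    path: string;
    expires: number; // Unix time in seconds; -1 for session cookies
    httpOnly: boolean;
    secure: boolean;
    sameSite: 'Strict' | 'Lax' | 'None';
  }>;
  origins: Array<{
    origin: string;
    localStorage: Array<{ name: string; value: string }>;
  }>;
}

// Illustrative example of what ./state.json might contain after login:
const example: StorageState = {
  cookies: [
    {
      name: 'session_id',
      value: 'abc123',
      domain: 'example.com',
      path: '/',
      expires: -1,
      httpOnly: true,
      secure: true,
      sameSite: 'Lax',
    },
  ],
  origins: [
    {
      origin: 'https://example.com',
      localStorage: [{ name: 'auth_token', value: 'opaque-token' }],
    },
  ],
};
```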

Prerequisites

Node.js ≥ 20. For Google or other OAuth providers, a real browser must be used:

# Install system Chrome if not already present, then:
pnpm exec playwright install chrome

For non-OAuth sites the bundled Chromium works fine:

pnpm exec playwright install chromium

Usage

scraping-proxy-auth <url> [options]

Arguments:
  url                    Page to open (typically the login page)

Options:
  -o, --output <file>    Where to write the state file  (default: state.json)
  -b, --browser <name>   Browser to use: chromium | chrome | edge  (default: chromium)
  -h, --help             Show help

Basic

scraping-proxy-auth https://example.com/login

Opens a browser at the URL. Log in, then press Enter in the terminal. The auth state is saved to ./state.json.
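After the CLI writes the file, it can help to confirm that something was actually captured (an empty cookies array usually means the login did not complete before you pressed Enter). A minimal sketch, using a hypothetical summarize helper that is not part of this package:

```typescript
// Hypothetical helper (not part of the CLI) for sanity-checking a saved
// state file: parse it and report what was captured.
interface CapturedState {
  cookies?: Array<{ name: string }>;
  origins?: Array<{ origin: string }>;
}

function summarize(state: CapturedState): string {
  const cookies = state.cookies?.length ?? 0;
  const origins = state.origins?.length ?? 0;
  return `${cookies} cookie(s), localStorage for ${origins} origin(s)`;
}

// Usage:
//   import { readFile } from 'node:fs/promises';
//   const state = JSON.parse(await readFile('./state.json', 'utf-8'));
//   console.log(summarize(state));
```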

Google / OAuth sites

Google blocks Playwright's bundled Chromium. Use --browser chrome to launch your system Chrome instead:

scraping-proxy-auth https://accounts.google.com --browser chrome

Custom output path

scraping-proxy-auth https://example.com/login --output ./auth/example.json

Using the state file

Pass the file contents to client.scrape(). The server injects it into the Playwright browser context and discards it after the request — it is never written to disk or stored in the database.

import { readFile } from 'node:fs/promises';
import { ScrapingProxyClient } from '@scraping-proxy/client';
import type { BrowserStorageState } from '@scraping-proxy/client';

// Load the state file captured by scraping-proxy-auth.
const authState: BrowserStorageState = JSON.parse(
	await readFile('./state.json', 'utf-8')
);

const client = new ScrapingProxyClient({
	baseUrl: 'https://your-proxy.example.com',
	apiToken: 'oat_xxx',
});

// Browser mode is required so the auth state can be injected
// into the server's Playwright context.
const { data: job } = await client.scrape({
	url: 'https://example.com/dashboard',
	scrapeMode: 'browser',
	authState,
	selectors: {
		title: { selector: 'h1' },
	},
});

const result = await client.waitForJob(job.jobId);
console.log(result.result);

Security

  • The state file contains session cookies. Treat it like a password — do not commit it to version control.
  • Add state.json (or your custom output path) to .gitignore.
  • The server never persists the auth state: it lives only in the Redis queue payload (ephemeral) and in memory during scrape execution.
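For example, a .gitignore covering both the default output and a custom ./auth/ directory (paths illustrative, matching the examples above) might look like:

```
# Captured auth state — contains live session cookies, never commit
state.json
auth/
```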