
subto-one

v2.0.0

Published

AI-Powered Website Analysis Platform - 100% Free Forever

Readme

Subto.One

Comprehensive Website Analysis & AI Code Surgery - 100% Free Forever

Features

  • Deep Runtime Analysis: Full Playwright-based crawling with JavaScript execution
  • Network Interception: Captures all requests, responses, and timing data
  • Interaction Simulation: Programmatically tests buttons, inputs, and interactive elements
  • Lighthouse Integration: Performance, accessibility, SEO, and best practices scoring
  • SEO & Markup Validation: W3C Validator, Google Mobile-Friendly Test integration
  • Performance Testing: Google PageSpeed, WebPageTest, GTmetrix integration
  • Security Audit: Mozilla Observatory, Security Headers, SSL Labs, Safe Browsing integration
  • Malware Detection: VirusTotal, Hybrid Analysis, URLScan.io support
  • AI API Selection: Ask AI to choose the best scanning API for your needs
  • No-JS Differential: Compares JS-enabled vs disabled behavior
  • AI Code Surgeon: Upload your code and get AI-powered fixes

Quick Start

Prerequisites

  • Node.js 18+
  • npm or yarn

Installation

# Install dependencies
npm install

# Install Playwright browsers
npx playwright install chromium

# Copy environment variables
cp .env.example .env

Configuration

Edit .env and add your OpenRouter API key for AI features:

OPENROUTER_API_KEY=your_key_here

Get a free API key at OpenRouter.

Optional API Keys

Add these to your .env for enhanced scanning capabilities (all have generous free tiers):

# Performance & Speed (Scalable Free Options)
GOOGLE_PAGESPEED_API_KEY=your_key      # Google PageSpeed Insights
WEBPAGETEST_API_KEY=your_key           # WebPageTest (public instance available)
GOOGLE_MOBILE_FRIENDLY_API_KEY=your_key # Google Mobile-Friendly Test

# SEO & Markup Validation
# W3C Markup Validator (no key needed)  # HTML validation for SEO

# Security & Malware (No/Low Limits)
VIRUSTOTAL_API_KEY=your_key            # VirusTotal (500 requests/day free)
GOOGLE_SAFE_BROWSING_API_KEY=your_key  # Google Safe Browsing
URLSCAN_API_KEY=your_key               # URLScan.io (free tier)
HYBRID_ANALYSIS_API_KEY=your_key       # Hybrid Analysis (free tier)

# Always Free (No API Keys Needed)
# Mozilla Observatory, Security Headers, SSL Labs

All APIs have free tiers. See .env.example for details.

Running

# Development
npm run dev

# Production
npm start

Visit http://localhost:3000

Architecture

quantumreasoning/
├── public/                 # Frontend assets
│   ├── index.html         # Single-page app
│   ├── styles.css         # Exact styling spec
│   └── app.js             # Frontend logic
├── server/
│   ├── index.js           # Express server + WebSocket
│   └── modules/
│       ├── scan-pipeline.js   # 7-phase analysis engine
│       ├── ai-analyzer.js     # OpenRouter AI integration
│       ├── file-manager.js    # Upload/ZIP handling
│       └── data-store.js      # In-memory storage
├── package.json
└── .env.example

Scan Pipeline

  1. Fetch Initial HTML - Loads page with Playwright
  2. Execute JavaScript Runtime - Captures all JS files and builds AST
  3. Intercept Network Requests - Records all network activity
  4. Simulate Interactions - Hovers, clicks, types on interactive elements
  5. Run Lighthouse - Google PageSpeed Insights API
  6. Audit Security - Mozilla Observatory + OWASP checks
  7. No-JS Differential - Compares behavior without JavaScript
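The seven phases above run in order, with later phases able to read earlier results (the no-JS differential needs the JS-enabled run to compare against). A minimal sketch of such a sequential runner; the phase keys and `runPipeline` helper are illustrative, not the actual `scan-pipeline.js` API:

```javascript
// Illustrative sequential pipeline runner (not the real server/modules/scan-pipeline.js).
const PHASES = [
  'fetch-html',
  'execute-js',
  'intercept-network',
  'simulate-interactions',
  'run-lighthouse',
  'audit-security',
  'no-js-differential',
];

// Each handler receives the accumulated results so later phases
// (e.g. the no-JS differential) can compare against earlier ones.
async function runPipeline(handlers, url) {
  const results = {};
  for (const phase of PHASES) {
    const handler = handlers[phase];
    results[phase] = handler ? await handler(url, results) : { skipped: true };
  }
  return results;
}
```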

API Endpoints

Scan

POST /api/v1/scan
Body: { "url": "https://subto.one" }
Response: { "scanId": "uuid", "status": "started" }

Notes on queuing and rate limiting:

  • Concurrency limit: the server allows up to MAX_CONCURRENT_SCANS (default 50) scans to run concurrently.
  • If the server is at capacity, API clients can opt into an automatic queue by sending the header X-Accept-Queue: true with the POST request. In that case the request will be accepted and queued; the response will be 202 Accepted with JSON { scanId, status: 'queued', queuePosition }.
  • If the client does not opt into queuing and the server is at capacity, the API will return 429 Too Many Requests with a short message instructing the client to retry or set X-Accept-Queue: true.
  • The UI automatically sets X-Accept-Queue: true and shows a "Queuing" state with the user's position (e.g. "You are #3 in queue").
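A client that opts into queuing and handles the three documented outcomes (started, queued, at capacity) might look like the sketch below. `startScan` and `describeScanResponse` are illustrative helpers, not part of the project:

```javascript
// Interpret the documented responses of POST /api/v1/scan.
function describeScanResponse(status, body) {
  if (status === 429) return 'at capacity: retry or set X-Accept-Queue: true';
  if (status === 202 && body.status === 'queued') {
    return `queued: you are #${body.queuePosition} in queue`;
  }
  return `scan ${body.scanId} ${body.status}`;
}

// Illustrative client using Node 18+ global fetch.
async function startScan(url, { acceptQueue = true } = {}) {
  const res = await fetch('http://localhost:3000/api/v1/scan', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      ...(acceptQueue ? { 'X-Accept-Queue': 'true' } : {}),
    },
    body: JSON.stringify({ url }),
  });
  const body = await res.json();
  return { status: res.status, body, message: describeScanResponse(res.status, body) };
}
```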

Get Results

GET /api/v1/scan/:scanId
Response: Full scan data
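Since POST /api/v1/scan returns immediately with a scanId, a client typically polls this endpoint until the scan finishes. The sketch below assumes a `status: 'complete'` field on the response, which is not specified above, and uses a capped exponential backoff:

```javascript
// Capped exponential backoff: 1s, 2s, 4s, ... up to 10s.
function nextDelay(attempt, base = 1000, cap = 10000) {
  return Math.min(base * 2 ** attempt, cap);
}

// Illustrative poller; the endpoint's exact completion field is an assumption.
async function waitForScan(scanId, { maxAttempts = 10 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`http://localhost:3000/api/v1/scan/${scanId}`);
    const data = await res.json();
    if (data.status === 'complete') return data;
    await new Promise(r => setTimeout(r, nextDelay(attempt)));
  }
  throw new Error(`scan ${scanId} did not complete in time`);
}
```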

AI Analysis

POST /api/v1/ai/analyze
Body: { "scanId": "uuid", "files": [...] }
Response: { "summary": "...", "changes": [...] }

File Upload

Supported Types

  • JavaScript: .js, .ts, .jsx, .tsx
  • Styles: .css, .scss
  • Markup: .html, .vue, .svelte
  • Data: .json, .env, .md

Limits

  • Single file: 50 MB max
  • Total upload: 250 MB max
  • File count: 5,000 files max

Excluded Folders

  • node_modules/
  • .git/
  • .next/, dist/, build/
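The supported types, limits, and exclusions above can be checked client-side before uploading. A sketch of such a pre-filter (illustrative; per the Security section, the server validates uploads regardless):

```javascript
// Documented upload rules: supported extensions, excluded folders, size/count limits.
const SUPPORTED_EXT = new Set([
  '.js', '.ts', '.jsx', '.tsx', '.css', '.scss',
  '.html', '.vue', '.svelte', '.json', '.env', '.md',
]);
const EXCLUDED_DIRS = ['node_modules', '.git', '.next', 'dist', 'build'];
const MAX_FILE_BYTES = 50 * 1024 * 1024;   // 50 MB per file
const MAX_TOTAL_BYTES = 250 * 1024 * 1024; // 250 MB total
const MAX_FILES = 5000;

function isUploadable(path) {
  const parts = path.split('/');
  if (parts.some(p => EXCLUDED_DIRS.includes(p))) return false;
  const dot = path.lastIndexOf('.');
  return dot !== -1 && SUPPORTED_EXT.has(path.slice(dot));
}

// files: [{ path, size }] -> files that pass the per-file rules,
// throwing if the kept set exceeds the aggregate limits.
function filterUpload(files) {
  const kept = files.filter(f => isUploadable(f.path) && f.size <= MAX_FILE_BYTES);
  if (kept.length > MAX_FILES) throw new Error('too many files');
  const total = kept.reduce((sum, f) => sum + f.size, 0);
  if (total > MAX_TOTAL_BYTES) throw new Error('upload too large');
  return kept;
}
```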

AI Models (Free Tier)

  • Default: deepseek/deepseek-r1:free
  • Code Analysis: qwen/qwen3-coder:free
  • Security: mistralai/devstral-small:free

Rate Limits

  • 5 seconds between requests
  • 50 requests per day
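A client can mirror these limits locally to avoid wasted requests. A minimal throttle sketch (illustrative helper; the server enforces the real limits):

```javascript
// Client-side throttle: at least minGapMs between requests, at most perDay per day.
// `now` is injectable for testing.
function createThrottle({ minGapMs = 5000, perDay = 50, now = Date.now } = {}) {
  let lastAt = -Infinity;
  let dayStart = now();
  let count = 0;
  return function tryAcquire() {
    const t = now();
    if (t - dayStart >= 24 * 60 * 60 * 1000) { dayStart = t; count = 0; } // new day window
    if (count >= perDay) return { ok: false, reason: 'daily limit reached' };
    if (t - lastAt < minGapMs) return { ok: false, reason: 'too soon' };
    lastAt = t;
    count++;
    return { ok: true };
  };
}
```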

Data Retention

All scan data is automatically deleted after 24 hours. No user accounts required.

Security

  • No binary file uploads
  • Server-side validation even if client-side passes
  • Streaming uploads to disk, not memory
  • Directory traversal prevention
  • MIME type validation

License

MIT

Deployment (Docker + nginx reverse-proxy)

This repo includes a simple production-ready scaffold under deploy/ that runs the Node app behind an nginx reverse proxy, which terminates TLS and forwards requests (including WebSocket upgrades) to the app.

Quick local test (self-signed cert):

cd deploy
./mk-self-signed.sh        # creates deploy/certs/fullchain.pem and privkey.pem
docker compose up --build

Production notes:

  • Use Node.js 22+ in your runtime (required by Lighthouse v13).
  • Terminate TLS at your load balancer (Cloud Load Balancer, nginx, etc.) and forward plain HTTP to the Node container.
  • Ensure WebSocket upgrades are forwarded by the proxy.
  • Provide secrets via environment variables or your hosting secret manager (do NOT commit secrets).
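The WebSocket-upgrade forwarding mentioned above hinges on passing the Upgrade and Connection headers through the proxy. An illustrative nginx server block (the actual config lives under deploy/ and may differ; the upstream name app and paths are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://app:3000;
        proxy_http_version 1.1;
        # Required for WebSocket upgrades:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```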