webpeel v0.7.0
Fast web fetcher for AI agents - stealth mode, crawl mode, page actions, structured extraction, PDF parsing, smart escalation from simple HTTP to headless browser.
WebPeel
Turn any web page into clean markdown. Smart escalation. Stealth mode. Crawl mode. Free to start.
npx webpeel https://news.ycombinator.com

Output:
# Hacker News
**New** | **Past** | **Comments** | **Ask** | **Show** | **Jobs** | **Submit**
## Top Stories
1. **Show HN: WebPeel – Turn any webpage into AI-ready markdown**
[https://github.com/JakeLiuMe/webpeel](https://github.com/JakeLiuMe/webpeel)
142 points by jakeliu 2 hours ago | 31 comments
2. **The End of the API Era**
...

Why WebPeel?
| | WebPeel | Firecrawl | Jina Reader | MCP Fetch |
|---|:---:|:---:|:---:|:---:|
| Free tier | ✅ 125/week | 500 one-time | ❌ Cloud only | ✅ Unlimited |
| Smart escalation | ✅ HTTP→Browser→Stealth | Manual mode | ❌ No | ❌ No |
| Stealth mode | ✅ All plans | ✅ Yes | ⚠️ Limited | ❌ No |
| Crawl + Map | ✅ All plans | ✅ Yes | ❌ No | ❌ No |
| AI Extraction | ✅ BYOK (any LLM) | ✅ Built-in | ❌ No | ❌ No |
| Branding | ✅ Design system | ✅ Yes | ❌ No | ❌ No |
| Change Tracking | ✅ Local snapshots | ✅ Server-side | ❌ No | ❌ No |
| Python SDK | ✅ Zero deps | ✅ httpx/pydantic | ❌ No | ❌ No |
| LangChain | ✅ Official | ✅ Official | ❌ No | ❌ No |
| MCP Server | ✅ Built-in (6 tools) | ✅ Separate repo | ❌ No | ✅ Yes |
| Token Budget | ✅ --max-tokens | ❌ No | ❌ No | ❌ No |
| Zero config | ✅ npx webpeel | ❌ API key required | ❌ API key required | ✅ Yes |
| Pricing | $0 local / $9-$29 | $16-$333/mo | $10/mo+ | Free |
| License | MIT | AGPL-3.0 | Proprietary | MIT |
WebPeel gives you Firecrawl's power with a generous free tier and an MIT license.
Usage Model
WebPeel uses a weekly usage budget for all users (CLI and API):
- First 25 fetches: No account needed — try it instantly
- Free tier: 125 fetches/week (resets every Monday)
- Pro tier: 1,250 fetches/week ($9/mo)
- Max tier: 6,250 fetches/week ($29/mo)
Credit costs: Basic fetch = 1 credit, Stealth mode = 5 credits, Search = 1 credit, Crawl = 1 credit/page
Open source: The CLI is MIT licensed — you can self-host if needed. But the hosted API requires authentication after 25 fetches.
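The credit arithmetic above can be sketched as a small helper. `estimateCredits` and the `Operation` shape are hypothetical illustrations, but the per-operation costs are the ones listed above:

```typescript
// Credit costs from the usage model above: basic fetch = 1,
// stealth = 5, search = 1, crawl = 1 credit per page.
type Operation =
  | { kind: 'fetch' }
  | { kind: 'stealth' }
  | { kind: 'search' }
  | { kind: 'crawl'; pages: number };

function estimateCredits(ops: Operation[]): number {
  return ops.reduce((total, op) => {
    switch (op.kind) {
      case 'fetch': return total + 1;
      case 'stealth': return total + 5;
      case 'search': return total + 1;
      case 'crawl': return total + op.pages; // 1 credit per crawled page
    }
  }, 0);
}

// A week of mixed usage against the free tier's 125-credit budget:
const week: Operation[] = [
  { kind: 'fetch' },
  { kind: 'stealth' },
  { kind: 'crawl', pages: 20 },
  { kind: 'search' },
];
console.log(estimateCredits(week)); // 1 + 5 + 20 + 1 = 27
```

A 20-page crawl plus a stealth fetch costs roughly a fifth of the free weekly budget, which is why stealth is reserved for pages that actually block simple fetches.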
Highlights
- 🎭 Stealth Mode — Bypass bot detection with playwright-extra stealth plugin. Works on sites that block regular scrapers.
- 🕷️ Crawl Mode — Follow links and extract entire sites. Respects robots.txt and rate limits automatically.
- 💰 Generous Free Tier — 125 free fetches every week. First 25 work instantly with no signup. Basic fetch + JS rendering included free.
Quick Start
CLI (Zero Install)
# First 25 fetches work instantly, no signup
npx webpeel https://example.com
# After 25 fetches, sign up for free (125/week)
webpeel login
# Check your usage
webpeel usage
# Stealth mode (bypass bot detection)
npx webpeel https://protected-site.com --stealth
# Page actions: click, scroll, type before extraction (v0.4.0)
npx webpeel https://example.com --action "click:.cookie-accept" --action "wait:2000" --action "scroll:bottom"
# Structured data extraction with CSS selectors (v0.4.0)
npx webpeel https://example.com --extract '{"title": "h1", "price": ".price", "description": ".desc"}'
# Token budget: truncate output to max tokens (v0.4.0)
npx webpeel https://example.com --max-tokens 2000
# Map discovery: find all URLs on a domain via sitemap & crawling (v0.4.0)
npx webpeel map https://example.com --max-urls 5000
# Extract branding/design system from a page (v0.5.0)
npx webpeel brand https://example.com
# Track content changes over time (v0.5.0)
npx webpeel track https://example.com
# Crawl a website (follow links, respect robots.txt)
npx webpeel crawl https://example.com --max-pages 20 --max-depth 2
# Sitemap-first crawl with content deduplication (v0.4.0)
npx webpeel crawl https://example.com --sitemap-first --max-pages 100
# JSON output with metadata
npx webpeel https://example.com --json
# Cache results locally (avoid repeat fetches)
npx webpeel https://example.com --cache 5m
# Extract just the links from a page
npx webpeel https://example.com --links
# Extract just the metadata (title, description, author)
npx webpeel https://example.com --meta
# Batch fetch from file or stdin
cat urls.txt | npx webpeel batch
# Force browser rendering (for JS-heavy sites)
npx webpeel https://x.com/elonmusk --render
# Wait for dynamic content
npx webpeel https://example.com --render --wait 3000
# View your config and cache stats
webpeel config

Library (TypeScript)
npm install webpeel

import { peel } from 'webpeel';
// Simple usage
const result = await peel('https://example.com');
console.log(result.content); // Clean markdown
console.log(result.metadata); // { title, description, author, ... }
console.log(result.tokens); // Estimated token count
// Branding extraction (v0.5.0)
const brand = await peel('https://stripe.com', { branding: true, render: true });
console.log(brand.branding); // { colors, fonts, typography, cssVariables, ... }
// Change tracking (v0.5.0)
const tracked = await peel('https://example.com/pricing', { changeTracking: true });
console.log(tracked.changeTracking); // { changeStatus: 'new' | 'same' | 'changed', diff: ... }
// AI extraction with your own LLM key (v0.5.0)
const extracted = await peel('https://example.com', {
extract: { prompt: 'Extract the pricing plans', llmApiKey: 'sk-...' },
});
console.log(extracted.extracted);

Python SDK (v0.5.0)
pip install webpeel

from webpeel import WebPeel
client = WebPeel() # Free tier, no API key needed
# Scrape
result = client.scrape("https://example.com")
print(result.content) # Clean markdown
# Search
results = client.search("python web scraping")
# Crawl (async job)
job = client.crawl("https://docs.example.com", limit=100)
status = client.get_job(job.id)

Zero dependencies. Pure Python 3.8+ stdlib. Full docs →
MCP Server (Claude Desktop, Cursor, VS Code, Windsurf)
WebPeel provides six MCP tools: webpeel_fetch, webpeel_search, webpeel_crawl, webpeel_map, webpeel_extract, and webpeel_batch.
Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"webpeel": {
"command": "npx",
"args": ["-y", "webpeel", "mcp"]
}
}
}

Cursor
Add to Cursor Settings → MCP Servers:
{
"mcpServers": {
"webpeel": {
"command": "npx",
"args": ["-y", "webpeel", "mcp"]
}
}
}

VS Code (with Cline or other MCP clients)
Create or edit ~/.vscode/mcp.json:
{
"mcpServers": {
"webpeel": {
"command": "npx",
"args": ["-y", "webpeel", "mcp"]
}
}
}

Or install with one click:
Windsurf
Add to ~/.codeium/windsurf/mcp_config.json:
{
"mcpServers": {
"webpeel": {
"command": "npx",
"args": ["-y", "webpeel", "mcp"]
}
}
}

Use with Claude Code
One command to add WebPeel to Claude Code:
claude mcp add webpeel -- npx -y webpeel mcp

Or add to your project's .mcp.json for team sharing:
{
"mcpServers": {
"webpeel": {
"command": "npx",
"args": ["-y", "webpeel", "mcp"]
}
}
}

This gives Claude Code access to:
- webpeel_fetch — Fetch any URL as clean markdown (with stealth mode, actions, extraction & token budget)
- webpeel_search — Search the web via DuckDuckGo
- webpeel_batch — Fetch multiple URLs concurrently
- webpeel_crawl — Crawl websites following links (with sitemap-first & deduplication)
- webpeel_map — Discover all URLs on a domain via sitemap.xml & link crawling
- webpeel_extract — Extract structured data using CSS selectors or JSON schema
How It Works: Smart Escalation
WebPeel tries the fastest method first, then escalates only when needed:
┌─────────────────────────────────────────────────────────────┐
│ Smart Escalation │
└─────────────────────────────────────────────────────────────┘
Simple HTTP Fetch → Browser Rendering → Stealth Mode
~200ms ~2 seconds ~5 seconds
│ │ │
├─ User-Agent headers ├─ Full JS execution ├─ Anti-detect
├─ Cheerio parsing ├─ Wait for content ├─ Fingerprint mask
├─ Fast & cheap ├─ Screenshots ├─ Cloudflare bypass
│ │ │
▼ ▼ ▼
Works for 80% Works for 15% Works for 5%
of websites         (JS-heavy sites)     (bot-protected)

Why this matters:
- Speed: Don't waste 2 seconds rendering when 200ms will do
- Cost: Headless browsers burn CPU and memory
- Reliability: Auto-retry with browser if simple fetch fails
WebPeel automatically detects blocked requests (403, 503, Cloudflare challenges) and retries with browser mode. You get the best of both worlds.
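The escalation decision described above can be sketched as pure logic. `nextMethod`, `FetchSignals`, and the challenge-page heuristic are illustrative assumptions, not WebPeel's actual internals:

```typescript
// Signals observed after a fetch attempt. A blocked status (403/503)
// or a Cloudflare challenge page means the current method failed
// and we should escalate to the next, more expensive one.
interface FetchSignals {
  status: number;
  body: string;
}

type Method = 'simple' | 'browser' | 'stealth';

function nextMethod(current: Method, signals: FetchSignals): Method | null {
  const blocked =
    signals.status === 403 ||
    signals.status === 503 ||
    /cf-challenge|just a moment/i.test(signals.body); // Cloudflare interstitial
  if (!blocked) return null; // current method succeeded; stop escalating
  if (current === 'simple') return 'browser';  // ~200ms path failed, try ~2s path
  if (current === 'browser') return 'stealth'; // JS rendering failed, try anti-detect
  return null; // stealth already failed; surface the error
}

console.log(nextMethod('simple', { status: 403, body: '' }));              // 'browser'
console.log(nextMethod('simple', { status: 200, body: '<html>ok</html>' })); // null
console.log(nextMethod('browser', { status: 503, body: '' }));             // 'stealth'
```

The key property is that escalation is driven by observed failure, so the ~80% of sites that work over plain HTTP never pay the browser's startup cost.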
API Reference
peel(url, options?)
Fetch and extract content from a URL.
interface PeelOptions {
render?: boolean; // Force browser mode (default: false)
wait?: number; // Wait time after page load in ms (default: 0)
format?: 'markdown' | 'text' | 'html'; // Output format (default: 'markdown')
timeout?: number; // Request timeout in ms (default: 30000)
userAgent?: string; // Custom user agent
}
interface PeelResult {
url: string; // Final URL (after redirects)
title: string; // Page title
content: string; // Page content in requested format
metadata: { // Extracted metadata
description?: string;
author?: string;
published?: string; // ISO 8601 date
image?: string; // Open Graph image
canonical?: string;
};
links: string[]; // All links on page (absolute URLs)
tokens: number; // Estimated token count
method: 'simple' | 'browser'; // Method used
elapsed: number; // Time taken (ms)
}

Error Types
import { TimeoutError, BlockedError, NetworkError } from 'webpeel';
try {
const result = await peel('https://example.com');
} catch (error) {
if (error instanceof TimeoutError) {
// Request timed out
} else if (error instanceof BlockedError) {
// Site blocked the request (403, Cloudflare, etc.)
} else if (error instanceof NetworkError) {
// Network/DNS error
}
}

cleanup()
Clean up browser resources. Call this when you're done using WebPeel in your application:
import { peel, cleanup } from 'webpeel';
// ... use peel() ...
await cleanup(); // Close browser instances

Hosted API
Live at https://api.webpeel.dev — authentication required after first 25 fetches.
# Register and get your API key
curl -X POST https://api.webpeel.dev/v1/auth/register \
-H "Content-Type: application/json" \
-d '{"email":"[email protected]","password":"your-password"}'
# Fetch a page
curl "https://api.webpeel.dev/v1/fetch?url=https://example.com" \
  -H "Authorization: Bearer wp_live_your_api_key"

Pricing — Weekly Reset Model
Usage resets every Monday at 00:00 UTC, just like Claude Code.
| Plan | Price | Weekly Fetches | Burst Limit | All Features | Extra Usage |
|------|------:|---------------:|:-----------:|:------------:|:-----------:|
| Free | $0 | 125/wk (~500/mo) | 25/hr | ✅ | ❌ |
| Pro | $9/mo | 1,250/wk (~5K/mo) | 100/hr | ✅ | ✅ |
| Max | $29/mo | 6,250/wk (~25K/mo) | 500/hr | ✅ | ✅ |
Three layers of usage control:
- Burst limit — Per-hour cap (25/hr free, 100/hr Pro, 500/hr Max) prevents hammering
- Weekly limit — Main usage gate, resets every Monday
- Extra usage — When you hit your weekly limit, keep fetching at pay-as-you-go rates
Extra usage rates (Pro/Max only):

| Fetch Type | Cost |
|-----------|------|
| Basic (HTTP) | $0.002 |
| Stealth (browser) | $0.01 |
| Search | $0.001 |
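The three layers can be checked in order. The function and types below are a sketch of the model described above using the published plan numbers, not the hosted API's actual enforcement code:

```typescript
// Per the plans above: burst cap (per hour) is checked first,
// then the weekly credit budget, then the optional Pro/Max
// pay-as-you-go extra usage toggle.
interface Usage {
  hourly: number;        // fetches in the current hour
  weekly: number;        // credits used since Monday 00:00 UTC
  extraEnabled: boolean; // Pro/Max pay-as-you-go toggle
}

interface Plan { burstPerHour: number; weeklyCredits: number }

const FREE: Plan = { burstPerHour: 25, weeklyCredits: 125 };
const PRO: Plan  = { burstPerHour: 100, weeklyCredits: 1250 };

function canFetch(plan: Plan, usage: Usage): 'ok' | 'extra' | 'denied' {
  if (usage.hourly >= plan.burstPerHour) return 'denied'; // burst cap hit
  if (usage.weekly < plan.weeklyCredits) return 'ok';     // within weekly budget
  return usage.extraEnabled ? 'extra' : 'denied';         // over budget: pay-as-you-go?
}

console.log(canFetch(FREE, { hourly: 10, weekly: 200, extraEnabled: false })); // 'denied'
console.log(canFetch(PRO,  { hourly: 10, weekly: 2000, extraEnabled: true })); // 'extra'
console.log(canFetch(PRO,  { hourly: 10, weekly: 100, extraEnabled: false })); // 'ok'
```

Note the ordering: the burst cap applies even when extra usage is enabled, which is what prevents a runaway script from hammering the API.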
Why WebPeel Beats Firecrawl
| Feature | WebPeel Free | WebPeel Pro | Firecrawl Hobby |
|---------|:------------:|:-----------:|:---------------:|
| Price | $0 | $9/mo | $16/mo |
| Weekly Fetches | 125/wk | 1,250/wk | ~750/wk |
| Rollover | ❌ | ✅ 1 week | ❌ Expire monthly |
| Soft Limits | ✅ Degrades | ✅ Never locked out | ❌ Hard cut-off |
| Extra Usage | ❌ | ✅ Pay-as-you-go | ❌ Upgrade only |
| Self-Host | ✅ MIT | ✅ MIT | ❌ AGPL |
Key differentiators:
- Like Claude Code — Generous free tier (125/week), pay when you need more
- Weekly resets — Your usage refreshes every Monday, not once a month
- Soft limits on every tier — At 100%, we degrade gracefully instead of blocking you
- Extra usage — Pro/Max users can toggle on pay-as-you-go with spending caps (no surprise bills)
- First 25 free — Try it instantly, no signup required
- Open source — MIT licensed, self-host if you want full control
See pricing at webpeel.dev
Examples
Extract blog post metadata
const result = await peel('https://example.com/blog/post');
console.log(result.metadata);
// {
// title: "How We Built WebPeel",
// description: "A deep dive into smart escalation...",
// author: "Jake Liu",
// published: "2026-02-12T18:00:00Z",
// image: "https://example.com/og-image.png"
// }

Get all links from a page
const result = await peel('https://news.ycombinator.com');
console.log(result.links.slice(0, 5));
// [
// "https://news.ycombinator.com/newest",
// "https://news.ycombinator.com/submit",
// "https://github.com/example/repo",
// ...
// ]

Force browser rendering for JavaScript-heavy sites
// Twitter/X requires JavaScript
const result = await peel('https://x.com/elonmusk', {
render: true,
wait: 2000, // Wait for tweets to load
});
console.log(result.content); // Rendered tweet content

Token counting for LLM usage
const result = await peel('https://example.com/long-article');
console.log(`Content is ~${result.tokens} tokens`);
// Content is ~3,247 tokens
if (result.tokens > 4000) {
console.log('Too long for GPT-3.5 context window');
}

Use Cases
- AI Agents: Feed web content to Claude, GPT, or local LLMs
- Research: Bulk extract articles, docs, or social media
- Monitoring: Track content changes on websites
- Archiving: Save web pages as clean markdown
- Data Pipelines: Extract structured data from web sources
Development
# Clone the repo
git clone https://github.com/JakeLiuMe/webpeel.git
cd webpeel
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
# Watch mode (auto-rebuild)
npm run dev
# Test the CLI locally
node dist/cli.js https://example.com
# Test the MCP server
npm run mcp

See CONTRIBUTING.md for guidelines.
Roadmap
- [x] CLI with smart escalation
- [x] TypeScript library
- [x] MCP server for Claude/Cursor/VS Code
- [x] Hosted API with authentication and usage tracking
- [x] Rate limiting and caching
- [x] Batch processing API (`batch <file>`)
- [x] Screenshot capture (`--screenshot`)
- [x] CSS selector filtering (`--selector`, `--exclude`)
- [x] DuckDuckGo search (`search <query>`)
- [x] Custom headers and cookies
- [x] Weekly reset usage model with extra usage
- [x] Stealth mode (playwright-extra + anti-detect)
- [x] Crawl mode (follow links, respect robots.txt)
- [x] PDF extraction (v0.4.0)
- [x] Structured data extraction with CSS selectors and JSON schema (v0.4.0)
- [x] Page actions: click, scroll, type, fill, select, press, hover (v0.4.0)
- [x] Map/sitemap discovery for full site URL mapping (v0.4.0)
- [x] Token budget for output truncation (v0.4.0)
- [x] Advanced crawl: sitemap-first, BFS/DFS, content deduplication (v0.4.0)
- [ ] Webhook notifications for monitoring
Vote on features and roadmap at GitHub Discussions.
FAQ
Q: How is this different from Firecrawl?
A: WebPeel has a more generous free tier (125/week vs Firecrawl's 500 one-time credits) and uses weekly resets like Claude Code. We also have smart escalation to avoid burning resources on simple pages.
Q: Can I self-host the API server?
A: Yes! Run npm run serve to start the API server. See docs/self-hosting.md (coming soon).
Q: Does this violate websites' Terms of Service?
A: WebPeel is a tool — how you use it is up to you. Always check a site's ToS before fetching at scale. We recommend respecting robots.txt in your own workflows.
Q: What about Cloudflare and bot protection?
A: WebPeel handles most Cloudflare challenges automatically via stealth mode (available on all plans). For heavily protected sites, stealth mode uses browser fingerprint randomization to bypass detection.
Q: Can I use this in production?
A: Yes! The hosted API at https://api.webpeel.dev is production-ready with authentication, rate limiting, and usage tracking.
Credits
Built with:
- Playwright — Headless browser automation
- Cheerio — Fast HTML parsing
- Turndown — HTML to Markdown conversion
- Commander — CLI framework
Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
License
MIT © Jake Liu
Like WebPeel? ⭐ Star us on GitHub — it helps others discover the project!
