opencode-crawl4ai

v0.1.0

OpenCode plugin for unrestricted web access via crawl4ai — fetch, search, extract, screenshot, crawl, map

opencode-crawl4ai

OpenCode plugin that gives AI agents unrestricted web access via crawl4ai.

Fetch URLs, search the web, extract structured data, take screenshots, deep crawl sites, and discover URLs — all from inside OpenCode.

Features

  • Fetch — Retrieve any URL as clean markdown or raw HTML, with stealth mode and JS execution
  • Search — Web search via SearXNG (primary) or DuckDuckGo (fallback, no setup needed)
  • Extract — Structured data extraction using CSS selectors
  • Screenshot — Capture full-page screenshots as base64
  • Crawl — Deep crawl websites with BFS/DFS strategies
  • Map — Discover all URLs on a site

Requirements

  • OpenCode
  • Python 3.10+ with uvx (pip install uv)
  • Docker (optional, for faster search via SearXNG)

Installation

bunx opencode-crawl4ai --install

Or with npx:

npx opencode-crawl4ai --install

Copies the plugin to ~/.config/opencode/plugins/. Restart OpenCode to activate.

To install globally:

npm install -g opencode-crawl4ai
opencode-crawl4ai --install

Optional: faster search with SearXNG

opencode-crawl4ai searxng          # starts SearXNG on port 8888
export SEARXNG_URL=http://localhost:8888

SearXNG aggregates Google, Bing, DuckDuckGo, and more. Without it, the plugin falls back to DuckDuckGo directly.
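The backend selection comes down to a simple environment check. A minimal sketch of that logic (illustrative only, not the plugin's actual code):

```typescript
// Pick the search backend as described above: use the SearXNG instance when
// SEARXNG_URL is set, otherwise fall back to DuckDuckGo directly.
export function searchBackend(env: Record<string, string | undefined>): string {
  return env.SEARXNG_URL ? `searxng @ ${env.SEARXNG_URL}` : "duckduckgo";
}
```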

CLI Commands

opencode-crawl4ai --install       Copy plugin to ~/.config/opencode/plugins/
opencode-crawl4ai --uninstall     Remove plugin
opencode-crawl4ai searxng [port]  Start SearXNG Docker container (default: 8888)
opencode-crawl4ai searxng-stop    Stop SearXNG container
opencode-crawl4ai --help          Show help

Available Tools

Once installed, these tools are available to the AI in OpenCode:

crawl4ai_fetch

Fetch a URL and return its content as markdown (default) or HTML.

crawl4ai_fetch({ url: "https://docs.example.com" })
crawl4ai_fetch({ url: "https://example.com", format: "html" })
crawl4ai_fetch({ url: "https://spa.example.com", wait_for: ".content-loaded" })
crawl4ai_fetch({ url: "https://example.com", js_code: "document.querySelector('.show-more').click()" })

crawl4ai_search

Search the web and return results with URL, title, and snippet.

crawl4ai_search({ query: "React hooks tutorial" })
crawl4ai_search({ query: "Python asyncio", limit: 5 })

crawl4ai_extract

Extract structured data from a URL using CSS selectors.

crawl4ai_extract({
  url: "https://example.com/product",
  schema: { title: "h1.product-name", price: ".price" }
})

crawl4ai_screenshot

Take a screenshot of a web page. Returns base64-encoded image data URL.

crawl4ai_screenshot({ url: "https://example.com" })
crawl4ai_screenshot({ url: "https://example.com", width: 1920, height: 1080 })
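To save the result to disk, strip the data-URL prefix and base64-decode the payload. A minimal Node/TypeScript helper (the exact "data:image/png;base64," prefix is an assumption about the tool's return shape):

```typescript
import { writeFileSync } from "node:fs";

// Decode a base64 data URL (e.g. "data:image/png;base64,....") into raw bytes.
export function dataUrlToBuffer(dataUrl: string): Buffer {
  const comma = dataUrl.indexOf(",");
  if (comma === -1) throw new Error("not a data URL");
  return Buffer.from(dataUrl.slice(comma + 1), "base64");
}

// Usage: writeFileSync("screenshot.png", dataUrlToBuffer(result));
```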

crawl4ai_crawl

Deep crawl a website starting from a URL, following links up to max_pages and max_depth.

crawl4ai_crawl({ url: "https://docs.example.com", max_pages: 20 })
crawl4ai_crawl({ url: "https://example.com", strategy: "bfs", max_depth: 2 })

crawl4ai_map

Discover all URLs on a website.

crawl4ai_map({ url: "https://example.com" })
crawl4ai_map({ url: "https://example.com", search: "pricing" })

crawl4ai_version

Get the installed crawl4ai version.

crawl4ai_debug

Debug the plugin and bridge connection.

Environment Variables

| Variable    | Description               | Default                          |
|-------------|---------------------------|----------------------------------|
| SEARXNG_URL | URL of a SearXNG instance | Unset (falls back to DuckDuckGo) |

How It Works

The plugin's TypeScript layer spawns a Python bridge (uvx --with crawl4ai --with ddgs python bridge.py) on each tool call. No persistent Python process is required.
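The per-call pattern can be sketched as follows. This is illustrative only, not the plugin's actual source: the request shape and the idea of passing JSON over stdin/stdout are assumptions; the uvx command line is the one quoted above.

```typescript
import { spawn } from "node:child_process";

// Spawn a short-lived bridge process per tool call: write a JSON request to
// its stdin, collect its stdout, and resolve when the process exits cleanly.
function callBridge(
  request: object,
  cmd = "uvx",
  args = ["--with", "crawl4ai", "--with", "ddgs", "python", "bridge.py"],
): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn(cmd, args);
    let out = "";
    let err = "";
    proc.stdout.on("data", (chunk) => (out += chunk));
    proc.stderr.on("data", (chunk) => (err += chunk));
    proc.on("error", reject);
    proc.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`bridge exited ${code}: ${err}`)),
    );
    proc.stdin.write(JSON.stringify(request));
    proc.stdin.end();
  });
}
```

Spawning per call trades some startup latency for simplicity: there is no long-running process to supervise, and uvx caches the Python environment after the first run.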

License

MIT