@apify/apify-openclaw-plugin

v0.1.0

Web scraping and AI-powered data extraction via Apify for OpenClaw — market research, competitor intelligence, trend analysis, lead generation, e-commerce, social media analytics, and more.

Apify Plugin for OpenClaw

Universal web scraping and data extraction via Apify — 57+ Actors across Instagram, Facebook, TikTok, YouTube, Google Maps, Google Search, e-commerce, and more.

Install

openclaw plugins install @apify/apify-openclaw-plugin

Restart the Gateway after installation.

How it works

The plugin registers a single tool — apify — with three actions:

| Action | Purpose |
|--------|---------|
| discover + query | Search the Apify Store for Actors by keyword |
| discover + actorId | Fetch an Actor's input schema and README |
| start + actorId + input | Run any Apify Actor; returns runId / datasetId |
| collect + runs | Poll status and return results for completed runs |

The tool uses a two-phase async pattern: start fires off a run and returns immediately. collect fetches results when the run completes. The agent does other work in between.
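The polling side of this pattern can be sketched as a small loop. This is an illustrative helper, not part of the plugin; it assumes the apify tool function and the { completed, pending } return shape shown in the collect example later in this README:

```javascript
// Hypothetical polling helper around the start/collect pattern.
// `apify` is the plugin's tool function (assumed); re-collect any
// still-pending runs until everything completes or attempts run out.
async function collectAll(apify, runs, { attempts = 10, delayMs = 2000 } = {}) {
  const completed = [];
  let pending = runs;
  for (let i = 0; i < attempts && pending.length > 0; i++) {
    if (i > 0) await new Promise((resolve) => setTimeout(resolve, delayMs));
    const result = await apify({ action: "collect", runs: pending });
    completed.push(...result.completed);
    pending = result.pending;
  }
  return { completed, pending };
}
```

In practice an agent would do unrelated work between collect calls instead of sleeping.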

Get an API key

  1. Create an Apify account at https://console.apify.com/
  2. Generate an API token in Account Settings → Integrations.
  3. Store it in plugin config or set the APIFY_API_KEY environment variable.
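The config-or-environment lookup in step 3 amounts to a simple fallback chain. The helper below is a hypothetical sketch of that behavior, not the plugin's actual code:

```javascript
// Hypothetical sketch of API key resolution: plugin config wins,
// then the APIFY_API_KEY environment variable, then null.
function resolveApiKey(config = {}) {
  return config.apiKey ?? process.env.APIFY_API_KEY ?? null;
}
```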

Configure

{
  plugins: {
    entries: {
      "apify": {
        config: {
          apiKey: "apify_api_...",     // optional if APIFY_API_KEY env var is set
          baseUrl: "https://api.apify.com",
          maxResults: 20,
          enabledTools: [],           // empty = all tools enabled
        },
      },
    },
  },
  // Make the tool available to agents:
  tools: {
    alsoAllow: ["apify"],   // or "group:plugins"
  },
}

Or use the interactive setup wizard:

openclaw apify setup

apify

Workflow

discover (search) → discover (schema) → start → collect
  1. Search — Find Actors: { action: "discover", query: "amazon price scraper" }
  2. Schema — Get input params: { action: "discover", actorId: "apify~google-search-scraper" }
  3. Start — Run the Actor: { action: "start", actorId: "apify~google-search-scraper", input: { queries: ["OpenAI"] } }
  4. Collect — Get results: { action: "collect", runs: [{ runId: "...", actorId: "...", datasetId: "..." }] }

Actor ID format

Actor IDs use the username~actor-name format (tilde separator, not slash).
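If an agent produces the slash form out of habit, a one-line normalizer (hypothetical, shown only to illustrate the format) converts it:

```javascript
// Hypothetical helper: convert username/actor-name to username~actor-name.
// IDs already in tilde form pass through unchanged.
function normalizeActorId(id) {
  return id.replace("/", "~");
}
```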

Known Actors

The tool description includes 57+ known Actors across these categories:

  • Instagram — profiles, posts, comments, hashtags, reels, search, followers, tagged posts
  • Facebook — pages, posts, comments, likes, reviews, groups, events, ads, reels, photos, marketplace
  • TikTok — search, profiles, videos, comments, followers, hashtags, sounds, ads, trends, live
  • YouTube — search, channels, comments, shorts, video-by-hashtag
  • Google Maps — places, reviews, email extraction
  • Other — Google Search, Google Trends, Booking.com, TripAdvisor, contact info, e-commerce

Batching

Most Actors accept arrays of URLs/queries in their input (e.g., startUrls, queries). Always batch multiple targets into a single run — one run with 5 URLs is cheaper and faster than 5 separate runs.
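As a sketch, assuming an Actor whose input takes a startUrls array, folding several targets into one run looks like this (buildBatchedInput is a hypothetical helper, not part of the plugin):

```javascript
// Hypothetical helper: fold many target URLs into a single Actor input
// so one run covers all of them instead of one run per URL.
function buildBatchedInput(urls) {
  return { startUrls: urls.map((url) => ({ url })) };
}
```

The result would then be passed as the input of a single start call.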

Examples

// 1. Search the Apify Store
const search = await apify({
  action: "discover",
  query: "linkedin company scraper",
});

// 2. Get an Actor's input schema
const schema = await apify({
  action: "discover",
  actorId: "compass~crawler-google-places",
});

// 3. Start a Google Search scrape
const started = await apify({
  action: "start",
  actorId: "apify~google-search-scraper",
  input: { queries: ["OpenAI", "Anthropic"], maxPagesPerQuery: 1 },
  label: "search",
});
// -> { runs: [{ runId, actorId, datasetId, status }] }

// 4. Collect results
const results = await apify({
  action: "collect",
  runs: started.runs,
});
// -> { completed: [...], pending: [...] }

// Instagram profile scraping
await apify({
  action: "start",
  actorId: "apify~instagram-profile-scraper",
  input: { usernames: ["natgeo", "nasa"] },
});

// TikTok search
await apify({
  action: "start",
  actorId: "clockworks~tiktok-scraper",
  input: { searchQueries: ["AI tools"], resultsPerPage: 20 },
});

Sub-agent delegation

The tool description instructs agents to delegate apify calls to a sub-agent. The sub-agent handles the full discover → start → collect workflow and returns only the relevant extracted data — not raw API responses or run metadata.

Security

  • API keys are resolved from plugin config or APIFY_API_KEY env var — never logged or included in output.
  • Base URL validation — only https://api.apify.com prefix is allowed (SSRF prevention).
  • External content wrapping — all scraped results are wrapped with untrusted content markers.
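The wrapping idea can be sketched as follows; the exact marker text here is an assumption for illustration, not the plugin's actual format:

```javascript
// Hypothetical sketch of untrusted-content wrapping: scraped text is
// fenced with markers so the agent treats it as data, not instructions.
function wrapUntrusted(text) {
  return [
    "<<<UNTRUSTED_CONTENT>>>",
    text,
    "<<<END_UNTRUSTED_CONTENT>>>",
  ].join("\n");
}
```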

Development

# Install dependencies
npm install

# Type check
npx tsc --noEmit

# Run tests
npx vitest run

# Pack (dry run)
npm pack --dry-run

Support

For issues with this integration, contact [email protected].

License

MIT