crawl-search-results v0.0.5
Crawl Search Results

A CLI tool to download and save Google search results using WebCrawlerAPI.
It fetches the full content of the top 10 search results and saves everything as a single markdown file.
A WebCrawlerAPI key is required.
Example of the output markdown
Requirements
- Node.js 18 or higher
- WebCrawlerAPI account (free trial available)
How It Works
- Takes your search query as input
- Constructs a Google search URL
- Uses WebCrawlerAPI to crawl the search page and follow up to 10 result links
- Downloads full markdown content from each linked website
- Saves all results in a single organized markdown file
Installation
Quick Start with bunx (No Installation)
bunx crawl-search-results "your search query"
Global Installation
npm install -g crawl-search-results
# or
pnpm add -g crawl-search-results
Usage
First Run
On your first run, you'll be prompted to enter your WebCrawlerAPI key:
$ crawl-search-results "webscraping guide"
╔════════════════════════════════════════════════════════════╗
║ Google Search Results Downloader (WebCrawlerAPI) ║
╚════════════════════════════════════════════════════════════╝
⚠️ No API key found.
Get your free API key at:
🔗 https://app.webcrawlerapi.com/dashboard
Enter your API key: [paste your key here]
✓ API key saved!
⠋ Crawling Google search results for "webscraping guide"...
This may take a few moments as we crawl up to 10 search results.
✓ Successfully crawled 10 results!
📄 Saved to: .webcrawlerapi/2025-01-05-14-30-45-webscraping-guide.md
Subsequent Runs
After the first run, the tool will use your saved API key:
$ crawl-search-results "best practices"
╔════════════════════════════════════════════════════════════╗
║ Google Search Results Downloader (WebCrawlerAPI) ║
╚════════════════════════════════════════════════════════════╝
⠋ Crawling Google search results for "best practices"...
This may take a few moments as we crawl up to 10 search results.
✓ Successfully crawled 10 results!
📄 Saved to: .webcrawlerapi/2025-01-05-14-35-22-best-practices.md
Output Format
Results are saved in the .webcrawlerapi/ directory with the following naming format:
.webcrawlerapi/YYYY-MM-DD-HH-MM-SS-query.md
Example:
.webcrawlerapi/2025-01-05-14-30-45-webscraping-guide.md
Each markdown file contains:
- Metadata (crawl timestamp, job ID, number of items)
- Full content from each of the 10 search results
- URL, status code, and depth information for each result
API Key
You need a WebCrawlerAPI key to use this tool. Get your free API key at:
https://app.webcrawlerapi.com/dashboard
Your API key is stored locally in a config.json file.
Examples
# Search for webscraping guides
crawl-search-results "webscraping guide"
# Search for best practices
crawl-search-results "best practices"
# Search with multiple words
crawl-search-results "how to build a REST API"
# Use with bunx (no installation)
bunx crawl-search-results "TypeScript tips"
Troubleshooting
Invalid API Key
❌ Error: Invalid API key. Please check your key at:
🔗 https://dash.webcrawlerapi.com/access
Solution: Verify your API key at the dashboard and run the command again.
Network Connection Issues
❌ Error: Failed to connect to WebCrawlerAPI.
Please check your internet connection.
Solution: Check your internet connection and try again.
License
MIT
Support
For issues and questions:
- WebCrawlerAPI Documentation: https://webcrawlerapi.com/docs
- WebCrawlerAPI Dashboard: https://app.webcrawlerapi.com/dashboard
