
@soshilabs/research

v1.1.1

Published

AI-powered research CLI: web search, scraping, briefs, academic search, SEO audits, and more. Powered by Google Gemini and Firecrawl.

Downloads

318


Research Client

AI-powered research CLI and web UI for market research, competitive intelligence, web scraping, SEO audits, academic search, and automated briefing generation.

Powered by Google Gemini 2.5 Pro and Firecrawl.


Install

```
npm install -g research-client
```

Or run without installing:

```
npx research-client
```

Quick Start

```
# Initialize config and set up your Google AI API key
research init

# Start an interactive AI chat
research chat

# Search the web
research scrape search "agent harness architecture"

# Scrape a URL to markdown
research scrape url https://example.com

# Ask a research question
research research-query ask "What is the current state of AI regulation?"

# Launch the web UI
research ui -p 3000
```

Commands

Setup & Auth

| Command | Description |
|---------|-------------|
| research init | Initialize config directories and API key |
| research login | Authenticate with Google AI (browser or paste key) |
| research config | View current configuration |
| research config get <key> | Get a specific config value |
| research ui -p <port> | Launch the Next.js web UI |

Interactive Chat

| Command | Description |
|---------|-------------|
| research chat | Start an AI chat session in terminal |
| research chat -m <model> | Use a specific Gemini model |
| research chat -s <prompt> | Use a custom system prompt |

Briefs (Research Reports)

Briefs are structured research reports that can be scheduled to auto-refresh.

| Command | Description |
|---------|-------------|
| research briefs list | List all briefs |
| research briefs show <id> | Show full brief details |
| research briefs create <title> | Create a new brief |
| research briefs update <id> | Update title, topic, summary, or status |
| research briefs delete <id> | Delete a brief |
| research briefs settings <id> | View/update brief settings |
| research briefs schedule <id> | View/set schedule (hourly/daily/weekly) |
| research briefs add-source <briefId> <sourceId> | Add a source with weight |
| research briefs remove-source <briefId> <sourceId> | Remove a source |
| research briefs sources <briefId> | List sources with weights |
| research briefs set-weight <briefId> <sourceId> <weight> | Set source weight (0-5) |
| research briefs add-peer-repo <briefId> <name> <url> | Add academic peer review repo |
| research briefs remove-peer-repo <briefId> <repoId> | Remove peer review repo |
| research briefs peer-repos <briefId> | List peer review repos |
| research briefs export <id> | Export brief as markdown |
| research briefs export <id> -o file.md | Export to file |
| research briefs runs <briefId> | List scheduled runs |
| research briefs run-show <runId> | Show full run content |

Sources

| Command | Description |
|---------|-------------|
| research sources list | List all sources |
| research sources add <url> | Add a source URL |
| research sources remove <id> | Remove a source |
| research sources scrape <id> | Scrape a source via Firecrawl |

Spaces

| Command | Description |
|---------|-------------|
| research spaces list | List research spaces |
| research spaces create <name> -f <areas...> | Create with focus areas |
| research spaces remove <id> | Remove a space |

Trends

| Command | Description |
|---------|-------------|
| research trends list | List trend monitors |
| research trends add <keyword> | Add a trend monitor |
| research trends remove <id> | Remove a trend |

Web Scraping

| Command | Description |
|---------|-------------|
| research scrape url <url> | Scrape a URL to markdown |
| research scrape url <url> --json | Output raw JSON |
| research scrape search <query> | Search the web via Firecrawl |
| research scrape search <query> --scrape | Search + scrape top results |
| research scrape crawl <url> -l <limit> | Crawl a domain |

AI Research Queries

| Command | Description |
|---------|-------------|
| research research-query ask <question> | AI-powered research question |
| research research-query company <name> | Company analysis |
| research research-query market <topic> | Market analysis |

SEO Audits

| Command | Description |
|---------|-------------|
| research seo audit <url> | AI-powered SEO audit |
| research seo lighthouse <url> | Full Lighthouse audit (local Chrome) |
| research seo robots <url> | Check robots.txt |
| research seo sitemap <url> | Parse sitemap.xml |

Academic Search

| Command | Description |
|---------|-------------|
| research academic search <query> | Search Semantic Scholar, CrossRef, PubMed |
| research academic search <query> --source pubmed | Search single database |
| research academic paper <doi> | Get paper details by DOI |

Scheduling

| Command | Description |
|---------|-------------|
| research cron run | Manually trigger brief scheduler |
| research cron status | Check scheduled brief status |
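The cron commands only run when invoked, so unattended refreshes need an external timer. One option is a system crontab entry that triggers the scheduler hourly; this is a sketch, not part of the package — it assumes research is on cron's PATH, and the log path is an arbitrary choice:

```
0 * * * * research cron run >> ~/.research/cron.log 2>&1
```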

API Keys

| Key | Required | Purpose |
|-----|----------|---------|
| GOOGLE_GENERATIVE_AI_API_KEY | Yes | Powers AI chat, research queries, and brief generation |
| FIRECRAWL_API_KEY | Recommended | Web search, scraping, and crawling |

Get your keys:

Keys are stored in ~/.research/.env.
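Assuming the standard dotenv layout, a minimal ~/.research/.env might look like this (the values are placeholders, not real keys):

```
GOOGLE_GENERATIVE_AI_API_KEY=your-google-ai-key
FIRECRAWL_API_KEY=your-firecrawl-key
```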

Web UI

The research client includes a full Next.js web UI with:

  • AI chat with 40+ tools (scraping, research, SEO, academic, briefs)
  • Brief management with scheduled auto-refresh
  • Source library and trend monitoring
  • SEO audit dashboard
  • Academic peer review matching

```
research ui -p 3000
```

Data Storage

All data is stored locally in ~/.research/data/ as JSON files:

  • briefs.json — Research briefs
  • brief-runs.json — Scheduled brief outputs
  • sources.json — Source URLs and excerpts
  • spaces.json — Research workspaces
  • trends.json — Trend monitors
  • chat.json — Chat history
  • settings.json — Preferences
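Because the store is plain JSON, any standard tooling can inspect it. A minimal sketch, run against a synthetic file rather than a real ~/.research/data/ directory — the array-of-objects shape and the title/status fields are assumptions about the schema, not documented guarantees:

```shell
# Create a synthetic briefs.json in a scratch directory
demo=$(mktemp -d)
cat > "$demo/briefs.json" <<'EOF'
[
  {"id": "b1", "title": "AI regulation", "status": "active"},
  {"id": "b2", "title": "Agent harnesses", "status": "paused"}
]
EOF

# Print one brief title per line (python3 avoids a jq dependency)
python3 -c 'import json, sys
for brief in json.load(open(sys.argv[1])):
    print(brief["title"])' "$demo/briefs.json"
```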

Tech Stack

  • CLI: Commander.js, Chalk, Inquirer
  • AI: Google Gemini 2.5 Pro via @ai-sdk/google
  • Scraping: Firecrawl API
  • Academic: Semantic Scholar, CrossRef, PubMed
  • SEO: Google PageSpeed Insights, Lighthouse
  • Web UI: Next.js 15, React 19, Tailwind CSS, Vercel AI SDK

License

MIT