
research-copilot v0.2.0 (568 downloads)

Research Copilot

An AI-powered desktop research assistant for scientists and academics. Literature search, data analysis, academic writing, and project management — all in one place.

Built on pi-mono (agent runtime) + Electron + React.

Main Interface


API Keys Setup (READ THIS FIRST)

Research Copilot requires API keys to function. The easiest way is to enter them directly in the app — on first launch you'll see a setup screen. Keys are saved to ~/.research-copilot/config.json.
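The saved file might look roughly like this (an illustrative sketch; the exact shape of config.json in your installed version may differ):

```json
{
  "OPENAI_API_KEY": "sk-...",
  "ANTHROPIC_API_KEY": "sk-ant-...",
  "BRAVE_API_KEY": "BSA...",
  "OPENROUTER_API_KEY": "sk-or-..."
}
```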

Alternatively, add them to your shell profile (~/.zshrc, ~/.bashrc, etc.):

# ===== REQUIRED (at least one) =====
export OPENAI_API_KEY="sk-..."           # For OpenAI models (GPT-4o, GPT-5, o3, etc.)
export ANTHROPIC_API_KEY="sk-ant-..."    # For Anthropic models (Claude Sonnet, Opus, etc.)

# ===== RECOMMENDED =====
export BRAVE_API_KEY="BSA..."            # For web search (https://brave.com/search/api/)
export OPENROUTER_API_KEY="sk-or-..."    # For AI-generated scientific diagrams (https://openrouter.ai/)

Then reload your shell: source ~/.zshrc

What happens without each key?

| Key | Required? | What it powers | Without it |
|-----|-----------|----------------|------------|
| OPENAI_API_KEY | Yes (if using OpenAI models) | Core AI agent — all chat, coding, writing, analysis | App cannot start the agent. You'll see an error dialog on first message. |
| ANTHROPIC_API_KEY | Yes (if using Anthropic models) | Core AI agent (same as above, for Claude models) | Same — agent won't initialize for Claude models. |
| BRAVE_API_KEY | Recommended | web_search tool — general web search via Brave Search | Graceful fallback: web search automatically degrades to arXiv-only (academic papers). No general web results. |
| OPENROUTER_API_KEY | Optional | scientific-schematics skill — AI-generated diagrams, flowcharts, graphical abstracts | The schematics skill fails when invoked. All other skills (writing, visualization, data analysis) work fine. |

Minimum viable setup: You need at least one of OPENAI_API_KEY or ANTHROPIC_API_KEY to use the app. Everything else enhances the experience but is not strictly required.
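The "at least one core key" rule can be sketched as a small startup check. This is a hypothetical helper for illustration, not the app's actual code:

```typescript
// Sketch of the startup check: the app needs at least one core-model key;
// BRAVE_API_KEY and OPENROUTER_API_KEY only unlock optional features.
type Env = Record<string, string | undefined>;

function selectProvider(env: Env): "openai" | "anthropic" {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  throw new Error(
    "Set OPENAI_API_KEY or ANTHROPIC_API_KEY before launching Research Copilot."
  );
}
```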

Semantic Scholar, arXiv, OpenAlex, DBLP: These academic APIs are used for literature search and do not require API keys. They work out of the box.
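To illustrate why these sources need no keys: a query against arXiv's public Atom export API, for example, is just a URL. A minimal sketch (arxivQueryUrl is a hypothetical helper name):

```typescript
// Build a query URL for arXiv's public Atom export API -- no API key needed.
// search_query, start, and max_results are arXiv's documented parameters.
function arxivQueryUrl(query: string, maxResults = 10): string {
  const params = new URLSearchParams({
    search_query: `all:${query}`,
    start: "0",
    max_results: String(maxResults),
  });
  return `http://export.arxiv.org/api/query?${params}`;
}
```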


How is Research Copilot different from Claude Cowork?

Claude Cowork is Anthropic's general-purpose autonomous agent for knowledge workers — it handles file organization, document drafting, and data extraction across everyday desktop tasks.

Research Copilot is a vertical tool built specifically for academic research. The two differ in depth, not surface:

| | Claude Cowork | Research Copilot |
|---|---|---|
| Scope | Horizontal — any knowledge work | Vertical — academic research lifecycle |
| Literature | No academic search | Multi-source search (Semantic Scholar, arXiv, OpenAlex, DBLP) with relevance scoring, coverage tracking, and citation tracing |
| Paper management | Processes files you already have | Structured artifact system with DOI, bibtex, citeKey, citation counts, and relevance metadata |
| Academic writing | Generic document drafting | Venue-specific templates (NeurIPS, ICML, journals), IMRAD structure, LaTeX, citation verification (never hallucinated) |
| Grant writing | None | Agency-specific guidance (NSF, NIH, DOE, DARPA, NSTC) with compliance checklists |
| Data analysis | Extracts data from documents | LLM-generated Python scripts with statistical modeling, matplotlib/seaborn visualization, and output manifests |
| Domain skills | General capabilities | 13 pluggable research skills (scientific writing, visualization, scholar evaluation, etc.) — extensible via Markdown |
| Knowledge persistence | Not specified | Artifact store, session summaries, cross-session memory, @-mention references |
| Openness | Closed-source commercial product | Open source (MIT) — fully customizable |

In short: Claude Cowork is like a smart office assistant. Research Copilot is like a lab partner who knows how to search literature, run stats, write papers, and apply for grants.


Features

AI Chat with Coding & Writing Tools

Converse with an AI research assistant that can read, write, and edit files in your workspace. It generates LaTeX manuscripts, creates publication-quality figures, runs Python analysis scripts, and manages your project files — all through natural language.

Multi-Source Literature Search

Search across Semantic Scholar, arXiv, OpenAlex, and DBLP simultaneously. Papers are scored for relevance, deduplicated, and organized in a searchable table. Quick actions let you do deep searches, fill coverage gaps, or trace citation chains.
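One plausible shape for the deduplication step: normalize titles, prefer DOI as the identity key, and keep the copy with the higher relevance score. The names below are illustrative, not the app's internals:

```typescript
interface Paper {
  title: string;
  doi?: string;
  source: string;     // e.g. "arxiv", "openalex", "dblp"
  relevance: number;  // higher is better
}

// Merge results from several sources, collapsing duplicates.
// Identity key: DOI when present, otherwise a normalized title.
function dedupe(papers: Paper[]): Paper[] {
  const seen = new Map<string, Paper>();
  for (const p of papers) {
    const key = p.doi ?? p.title.toLowerCase().replace(/\s+/g, " ").trim();
    const prev = seen.get(key);
    if (!prev || p.relevance > prev.relevance) seen.set(key, p);
  }
  return [...seen.values()];
}
```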

Literature Management

Extensible Skills System

Skills are lazy-loaded knowledge modules that give the AI domain expertise. The app ships with 13 built-in skills covering academic writing (paper-writing, grant proposals, rewrite-humanize), visualization (matplotlib, scientific schematics), data analysis, and more. You can also add your own project-specific skills.

File Attachments in Chat

Attach files directly in the chat input via the paperclip button, drag & drop, or paste. Supported formats:

| Format | How it's processed |
|--------|--------------------|
| Images (PNG, JPEG, GIF, WebP) | Sent as vision content — the LLM sees the image visually |
| Text files (CSV, MD, TXT, JSON, XML, HTML) | Read directly and injected as text into the message |
| Documents (PDF, DOCX) | Converted to text via markitdown CLI (with pypdf fallback for PDF), then injected into the message |

Note: Document conversion requires markitdown (pip install markitdown[all]) or pypdf (pip install pypdf) for PDF/DOCX files. Text-based formats work out of the box with no extra dependencies.

Future plan: The underlying Anthropic API supports native PDF document blocks (preserving layout, tables, and embedded images). Once the pi-mono agent runtime adds DocumentContent support, PDF attachments will be upgraded to use native API handling instead of text extraction.
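The routing in the table above amounts to a dispatch on file extension. A minimal sketch with illustrative names (the real implementation lives inside the app):

```typescript
type Handling = "vision" | "inline-text" | "convert";

// Decide how an attachment is fed to the model, mirroring the formats table.
function classifyAttachment(filename: string): Handling {
  const ext = filename.slice(filename.lastIndexOf(".") + 1).toLowerCase();
  if (["png", "jpg", "jpeg", "gif", "webp"].includes(ext)) return "vision";
  if (["csv", "md", "txt", "json", "xml", "html"].includes(ext)) return "inline-text";
  if (["pdf", "docx"].includes(ext)) return "convert"; // markitdown, pypdf fallback
  throw new Error(`Unsupported attachment type: .${ext}`);
}
```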

More

  • Document conversion — PDF / DOCX / PPTX / XLSX → Markdown (via agent tools)
  • Python data analysis — LLM-generated analysis with matplotlib/seaborn visualization
  • Artifact management — notes, papers, data, web content with CRUD tools
  • @-mention system — reference entities inline in chat
  • Session continuity — automatic context compaction and session summaries
  • Integrated terminal — run commands without leaving the app
  • LLM providers — OpenAI and Anthropic models supported
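As a sketch of how @-mention references could be pulled out of a chat message before resolution (a deliberately minimal regex; the app's actual parser may be richer):

```typescript
// Extract @-mention tokens (e.g. "@paper-42") from a chat message.
function extractMentions(message: string): string[] {
  return [...message.matchAll(/@([\w-]+)/g)].map((m) => m[1]);
}
```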

Prerequisites

  • Node.js >= 18
  • npm >= 9
  • Python 3 (optional, for data analysis and figure generation)
  • macOS recommended (Linux/Windows: use the git clone method below, untested)

Getting Started

Option A: Install via npm (recommended)

npm install -g research-copilot
research-copilot

Option B: Clone from source

git clone https://github.com/daidong/PiPilot.git
cd PiPilot
npm install
npm run dev

API Keys

On first launch, the app will prompt you to enter your API keys directly in the UI. Keys are saved to ~/.research-copilot/config.json.

You can also set them as environment variables in your shell profile (~/.zshrc, ~/.bashrc):

export ANTHROPIC_API_KEY="sk-ant-..."   # or OPENAI_API_KEY="sk-..."

See API Keys Setup above for the full list.

Build for Production

npm run build

Project Structure

app/                  # Electron desktop application
├── src/main/         # Main process (IPC handlers, app lifecycle)
├── src/preload/      # Context bridge (renderer ↔ main)
└── src/renderer/     # React UI (components, Zustand stores)

lib/                  # Research agent logic (framework-independent)
├── agents/           # Coordinator agent + prompt registry
├── commands/         # Artifact CRUD, search, enrichment
├── mentions/         # @-mention parsing and resolution
├── memory-v2/        # Artifact storage and session summaries
├── skills/           # Skills system (loader + builtin skills)
└── tools/            # Research tools (web, literature, data, convert)

shared-electron/      # Reusable Electron IPC utilities
shared-ui/            # Shared React components and stores

Adding Custom Skills

Create a Markdown file at <your-workspace>/.pi/skills/<name>/SKILL.md:

---
id: my-skill
name: My Skill
shortDescription: Brief description of what this skill does
---

Summary loaded at startup.

## Procedures
Detailed guidance loaded on demand when the skill is activated.
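The two-stage loading (summary at startup, full body on demand) could be sketched like this, assuming the simple key: value frontmatter shown above; this is an illustration, not the app's loader:

```typescript
interface SkillMeta { id: string; name: string; shortDescription: string; }

// Parse only the frontmatter between the "---" markers; the Markdown body
// below it is read later, when the skill is activated.
function parseSkillHeader(markdown: string): SkillMeta {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) throw new Error("SKILL.md is missing frontmatter");
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const i = line.indexOf(":");
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return meta as unknown as SkillMeta;
}
```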

Skills are auto-discovered from three locations (later overrides earlier):

  1. lib/skills/builtin/ — shipped with the app
  2. ~/.research-pilot/skills/ — user-global
  3. <workspace>/.pi/skills/ — project-specific
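The override order can be sketched as a map merge where later locations win; mergeSkills and the Skill shape here are hypothetical names for illustration:

```typescript
interface Skill { id: string; origin: string; }

// Later locations override earlier ones: builtin < user-global < workspace.
// Keyed by skill id, so a workspace skill shadows a builtin with the same id.
function mergeSkills(...locations: Skill[][]): Map<string, Skill> {
  const merged = new Map<string, Skill>();
  for (const skills of locations) {
    for (const skill of skills) merged.set(skill.id, skill); // last write wins
  }
  return merged;
}
```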

Configuration

Research Copilot stores its data in the workspace under .research-pilot/:

.research-pilot/
├── artifacts/          # Notes, papers, data, web content
│   ├── notes/
│   ├── papers/
│   ├── data/
│   └── web-content/
└── memory-v2/
    └── session-summaries/

License

MIT