
easy-rag-cli

v1.0.2 · 294 downloads

⚡ easy-rag-cli

Zero-config RAG for any codebase or document folder.

Install it, index your project, and start asking questions in plain English — via CLI, browser, or code.


Install

npm install easy-rag-cli
# or globally
npm install -g easy-rag-cli

Quick Start

# 1. Create a config file (optional — works with defaults too)
npx easy-rag-cli init

# 2. Set your API key
export OPENAI_API_KEY=sk-...

# 3. Index your project
npx easy-rag-cli index

# 4. Ask a question
npx easy-rag-cli ask "How does authentication work in this project?"

# 5. Or open the browser UI
npx easy-rag-cli serve

Commands

| Command | Description |
|---|---|
| easy-rag-cli init | Create easy-rag-cli.config.json |
| easy-rag-cli config | Interactive provider/model setup wizard |
| easy-rag-cli config --show | Print current configuration |
| easy-rag-cli config --set key=value | Set a single config value |
| easy-rag-cli index | Scan & embed files into local vector store |
| easy-rag-cli index --full | Force full re-index (ignore cache) |
| easy-rag-cli ask "question" | Ask a question via CLI |
| easy-rag-cli ask -i | Interactive Q&A session |
| easy-rag-cli serve | Start web UI at localhost:3141 |
| easy-rag-cli serve --port 8080 | Start web UI on custom port |
| easy-rag-cli status | Show index stats |


Programmatic API

import { index, ask, askStream, searchChunks } from 'easy-rag-cli';

// Index the codebase
await index({
  onProgress: ({ stage, done, total }) => console.log(`${stage}: ${done}/${total}`)
});

// Ask a question (returns full answer)
const { answer, sources } = await ask('What does the main function do?');
console.log(answer);
console.log('Sources:', sources);

// Stream the answer
for await (const delta of askStream('Explain the folder structure')) {
  process.stdout.write(delta);
}

// Just search for relevant chunks
const chunks = await searchChunks('database connection', { topK: 3 });
chunks.forEach(c => console.log(c.filePath, c.score));
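Each chunk returned by searchChunks carries at least a filePath and a score; a common pattern is to format the hits into a context block for a custom prompt. A minimal sketch (the text field on each chunk is an assumption, not part of the documented API):

```javascript
// Hypothetical helper: format chunks from searchChunks into one context string.
// filePath and score are documented above; the `text` field is an assumption
// about what each chunk carries.
function buildContext(chunks) {
  return chunks
    .map(c => `// ${c.filePath} (score ${c.score.toFixed(2)})\n${c.text}`)
    .join('\n\n');
}

const context = buildContext([
  { filePath: 'src/db.js', score: 0.82, text: 'export function connect() { /* ... */ }' }
]);
console.log(context);
```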

Configuration

easy-rag-cli.config.json:

{
  "provider": "openai",
  "openai": {
    "apiKey": "sk-...",
    "embeddingModel": "text-embedding-3-small",
    "chatModel": "gpt-4o-mini"
  },
  "ollama": {
    "baseUrl": "http://localhost:11434",
    "embeddingModel": "nomic-embed-text",
    "chatModel": "llama3"
  },
  "index": {
    "include": ["**/*.js", "**/*.ts", "**/*.md", "**/*.pdf"],
    "exclude": ["**/node_modules/**", "**/.git/**"],
    "chunkSize": 500,
    "chunkOverlap": 50,
    "maxFileSize": 500000
  },
  "serve": {
    "port": 3141,
    "openBrowser": true
  }
}
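To see what chunkSize and chunkOverlap mean in practice, here is a hypothetical sliding-window chunker. This is not the package's actual splitter (easy-rag-cli splits code at function/class boundaries); it only illustrates how the two numbers interact for plain text:

```javascript
// Sketch: split text into windows of `chunkSize` characters, where each
// window re-includes the last `chunkOverlap` characters of the previous one
// so context isn't lost at chunk borders.
function chunkText(text, chunkSize, chunkOverlap) {
  const step = chunkSize - chunkOverlap;
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

// With the defaults above, a 1200-character file yields three chunks:
// 500 + 500 + 300 characters, each sharing 50 characters with its neighbor.
const pieces = chunkText('a'.repeat(1200), 500, 50);
console.log(pieces.map(p => p.length)); // [500, 500, 300]
```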

Quick config via CLI

# Switch to Ollama
npx easy-rag-cli config --set provider=ollama

# Set OpenAI key
npx easy-rag-cli config --set openai.key=sk-abc123

# Set Ollama URL
npx easy-rag-cli config --set ollama.url=http://localhost:11434

# Set chat model
npx easy-rag-cli config --set ollama.chat=mistral

# View everything
npx easy-rag-cli config --show

Using Ollama (local, free)

{
  "provider": "ollama",
  "ollama": {
    "baseUrl": "http://localhost:11434",
    "embeddingModel": "nomic-embed-text",
    "chatModel": "llama3"
  }
}

Make sure Ollama is running and the models are pulled:

ollama pull nomic-embed-text
ollama pull llama3

What gets indexed?

By default, easy-rag-cli indexes:

  • Source code: .js, .ts, .jsx, .tsx, .py, .go, .rs, .java, .c, .cpp
  • Docs: .md, .txt, .pdf
  • Config: .json, .yaml, .yml, .html, .css, .sh

Files in node_modules, .git, dist, build are skipped automatically.
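The skip rule above can be sketched as a simple path-segment check. The real package applies the glob patterns from the exclude config; this hypothetical version only mirrors the documented default behavior:

```javascript
// Sketch of the default skip rule: a file is excluded when any directory
// in its path is one of the ignored names. The actual implementation uses
// the glob patterns in the `index.exclude` config.
const SKIPPED = new Set(['node_modules', '.git', 'dist', 'build']);

function isSkipped(filePath) {
  return filePath.split('/').some(segment => SKIPPED.has(segment));
}

console.log(isSkipped('src/auth/login.ts'));           // false
console.log(isSkipped('node_modules/react/index.js')); // true
```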


How it works

  1. Scan — globs your project for matching files
  2. Chunk — splits code at function/class boundaries, docs at paragraph boundaries
  3. Embed — generates vector embeddings via OpenAI or Ollama
  4. Index — builds a hybrid BM25 + vector store with symbol graph locally (no external DB)
  5. Search — fuses vector + keyword results via Reciprocal Rank Fusion, expands with import graph
  6. Answer — passes top chunks as context to the LLM and streams the answer
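Step 5's Reciprocal Rank Fusion can be sketched in a few lines: each ranked list votes 1 / (k + rank) for every document it contains, and votes are summed across lists. The constant k = 60 comes from the original RRF formulation; the package's actual parameters and implementation may differ:

```javascript
// Sketch of Reciprocal Rank Fusion: fuse several ranked lists of document
// ids into one ranking. A document near the top of multiple lists
// accumulates the highest combined score.
function reciprocalRankFusion(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// Hypothetical hits from the vector and keyword (BM25) searches:
const vectorHits  = ['auth.ts', 'db.ts', 'routes.ts'];
const keywordHits = ['db.ts', 'auth.ts', 'config.ts'];
console.log(reciprocalRankFusion([vectorHits, keywordHits]));
// auth.ts and db.ts rank highly in both lists, so they lead the fused result
```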

License

MIT