
@rafaelsene01/contextual v1.0.7

contextual

Generate context-aware prompts from your codebase using semantic embeddings — paste the output into any LLM and get answers that actually understand your code.



What is this?

contextual is a CLI tool that scans your codebase and creates semantic embeddings of every file. When you describe a task, it finds the most relevant code chunks and assembles them into a rich Markdown prompt, ready to paste into ChatGPT, Claude, Gemini, or any other LLM.

No more copy-pasting files manually. Just describe what you want:

contextual "implement JWT authentication"

...and a contextual.md file is generated with the most relevant parts of your codebase as context, formatted for optimal LLM understanding.

Key Features

  • 🔍 Semantic search — finds code by meaning, not just keywords
  • Smart cache — only re-embeds files that changed (MD5 hash-based)
  • 🏠 Local-first — works with LM Studio and Ollama (no API key needed)
  • ☁️ Cloud providers — OpenAI, Google Gemini, Mistral supported
  • 📄 PDF support — indexes documentation and specs alongside code
  • 🎛️ Global config — set your preferred embedder once, use everywhere
  • 🌲 Gitignore-aware — respects .gitignore automatically

Installation

npm install -g @rafaelsene01/contextual

Requires Node.js 18+


Quick Start

1. Choose your embedder

Option A — Local (no API key needed):

Install LM Studio or Ollama, then load a nomic-embed-text model and keep the server running.

Option B — Cloud:

Set your API key as an environment variable:

# OpenAI
export OPENAI_API_KEY=sk-...

# Google Gemini
export GOOGLE_API_KEY=AI...

# Mistral
export MISTRAL_API_KEY=...

2. Configure (optional, but recommended)

Run the interactive wizard once:

contextual config

This saves your settings to ~/.cache/contextual/config.json and applies them to every project automatically.

3. Generate a prompt

Inside your project directory:

contextual "fix the CORS bug in the API middleware"

A contextual.md file is created. Open it, copy the contents, and paste into your favorite LLM.


Commands

contextual <task> (default)

Generates a context-enriched prompt from your codebase.

contextual "add input validation to the user registration form"
contextual "why is the login redirect not working?" --output debug-login.md
contextual "refactor the database layer" --top-k 20 --min-score 0.25

| Flag | Default | Description |
|------|---------|-------------|
| -o, --output <file> | contextual.md | Output file name |
| -d, --dir <directory> | cwd | Project directory to scan |
| --embedder <provider:model> | lmstudio:text-embedding-nomic-embed-text-v1.5 | Embedder to use |
| --top-k <number> | 10 | Max relevant chunks to include |
| --min-score <number> | 0.3 | Minimum cosine similarity (0–1) |
| --chunk-size <number> | 600 | Characters per chunk |
| --chunk-overlap <number> | 80 | Overlap between chunks |
| --max-file-size-kb <number> | 500 | Max file size to index (0 = no limit) |
| -f, --force | false | Force re-embedding, ignore cache |
| --no-cache | — | Disable cache completely |
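The --chunk-size and --chunk-overlap flags control how each file is split before embedding. A minimal sketch of fixed-size chunking with overlap, using the defaults of 600 and 80 characters (illustrative only; the package's actual splitter may differ):

```javascript
// Split text into fixed-size chunks where consecutive chunks share an
// overlapping tail, mirroring --chunk-size and --chunk-overlap.
// Illustrative sketch, not the package's actual implementation.
function chunkText(text, chunkSize = 600, overlap = 80) {
  const step = chunkSize - overlap; // how far each chunk advances
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

With the defaults, a 1000-character file yields two chunks: one starting at offset 0 and one at offset 520, so the 80-character overlap preserves context that would otherwise be cut at the boundary.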

contextual config

Interactive wizard to set global defaults.

contextual config          # set preferences
contextual config --show   # view current config
contextual config --reset  # remove config

Supported Providers

| Provider | Embeddings | Notes |
|----------|-----------|-------|
| lmstudio | ✅ | Local · http://localhost:1234 · No API key |
| ollama | ✅ | Local · http://localhost:11434 · No API key |
| openai | ✅ | Cloud · OPENAI_API_KEY |
| google | ✅ | Cloud · GOOGLE_API_KEY |
| mistral | ✅ | Cloud · MISTRAL_API_KEY |
| anthropic | ❌ | No embeddings API |
| openrouter | ❌ | No embeddings API |

Provider format

Use the provider:model format in --embedder:

contextual "my task" --embedder lmstudio:text-embedding-nomic-embed-text-v1.5
contextual "my task" --embedder ollama:nomic-embed-text
contextual "my task" --embedder openai:text-embedding-3-small
contextual "my task" --embedder openai:text-embedding-3-large
contextual "my task" --embedder google:text-embedding-004
contextual "my task" --embedder mistral:mistral-embed
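A value in this format could be parsed by splitting on the first colon, since model names contain dots and dashes but no colons. A hedged sketch (not the package's actual parser):

```javascript
// Parse a provider:model string as accepted by --embedder.
// Splits on the first colon only. Illustrative sketch.
function parseEmbedder(spec) {
  const i = spec.indexOf(':');
  if (i === -1) {
    throw new Error(`Expected provider:model, got "${spec}"`);
  }
  return { provider: spec.slice(0, i), model: spec.slice(i + 1) };
}
```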

How It Works

Your project files
       │
       ▼
 [1] .gitignore check — ensures output file is ignored
       │
       ▼
 [2] File scanner — collects all text/code files recursively
       │
       ▼
 [3] Embedder — chunks each file and embeds with your chosen model
       │          (cached by MD5 hash — only changed files are re-embedded)
       │
       ▼
 [4] Semantic search — embeds your query, ranks chunks by cosine similarity
       │
       ▼
 [5] Prompt builder — assembles top-K chunks into structured Markdown
       │
       ▼
  contextual.md  →  paste into any LLM
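Step [4] above ranks chunks by cosine similarity between the query embedding and each chunk embedding, then keeps the top-K above a minimum score. A minimal sketch of that ranking, assuming embeddings are plain number arrays (the chunk shape here is an assumption):

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every chunk against the query vector, drop low scores,
// and keep the best K, mirroring --top-k and --min-score.
function rankChunks(queryVec, chunks, topK = 10, minScore = 0.3) {
  return chunks
    .map(c => ({ ...c, score: cosine(queryVec, c.embedding) }))
    .filter(c => c.score >= minScore)
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```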

Cache mechanism

Embeddings are stored in .contextual/temp/ as JSON files named after the MD5 hash of each file's content. Only modified files are re-embedded on subsequent runs — making repeat queries very fast.


File Types Indexed

The scanner automatically indexes source code, config files, and documentation:

  • Code: .js, .ts, .py, .go, .rs, .java, .kt, .rb, .php, .c, .cpp, .cs, .swift, and more
  • Web: .html, .css, .scss, .vue, .svelte
  • Config: .json, .yaml, .toml, .env.example
  • Docs: .md, .mdx, .txt, .rst
  • Infra: .sh, .sql, .graphql, .tf, .proto
  • Special: Dockerfile, Makefile, Procfile, LICENSE, and others

Output Format

The generated contextual.md contains:

  1. 🎯 Task — your original description
  2. 📁 Code Context — relevant file chunks with similarity scores and syntax highlighting
  3. 🤖 Assistant Instructions — system prompt guiding the LLM to use only the provided context
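To illustrate how those three sections could be assembled, here is a hedged sketch; the section wording and the chunk shape ({ file, score, text }) are assumptions, not the package's actual output:

```javascript
// Assemble ranked chunks into a Markdown prompt with the three sections
// described above. Illustrative sketch; headings and chunk fields are
// assumed, not taken from the package's real template.
function buildPrompt(task, chunks) {
  const context = chunks
    .map(c => `### ${c.file} (score: ${c.score.toFixed(2)})\n\n\`\`\`\n${c.text}\n\`\`\``)
    .join('\n\n');
  return [
    `## 🎯 Task\n\n${task}`,
    `## 📁 Code Context\n\n${context}`,
    `## 🤖 Assistant Instructions\n\nAnswer using only the context above.`,
  ].join('\n\n');
}
```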

Global Config

Settings are stored at ~/.cache/contextual/config.json:

{
  "embedder": "lmstudio:text-embedding-nomic-embed-text-v1.5",
  "minScore": 0.3,
  "topK": 10,
  "chunkSize": 600,
  "chunkOverlap": 80,
  "maxFileSizeKb": 500,
  "output": "contextual.md"
}

CLI flags always take priority over global config.
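That precedence (CLI flags over global config over built-in defaults) can be sketched as a simple merge in which unset flags fall through to the next layer (a sketch of the rule, not the package's actual code):

```javascript
// Built-in defaults, matching the documented flag defaults.
const DEFAULTS = { topK: 10, minScore: 0.3, output: 'contextual.md' };

// Merge settings so CLI flags win over the global config, which in
// turn wins over built-in defaults. Flags left undefined fall through.
function resolveSettings(globalConfig = {}, cliFlags = {}) {
  const defined = Object.fromEntries(
    Object.entries(cliFlags).filter(([, v]) => v !== undefined)
  );
  return { ...DEFAULTS, ...globalConfig, ...defined };
}
```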


Examples

# Debug a production issue
contextual "users get logged out randomly after 5 minutes"

# Add a new feature
contextual "add dark mode toggle to the settings page" --output feature-dark-mode.md

# Understand unfamiliar code
contextual "explain how the payment flow works" --top-k 15

# Use OpenAI embeddings
contextual "optimize database queries" --embedder openai:text-embedding-3-small

# Scan a different project
contextual "add unit tests for the auth module" --dir ../my-other-project

# Force fresh embeddings (ignore cache)
contextual "my task" --force

Requirements

  • Node.js 18 or higher
  • An embedding server (one of):
    • LM Studio with a loaded embedding model (local, free)
    • Ollama with ollama pull nomic-embed-text (local, free)
    • OpenAI / Google / Mistral API key (cloud)

License

MIT © rafaelsene01