@rafaelsene01/contextual
v1.0.7
Generate context-aware prompts from your codebase using semantic embeddings — works with LM Studio, Ollama, OpenAI, Google, Mistral and more.
contextual
Generate context-aware prompts from your codebase using semantic embeddings — paste the output into any LLM and get answers that actually understand your code.
What is this?
contextual is a CLI tool that scans your codebase and creates semantic embeddings of every file. When you describe a task, it finds the most relevant code chunks and assembles them into a rich Markdown prompt, ready to paste into ChatGPT, Claude, Gemini, or any other LLM.
No more copy-pasting files manually. Just describe what you want:
contextual "implement JWT authentication"

...and a contextual.md file is generated with the most relevant parts of your codebase as context, formatted for optimal LLM understanding.
Key Features
- 🔍 Semantic search — finds code by meaning, not just keywords
- ⚡ Smart cache — only re-embeds files that changed (MD5 hash-based)
- 🏠 Local-first — works with LM Studio and Ollama (no API key needed)
- ☁️ Cloud providers — OpenAI, Google Gemini, Mistral supported
- 📄 PDF support — indexes documentation and specs alongside code
- 🎛️ Global config — set your preferred embedder once, use everywhere
- 🌲 Gitignore-aware — respects .gitignore automatically
Installation
npm install -g @rafaelsene01/contextual

Requires Node.js 18+.
Quick Start
1. Choose your embedder
Option A — Local (no API key needed):
Install LM Studio or Ollama, then load a nomic-embed-text model and keep the server running.
Option B — Cloud:
Set your API key as an environment variable:
# OpenAI
export OPENAI_API_KEY=sk-...
# Google Gemini
export GOOGLE_API_KEY=AI...
# Mistral
export MISTRAL_API_KEY=...

2. Configure (optional, but recommended)
Run the interactive wizard once:
contextual config

This saves your settings to ~/.cache/contextual/config.json and applies them to every project automatically.
3. Generate a prompt
Inside your project directory:
contextual "fix the CORS bug in the API middleware"

A contextual.md file is created. Open it, copy the contents, and paste into your favorite LLM.
Commands
contextual <task> (default)
Generates a context-enriched prompt from your codebase.
contextual "add input validation to the user registration form"
contextual "why is the login redirect not working?" --output debug-login.md
contextual "refactor the database layer" --top-k 20 --min-score 0.25

| Flag | Default | Description |
|------|---------|-------------|
| -o, --output <file> | contextual.md | Output file name |
| -d, --dir <directory> | cwd | Project directory to scan |
| --embedder <provider:model> | lmstudio:text-embedding-nomic-embed-text-v1.5 | Embedder to use |
| --top-k <number> | 10 | Max relevant chunks to include |
| --min-score <number> | 0.3 | Minimum cosine similarity (0–1) |
| --chunk-size <number> | 600 | Characters per chunk |
| --chunk-overlap <number> | 80 | Overlap between chunks |
| --max-file-size-kb <number> | 500 | Max file size to index (0 = no limit) |
| -f, --force | false | Force re-embedding, ignore cache |
| --no-cache | — | Disable cache completely |
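The chunking and ranking flags above map onto two small operations: splitting files into overlapping windows (--chunk-size, --chunk-overlap) and scoring those windows against the query embedding (--top-k, --min-score). A minimal sketch of both, with illustrative function names (the tool's actual internals may differ):

```javascript
// Split text into fixed-size windows with overlapping tails, so content
// spanning a window boundary still appears whole in at least one chunk.
function chunkText(text, chunkSize = 600, chunkOverlap = 80) {
  const step = chunkSize - chunkOverlap; // how far each window advances
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // final window reached the end
  }
  return chunks;
}

// Cosine similarity: dot product over the product of magnitudes.
// 1 = same direction, 0 = orthogonal (unrelated).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep chunks scoring at least minScore, best first, at most topK.
function rankChunks(queryEmbedding, chunks, topK = 10, minScore = 0.3) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .filter((c) => c.score >= minScore)
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

With the defaults, each 600-character window shares its last 80 characters with the start of the next one, and only chunks whose similarity to the query is at least 0.3 survive the filter.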
contextual config
Interactive wizard to set global defaults.
contextual config # set preferences
contextual config --show # view current config
contextual config --reset # remove config

Supported Providers
| Provider | Embeddings | Notes |
|----------|-----------|-------|
| lmstudio | ✅ | Local · http://localhost:1234 · No API key |
| ollama | ✅ | Local · http://localhost:11434 · No API key |
| openai | ✅ | Cloud · OPENAI_API_KEY |
| google | ✅ | Cloud · GOOGLE_API_KEY |
| mistral | ✅ | Cloud · MISTRAL_API_KEY |
| anthropic | ❌ | No embeddings API |
| openrouter | ❌ | No embeddings API |
Provider format
Use the provider:model format in --embedder:
contextual "my task" --embedder lmstudio:text-embedding-nomic-embed-text-v1.5
contextual "my task" --embedder ollama:nomic-embed-text
contextual "my task" --embedder openai:text-embedding-3-small
contextual "my task" --embedder openai:text-embedding-3-large
contextual "my task" --embedder google:text-embedding-004
contextual "my task" --embedder mistral:mistral-embed

How It Works
Your project files
│
▼
[1] .gitignore check — ensures output file is ignored
│
▼
[2] File scanner — collects all text/code files recursively
│
▼
[3] Embedder — chunks each file and embeds with your chosen model
│ (cached by MD5 hash — only changed files are re-embedded)
│
▼
[4] Semantic search — embeds your query, ranks chunks by cosine similarity
│
▼
[5] Prompt builder — assembles top-K chunks into structured Markdown
│
▼
contextual.md → paste into any LLM

Cache mechanism
Embeddings are stored in .contextual/temp/ as JSON files named after the MD5 hash of each file's content. Only modified files are re-embedded on subsequent runs — making repeat queries very fast.
File Types Indexed
The scanner automatically indexes source code, config files, and documentation:
- Code: .js, .ts, .py, .go, .rs, .java, .kt, .rb, .php, .c, .cpp, .cs, .swift, and more
- Web: .html, .css, .scss, .vue, .svelte
- Config: .json, .yaml, .toml, .env.example
- Docs: .md, .mdx, .txt, .rst
- Infra: .sh, .sql, .graphql, .tf, .proto
- Special: Dockerfile, Makefile, Procfile, LICENSE, and others
Output Format
The generated contextual.md contains:
- 🎯 Task — your original description
- 📁 Code Context — relevant file chunks with similarity scores and syntax highlighting
- 🤖 Assistant Instructions — system prompt guiding the LLM to use only the provided context
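Assembling those three sections is essentially string templating over the ranked chunks. A hedged sketch of what such a builder might look like (field names like file, lang, and text are assumptions, not the tool's actual schema):

```javascript
// Assemble the top-ranked chunks into the three-section Markdown layout
// described above: Task, Code Context, Assistant Instructions.
function buildPrompt(task, rankedChunks) {
  const context = rankedChunks
    .map((c) => `### ${c.file} (score: ${c.score.toFixed(2)})\n\`\`\`${c.lang}\n${c.text}\n\`\`\``)
    .join("\n\n");
  return [
    `## 🎯 Task\n${task}`,
    `## 📁 Code Context\n${context}`,
    `## 🤖 Assistant Instructions\nAnswer using only the provided context.`,
  ].join("\n\n");
}
```

Each chunk is rendered as a fenced code block with its source path and similarity score, which is what gives the LLM both the code and a signal of how relevant each piece is.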
Global Config
Settings are stored at ~/.cache/contextual/config.json:
{
"embedder": "lmstudio:text-embedding-nomic-embed-text-v1.5",
"minScore": 0.3,
"topK": 10,
"chunkSize": 600,
"chunkOverlap": 80,
"maxFileSizeKb": 500,
"output": "contextual.md"
}

CLI flags always take priority over global config.
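That precedence rule amounts to a layered merge where later layers win. A minimal sketch (in a real CLI, unset flags must be omitted from the flags object so they don't clobber lower layers):

```javascript
// Later spreads win: built-in defaults < global config file < CLI flags.
function resolveOptions(defaults, globalConfig, cliFlags) {
  return { ...defaults, ...globalConfig, ...cliFlags };
}

const resolved = resolveOptions(
  { topK: 10, minScore: 0.3, output: "contextual.md" },
  { topK: 15 },       // from ~/.cache/contextual/config.json
  { minScore: 0.25 }  // flags passed on the command line
);
// resolved: { topK: 15, minScore: 0.25, output: "contextual.md" }
```

Here the global config overrides the default topK, the CLI flag overrides the default minScore, and everything unset falls through to the defaults.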
Examples
# Debug a production issue
contextual "users get logged out randomly after 5 minutes"
# Add a new feature
contextual "add dark mode toggle to the settings page" --output feature-dark-mode.md
# Understand unfamiliar code
contextual "explain how the payment flow works" --top-k 15
# Use OpenAI embeddings
contextual "optimize database queries" --embedder openai:text-embedding-3-small
# Scan a different project
contextual "add unit tests for the auth module" --dir ../my-other-project
# Force fresh embeddings (ignore cache)
contextual "my task" --force

Requirements
- Node.js 18 or higher
- An embedding provider (one of): LM Studio or Ollama running locally, or an API key for OpenAI, Google Gemini, or Mistral
License
MIT © rafaelsene01
