# local-context-mcp

v0.1.9

Local semantic code search via MCP: works offline, indexes the current directory.
A fully local-first semantic code search engine exposed via MCP (Model Context Protocol).
## Features
- Local-first: Works entirely offline using USearch as the vector database
- Zero-config: Automatically targets the current working directory
- Submodule-aware: When working inside a git submodule, automatically indexes from the meta-repo root
- MCP-based: Integrates with Claude Code, OpenCode, and other MCP-compatible tools
- Flexible embeddings: Supports Ollama (default), OpenAI, Gemini, and VoyageAI
- Fast: In-memory vector search with optional persistence
## Quick Start

```
npx local-context-mcp
```

This indexes the current directory and starts the MCP server. The first run may take a few minutes depending on codebase size.
## Command Line Options

```
local-context-mcp --path /path/to/index   # Directory to index
local-context-mcp --watch                 # Watch mode: auto-reindex on file changes
local-context-mcp --help                  # Show help
```

Or set an environment variable:

```
LOCAL_CONTEXT_PATH=/path/to/index npx local-context-mcp
```

## Watch Mode
Use `--watch` to automatically reindex changed files in real time:
```
local-context-mcp --watch --path /path/to/project
```

The MCP server starts immediately and the initial index runs in the background. After that, any file change triggers an incremental reindex within 2 seconds. This is the recommended mode for active development sessions.
In watch mode:
- Initial index runs in background if no index exists
- Incremental updates only re-embed changed files (fast)
- Debounced at 2 seconds to batch rapid saves from editors
- Supports file additions, modifications, and deletions
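The 2-second debouncing described above can be sketched as follows. This is an illustrative implementation, not the package's actual code; `createDebouncedReindexer` and its `reindexFiles` callback are hypothetical names.

```typescript
// Sketch of watch-mode batching: collect rapid file-change events and
// trigger one incremental reindex after a quiet period (default 2 s).
type Reindex = (paths: string[]) => void;

function createDebouncedReindexer(reindexFiles: Reindex, delayMs = 2000) {
  const pending = new Set<string>();              // deduplicates repeated saves
  let timer: ReturnType<typeof setTimeout> | undefined;

  return function onFileChange(path: string) {
    pending.add(path);                            // batch rapid editor saves
    if (timer) clearTimeout(timer);               // reset the debounce window
    timer = setTimeout(() => {
      const batch = [...pending];
      pending.clear();
      reindexFiles(batch);                        // one reindex per burst
    }, delayMs);
  };
}
```

Deletions and additions would flow through the same path: the batch simply records which files need re-embedding or removal.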
## MCP Integration

### Claude Code / Claude CLI

```
claude mcp add local-context -- npx -y local-context-mcp
```

Or, if installed globally:

```
claude mcp add local-context -- local-context-mcp
```

### OpenCode
Add to your OpenCode settings (`~/.opencode/settings.json` or a project config):
```json
{
  "mcpServers": {
    "local-context": {
      "command": "npx",
      "args": ["-y", "local-context-mcp", "--watch"],
      "env": {
        "LOCAL_CONTEXT_PATH": "${workspaceFolder}"
      }
    }
  }
}
```

Or using the global binary (if installed):
```json
{
  "mcpServers": {
    "local-context": {
      "command": "local-context-mcp"
    }
  }
}
```

### Cursor
1. Open Cursor Settings (⌘, or Ctrl+,)
2. Go to Extensions → MCP
3. Click Add new MCP server
4. Configure:
   - Name: `local-context`
   - Command: `npx`
   - Arguments: `-y local-context-mcp --watch`
   - Environment variables (optional): `LOCAL_CONTEXT_PATH=/path/to/your/project`
Or add to the Cursor settings JSON:
```json
{
  "mcpServers": {
    "local-context": {
      "command": "npx",
      "args": ["-y", "local-context-mcp", "--watch"],
      "env": {
        "LOCAL_CONTEXT_PATH": "${workspaceFolder}"
      }
    }
  }
}
```

### Windsurf (Codeium)
1. Open Windsurf Settings
2. Go to Extensions → MCP
3. Add a new server with:
   - Command: `npx`
   - Arguments: `-y local-context-mcp --watch`
### Other MCP Clients

For other MCP-compatible tools, add the server with:

- npx: `npx -y local-context-mcp`
- Global: `local-context-mcp`
- Docker: `docker run local-context-mcp`

Set the `LOCAL_CONTEXT_PATH` environment variable to specify which directory to index.
## Git Submodules
When running from inside a git submodule, the tool automatically detects the submodule and resolves to the superproject (meta-repo) root for indexing and searching. This means:
- All code in the meta-repo is indexed, including all submodules
- Search results span the entire project, not just the current submodule
- No configuration needed; detection is automatic via `git rev-parse --show-superproject-working-tree`
- Falls back gracefully if not in a git repo or if the git version is too old
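The detection logic above could be sketched like this. This is an assumed implementation for illustration (`resolveIndexRoot` is a hypothetical name), not the package's actual code; `--show-superproject-working-tree` requires git 2.13 or later.

```typescript
import { execFileSync } from "node:child_process";

// Resolve the directory to index: the meta-repo root when inside a
// submodule, the repo top level otherwise, or cwd as a last resort.
function resolveIndexRoot(cwd: string): string {
  try {
    // Prints the superproject root when cwd is inside a submodule,
    // and an empty string otherwise.
    const superRoot = execFileSync(
      "git", ["rev-parse", "--show-superproject-working-tree"],
      { cwd, encoding: "utf8" },
    ).trim();
    if (superRoot) return superRoot;      // inside a submodule: use meta-repo root
    // Not a submodule: fall back to this repository's top level.
    return execFileSync(
      "git", ["rev-parse", "--show-toplevel"],
      { cwd, encoding: "utf8" },
    ).trim();
  } catch {
    return cwd;                           // not a git repo, or git too old
  }
}
```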
## Installation

### npm (Recommended)

```
npm install -g local-context-mcp
```

Or use directly with npx:

```
npx local-context-mcp
```

### Build from Source

```
npm install
npm run build
npm link   # Links globally for CLI use
```

## Tools
| Tool | Description |
|------|-------------|
| `reindex` | Index the current codebase for semantic search. Use after adding or changing files. |
| `search` | Search the indexed codebase using natural language. Returns code implementations first. |
| `status` | Get current indexing status (file count, chunk count). |
### Search Example

```
> search: "async insert function"
```

Returns relevant code snippets with file paths and line numbers. Results prioritize code implementations over documentation.
## How It Works

```
Files → AST Parser → Chunks → Embeddings → USearch → Retrieval
```

1. File Discovery: Scans the directory for code files (respects `.gitignore` and `.contextignore`)
2. Chunking: Splits code into semantic chunks using tree-sitter AST parsing
3. Embedding: Generates vector embeddings for each chunk
4. Indexing: Stores vectors in USearch for fast similarity search
5. Retrieval: Finds the most relevant chunks using hybrid scoring:
   - 50% semantic similarity (embedding match)
   - 25% keyword matching (query identifiers in code)
   - 15% file type (code files prioritized over docs)
   - 10% chunk type (functions/methods prioritized)
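The weighting above can be illustrated as a simple linear combination. This is a sketch of the stated weights only; the field names and the way each signal is computed are assumptions, not the package's actual internals.

```typescript
// Hypothetical per-chunk signals feeding the hybrid score.
interface ChunkSignals {
  semantic: number;    // embedding similarity, assumed normalized to [0, 1]
  keyword: number;     // fraction of query identifiers found in the chunk
  isCode: boolean;     // code file (true) vs. documentation (false)
  isFunction: boolean; // function/method chunk vs. other chunk types
}

// Combine the signals with the documented 50/25/15/10 weights.
function hybridScore(s: ChunkSignals): number {
  return (
    0.5 * s.semantic +
    0.25 * s.keyword +
    0.15 * (s.isCode ? 1 : 0) +
    0.1 * (s.isFunction ? 1 : 0)
  );
}
```

Under this scheme, a code chunk always outranks a documentation chunk with the same semantic and keyword scores, which matches the behavior described for search results.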
## Storage

Index files are created in the current working directory:

```
usearch_index_<collection>.usearch   # Vector index
usearch_meta_<collection>.json       # Document metadata
usearch_coll_<collection>.json       # Collection metadata
```

The collection name is derived from a hash of the directory path.
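One way to derive such a name is to hash the normalized absolute path. This is an illustrative sketch (`collectionName` is a hypothetical helper; the package's actual hash algorithm and digest length are not documented here).

```typescript
import { createHash } from "node:crypto";
import { resolve } from "node:path";

// Derive a stable, filename-safe collection id from a directory path.
function collectionName(dir: string): string {
  const normalized = resolve(dir);                 // same dir → same name
  const digest = createHash("sha256").update(normalized).digest("hex");
  return digest.slice(0, 12);                      // short hex identifier
}
```

The same directory always maps to the same collection, so re-running the server reuses the existing index files rather than rebuilding from scratch.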
## Supported Languages
TypeScript, JavaScript, Python, Java, C/C++, C#, Go, Rust, PHP, Ruby, Swift, Kotlin, Scala, Objective-C, Markdown, Jupyter
## Environment Variables

### Embedding Provider Selection

Providers are checked in the order below; the first one with its variable set wins:

| Variable | Provider | Default Model |
|----------|----------|---------------|
| `OLLAMA_BASE_URL` | Ollama | `nomic-embed-text` |
| `OPENAI_API_KEY` | OpenAI | `text-embedding-3-small` |
| `GEMINI_API_KEY` | Gemini | `text-embedding-004` |
| `VOYAGE_API_KEY` | VoyageAI | `voyage-3` |
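The precedence in the table can be sketched as a first-match chain. This is illustrative only; `pickProvider` is a hypothetical function, though the variable names and default models come from the table above.

```typescript
// Hypothetical provider selection mirroring the documented precedence.
type Provider = { name: "ollama" | "openai" | "gemini" | "voyage"; model: string };

function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.OLLAMA_BASE_URL)
    return { name: "ollama", model: env.OLLAMA_MODEL ?? "nomic-embed-text" };
  if (env.OPENAI_API_KEY)
    return { name: "openai", model: env.OPENAI_EMBEDDING_MODEL ?? "text-embedding-3-small" };
  if (env.GEMINI_API_KEY)
    return { name: "gemini", model: env.GEMINI_EMBEDDING_MODEL ?? "text-embedding-004" };
  if (env.VOYAGE_API_KEY)
    return { name: "voyage", model: env.VOYAGE_EMBEDDING_MODEL ?? "voyage-3" };
  return { name: "ollama", model: "nomic-embed-text" };  // local default
}
```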
### Ollama Configuration

```
OLLAMA_BASE_URL=http://localhost:11434          # Default: http://127.0.0.1:11434
OLLAMA_MODEL=nomic-embed-text                   # Default: nomic-embed-text
```

### OpenAI Configuration

```
OPENAI_API_KEY=sk-...
OPENAI_EMBEDDING_MODEL=text-embedding-3-small   # Default
OPENAI_BASE_URL=https://api.openai.com/v1       # Optional: for proxies
```

### Gemini Configuration

```
GEMINI_API_KEY=...
GEMINI_EMBEDDING_MODEL=text-embedding-004       # Default
```

### VoyageAI Configuration

```
VOYAGE_API_KEY=...
VOYAGE_EMBEDDING_MODEL=voyage-3                 # Default
```

### Ignore Patterns
Files matching these patterns are excluded from indexing:

```
node_modules/**, dist/**, build/**, out/**
.git/**, .svn/**, .hg/**
*.log, *.min.js, *.min.css, .env*
*.map, *.bundle.js, *.chunk.js
```

Additionally, `.gitignore` and `.contextignore` in the project root are respected.
## Architecture

See `docs/architecture.md` for system design details.

## MCP Protocol

See `docs/mcp.md` for tool schemas and protocol details.
## Building from Source

```
npm install
npm run build
```

Run directly:

```
npx tsx src/index.ts
```

## Security Note
This package connects only to localhost (`127.0.0.1:11434`) when using the default Ollama embedding provider. No external network connections are made unless you configure a remote embedding provider (OpenAI, Gemini, or VoyageAI).

The npm audit warning about "supply chain risk" for `http://127.0.0.1:11434` is a false positive: that is the local Ollama server endpoint, not an external URL.
## License
MIT License. See LICENSE for details.
Based on claude-context by Zilliz.
