# lgrep

v0.1.2. Semantic code search CLI with local and Postgres-backed cloud profiles.
AI-powered semantic code search. Find code by meaning, not just text.
Search for "authentication logic" and find OAuth handlers, JWT validation, and session management — even if those words never appear in the code. Built-in code intelligence finds dead code, circular dependencies, and shows blast radius before refactoring.
## Install

Requires Node.js >= 18.17.

```
npm install -g lgrep
lgrep init
```

`lgrep init` walks you through the two supported first-run paths:

- **Local** - local index and local cache
- **Cloud** - Postgres-backed index and Postgres-backed cache
## From Source

```
git clone https://github.com/dennisonbertram/lgrep && cd lgrep
npm install --legacy-peer-deps
npm run build
node dist/cli/index.js init
```

## Quick Start
```
lgrep init
lgrep doctor
lgrep list

lgrep search "user authentication logic"
lgrep search --usages "validateUser"
lgrep search --definition "UserService"

lgrep context "add rate limiting"
lgrep intent "what calls awardBadge"
```

## Agent Integration
```
lgrep install --target claude
lgrep install --target codex
lgrep install --target mcp
lgrep install --target all
```

Targets:

- `claude`: installs the Claude skill and SessionStart hook
- `codex`: writes project guidance into `AGENTS.md`
- `mcp`: configures lgrep as an MCP server
- `all`: installs all three
In local mode, the SessionStart hook can start local watchers automatically. In cloud mode, the hook exits immediately and relies on the shared remote index.
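The hook's mode-dependent behavior can be sketched roughly as follows. This is a hypothetical illustration of the branching described above, not lgrep's actual implementation; the function name and return shape are made up:

```javascript
// Hypothetical sketch of the SessionStart hook's branching (not lgrep's real code).
function sessionStartHook(profileMode) {
  if (profileMode === 'cloud') {
    // Cloud mode: nothing to start locally; the shared remote index is authoritative.
    return { startedWatchers: false };
  }
  // Local mode: local watchers keep the index fresh as files change.
  return { startedWatchers: true };
}

console.log(sessionStartHook('local').startedWatchers);  // true
console.log(sessionStartHook('cloud').startedWatchers);  // false
```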
## Commands

### Core
| Command | Purpose |
|---------|---------|
| `lgrep index <path>` | Index a directory (`--update`, `--force`, `--name`) |
| `lgrep search <query>` | Semantic search (`--usages`, `--definition`, `--type`) |
| `lgrep context <task>` | Build context for LLM tasks (`--max-tokens`, `--depth`) |
| `lgrep intent <prompt>` | Natural language command routing |
| `lgrep list` | List all indexes |
| `lgrep watch <path>` | Auto-update index on file changes |
| `lgrep stop <name>` | Stop a watcher |
| `lgrep delete <name>` | Delete an index |
| `lgrep clean` | Remove failed/stale/zombie indexes |
| `lgrep init` | Guided setup for local or cloud profiles |
| `lgrep profile` | Manage named local/cloud profiles |
### Code Intelligence
| Command | Purpose |
|---------|---------|
| `lgrep dead` | Functions with zero callers |
| `lgrep similar` | Duplicated function bodies |
| `lgrep cycles` | Circular dependency chains |
| `lgrep unused-exports` | Exported but never imported symbols |
| `lgrep breaking` | Calls with mismatched argument counts |
| `lgrep rename <old> <new>` | Preview rename impact |
| `lgrep callers <symbol>` | All callers of a function |
| `lgrep deps <module>` | Module dependency graph |
| `lgrep impact <symbol>` | Blast radius of a change |
### Analysis & Exploration
| Command | Purpose |
|---------|---------|
| `lgrep graph` | Visualize dependencies in a web UI (`--mode calls\|deps`) |
| `lgrep analyze <path>` | One-off code structure analysis (`--symbols`, `--deps`, `--calls`) |
| `lgrep symbols [query]` | Quick symbol lookup (`-k function`, `-f auth.ts`) |
| `lgrep explain <target>` | AI-powered explanation of a file or symbol |
| `lgrep stats` | Index statistics |
| `lgrep logs` | Watcher daemon logs (`-f` to follow) |
| `lgrep daemon` | Manage in-memory query daemons (`start\|stop\|list`) |
All commands support `--json` for scripting. Most support `-i, --index` and `-l, --limit`.
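Since every command can emit JSON, results compose with ordinary scripting. The output shape below is purely illustrative (the real field names may differ; inspect the output of `lgrep search ... --json` yourself):

```javascript
// Illustrative only: a made-up result shape for `lgrep search --json`.
// In a real script you might obtain `raw` via child_process.execSync.
const raw = JSON.stringify([
  { file: 'src/auth/jwt.ts', symbol: 'validateToken', score: 0.91 },
  { file: 'src/auth/session.ts', symbol: 'createSession', score: 0.78 },
]);

const results = JSON.parse(raw);
const confident = results.filter((r) => r.score >= 0.85).map((r) => r.file);
console.log(confident); // [ 'src/auth/jwt.ts' ]
```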
## Embedding Providers
| Provider | Speed | Best For | Setup |
|----------|-------|----------|-------|
| OpenAI | ~50ms | General (recommended) | `OPENAI_API_KEY` |
| Voyage | ~100ms | Code search | `VOYAGE_API_KEY` |
| Cohere | ~50ms | Multilingual | `COHERE_API_KEY` |
| Ollama | ~1-5s | Privacy, offline | `lgrep init` |
```
lgrep config model auto                   # auto-detect (default)
lgrep config model voyage:voyage-code-3   # explicit
```

## LLM Providers (Summarization)
Auto-detected. Priority: Groq > Anthropic > OpenAI > Ollama.
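The priority order above suggests detection logic along these lines. This is a hedged sketch of env-based selection, not lgrep's actual code, which may also probe for a running Ollama instance or other signals:

```javascript
// Sketch of provider detection following the documented priority:
// Groq > Anthropic > OpenAI > Ollama (the fallback when no API key is set).
function detectSummarizationProvider(env) {
  if (env.GROQ_API_KEY) return 'groq';
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.OPENAI_API_KEY) return 'openai';
  return 'ollama';
}

console.log(detectSummarizationProvider({ GROQ_API_KEY: 'gsk_...' })); // groq
console.log(detectSummarizationProvider({}));                          // ollama
```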
```
lgrep config summarizationModel auto                         # default
lgrep config summarizationModel groq:llama-3.1-8b-instant    # explicit
```

## Project Config
Create `.lgrep.json` in your repo root to skip `--index` flags:
```json
{
  "index": "my-project",
  "root": "src"
}
```

## Remote Storage
For cloud mode, lgrep defaults to Postgres for both the index and the cache. S3/R2 is still supported as an advanced/manual path, but it is no longer the default onboarding route.
See `docs/guides/remote-storage.md` for setup.
## Programmatic API
```js
import { createEmbeddingClient, createAIProvider, detectBestProvider } from 'lgrep';

const embedder = createEmbeddingClient({ model: 'auto' });
const { embeddings } = await embedder.embed(['hello world']);

const ai = createAIProvider({ model: detectBestProvider() });
const explanation = await ai.generateText('Explain this code...');
```

## Configuration
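Once you have embeddings, semantic search reduces to vector comparison. Below is a minimal, standalone cosine-similarity helper with toy vectors; it is a sketch for illustration, since real vectors from `embedder.embed()` have many more dimensions:

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|); 1 means identical direction.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy 3-d vectors standing in for real embedding output.
console.log(cosine([1, 0, 0], [1, 0, 0])); // 1
console.log(cosine([1, 0, 0], [0, 1, 0])); // 0
```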
By default, lgrep reads config from the active profile. You can manage profiles with `lgrep profile`:

```
lgrep profile list
lgrep profile create cloud
lgrep profile use cloud

lgrep config              # show all settings
lgrep config model        # get one
lgrep config model auto   # set one
lgrep doctor              # check everything
```

You can still override everything with `LGREP_HOME` for a one-off isolated home:
```
LGREP_HOME="$HOME/Library/Application Support/lgrep-local" lgrep doctor
LGREP_HOME="$HOME/Library/Application Support/lgrep-local" lgrep index . --name my-project
```

## License
MIT
## Contributing

```
git clone https://github.com/dennisonbertram/lgrep && cd lgrep
npm install --legacy-peer-deps
npm run build
npm test
```

Maintainers: see `docs/guides/releasing.md` for npm release setup and the tag-based publish flow.
