# obedding (v1.0.2)

Semantic search for your Obsidian vault using local embeddings (LM Studio, Ollama, MLX)
```
┌─────────────┐      ┌──────────────┐
│  Obsidian   │      │   obedding   │
│    Vault    │─────▶│              │
│             │      │  ● Index     │
│  *.md files │      │  ● Search    │
│             │      │  ● Stats     │
└─────────────┘      └──────┬───────┘
                            │
            ┌───────────────┼───────────────┐
            │               │               │
      ┌─────▼─────┐   ┌─────▼─────┐   ┌────▼─────┐
      │ LM Studio │   │  Ollama   │   │   MLX    │
      │ (DEFAULT) │   │           │   │ (NOT REC)│
      │   :1234   │   │  :11434   │   │  :28100  │
      └───────────┘   └───────────┘   └──────────┘
```

## ✨ Features
- 🔍 Semantic Search — Find notes by meaning, not just keywords
- 🏠 Local & Private — Everything runs on your machine, no external API calls
- 🔄 Multiple Backends — LM Studio (default), Ollama, or MLX
- ⚡ Incremental Indexing — Only re-index changed files
- 📝 YAML Support — Extracts metadata from frontmatter
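Incremental indexing is typically driven by comparing each file's content hash against a manifest saved from the previous run, so only changed or new files are re-embedded. A minimal sketch of that idea (hypothetical helpers, not obedding's actual implementation):

```typescript
// Sketch: find the files that changed since the last index run by
// comparing content hashes against a stored manifest.
import { createHash } from "crypto";

type Manifest = Record<string, string>; // path -> content hash

function hashContent(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Return only the paths whose content no longer matches the manifest,
// i.e. the files that need re-embedding.
function changedFiles(
  files: Record<string, string>, // path -> current content
  manifest: Manifest
): string[] {
  return Object.entries(files)
    .filter(([path, content]) => manifest[path] !== hashContent(content))
    .map(([path]) => path);
}
```

Unchanged notes skip the embedding backend entirely, which is why `--incremental` runs are much faster on large vaults.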
## 🚀 Quick Start

### 1. Install a Backend
Choose one — LM Studio is recommended:

**LM Studio** (GUI, easy):

```shell
# Download from https://lmstudio.ai/
# Load an embedding model and start the server
```

**Ollama** (CLI, simple):

```shell
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull qwen3-embedding:0.6b
ollama serve
```

### 2. Index Your Notes
```shell
npx obedding index --vault ~/.obsidian/Projects
```

### 3. Search
```shell
npx obedding search "what did we decide about caching?"
```

## 📖 Usage
```shell
# Index notes
npx obedding index --vault ~/.obsidian/Projects

# Incremental indexing (faster)
npx obedding index --vault ~/.obsidian/Projects --incremental

# Semantic search
npx obedding search "database optimization"

# Top 5 results
npx obedding search "API design" --top-k 5

# Filter by relevance
npx obedding search "redis" --min-score 0.6

# JSON output
npx obedding search "architecture" --json

# Show statistics
npx obedding stats

# Clear all embeddings
npx obedding clear --force
```

## 🎯 Backend Options
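The backends differ mainly in port, default model, and embedding dimensionality. As a rough sketch of how a client might select an endpoint per backend flag (ports and model names are taken from the table below; the endpoint paths, especially the MLX one, are assumptions, and this is not obedding's actual code):

```typescript
// Sketch: map each --backend flag to its default server endpoint and
// embedding model, mirroring the comparison table (illustrative only).
type Backend = "lmstudio" | "ollama" | "mlx";

interface BackendConfig {
  url: string;   // embeddings endpoint (paths assumed, ports from the table)
  model: string; // default embedding model
  dims: number;  // embedding dimensionality
}

const BACKENDS: Record<Backend, BackendConfig> = {
  lmstudio: {
    url: "http://localhost:1234/v1/embeddings", // OpenAI-compatible API
    model: "text-embedding-qwen3-embedding-0.6b",
    dims: 1024,
  },
  ollama: {
    url: "http://localhost:11434/api/embeddings",
    model: "qwen3-embedding:0.6b",
    dims: 768,
  },
  mlx: {
    url: "http://localhost:28100/v1/embeddings", // assumed path
    model: "Qwen3-Embedding-0.6B-4bit-DWQ",
    dims: 2048,
  },
};

function configFor(backend: Backend): BackendConfig {
  return BACKENDS[backend];
}
```

Note that the dimensionality differs per backend, so an index built with one backend cannot be searched with another without re-indexing.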
| Backend | Server | Model | Dims | Notes |
|---------|--------|-------|------|-------|
| LM Studio (`--backend lmstudio`) | `:1234` | `text-embedding-qwen3-embedding-0.6b` | 1024 | ✅ Default, GUI |
| Ollama (`--backend ollama`) | `:11434` | `qwen3-embedding:0.6b` | 768 | ✅ Stable |
| MLX (`--backend mlx`) | `:28100` | `Qwen3-Embedding-0.6B-4bit-DWQ` | 2048 | ⚠️ Known issues |
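A relevance threshold like `--min-score 0.6` suggests scores are cosine similarity between the query embedding and each note embedding; assuming that metric (the source does not confirm it), the scoring and filtering step can be sketched as:

```typescript
// Sketch: cosine similarity between two embedding vectors, the usual
// relevance metric behind thresholds such as --min-score (assumed here).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep only hits at or above the threshold, like `--min-score 0.6`.
function filterByScore<T>(
  hits: { item: T; score: number }[],
  minScore: number
): { item: T; score: number }[] {
  return hits.filter((h) => h.score >= minScore);
}
```

The dimension check matters in practice: vectors produced by different backends (1024 vs. 768 vs. 2048 dims) are not comparable.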
## 📁 YAML Frontmatter

```yaml
---
type: cli
repo: my-repo
context: feature-x
tags: [decision, architecture]
title: Cache Strategy Decision
---
```

## 📚 Documentation
- Architecture — System design and components
- Backends — Backend setup and comparison
- API Reference — Complete CLI documentation
- CLAUDE.md — Project context for Claude Code
## 🛠️ Development
```shell
git clone https://github.com/tuannvm/obedding.git
cd obedding
npm install
npm run build
npm link
obedding search "test"
```

## ❓ Troubleshooting
**Server not running?**

```shell
# LM Studio
curl http://localhost:1234/v1/models

# Ollama
curl http://localhost:11434/api/tags
```

**No results?**
```shell
# Check stats
npx obedding stats

# Re-index
npx obedding index --vault ~/.obsidian/Projects
```

## 📄 License
MIT
