memord v0.2.6
Memord
Local shared memory for your AI coding tools.
Claude learns something. Cursor remembers it. Copilot picks it up. All locally. No cloud. No vendor lock-in.
Table of Contents
- The Problem
- The Solution
- Architecture
- Quick Start
- Quick Demo
- Features
- Supported Tools
- Example Workflow
- Why Memord vs. Alternatives
- Why Memord Exists
- Design Principles
- Roadmap
- Contributing
- License
The Problem
Every AI coding tool starts from zero.
You explain your architecture to Claude. Then you explain it again to Cursor. Then again to Copilot.
- Your tech stack preferences? Forgotten.
- Your project conventions? Gone after the session.
- Your constraints ("never use class components", "always use Zod for validation")? You repeat them every time.
Each tool operates in complete isolation. There is no shared context layer. No memory infrastructure.
Developers lose hours every week re-explaining the same things to different AI tools.
The Solution
Memord is a local shared memory layer that sits between your AI tools.
When one tool learns something about you or your project, it stores it in Memord. When another tool needs context, it queries Memord and gets relevant memories back — instantly, locally.
One store. All tools. Persistent context.
Architecture
```
┌─────────────┐
│ Claude Code │
└──────┬──────┘
       │ remember()
       ▼
┌──────────┐          ┌─────────────┐          ┌───────────┐
│  Cursor  │◄────────►│   Memord    │◄────────►│ Windsurf  │
└──────────┘ recall() │ (local DB)  │ recall() └───────────┘
                      └──────┬──────┘
                             │
                      ┌──────┴──────┐
                      │   Copilot   │
                      └─────────────┘
```

Memord runs as a local daemon exposing two interfaces:
- MCP (stdio) — for Claude Code, Cursor, Windsurf, and any MCP-compatible tool
- HTTP API — for tools that use REST (Copilot, custom integrations)
Data stays on your machine. Always.
Quick Start
```
npx memord setup
```

That's it. The CLI auto-detects which AI tools you have installed and configures them in one step.
Restart your AI tools and Memord is active.
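Conceptually, setup writes an MCP server entry into each tool's configuration file. A hypothetical example of what such an entry might look like (the exact file location, field names, and subcommand vary by tool and are assumptions here, not Memord's documented output):

```json
{
  "mcpServers": {
    "memord": {
      "command": "npx",
      "args": ["memord", "mcp"]
    }
  }
}
```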
Quick Demo
After running setup, your AI tools call remember() and recall() automatically via MCP. You can also interact with Memord directly:
```
# Browse your memory store in the browser
npx memord http
# → opens dashboard at http://localhost:7432

# Or start both MCP + HTTP together
npx memord both
```

Ask your AI tool at the start of any session:

```
What do you know about me and my projects?
```

It will call recall() and load your stored context automatically.
Features
- Shared memory across tools — Claude, Cursor, Copilot, Windsurf, and more share one local store
- Hybrid retrieval — combines vector similarity (e5-small-v2) + BM25 keyword search + recency decay for accurate recall
- Semantic deduplication — automatically merges near-identical memories, no duplicates
- Auto-tagging — extracts semantic keywords and topics at store time for better FTS recall
- Constraint boosting — "never do X" type memories always rank higher in retrieval
- Local-first — SQLite + ONNX embeddings, no API keys, no cloud
- Fast — sub-10ms queries on commodity hardware
- MCP + HTTP — works with any tool via Model Context Protocol or REST
- Optional dashboard — browse your memory store at http://localhost:7432
- 26+ tool integrations — one setup command configures all your tools
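To picture how hybrid retrieval works, the final rank can be thought of as a weighted blend of the three signals. A minimal sketch — the weights, the 30-day half-life, and the linear combination are illustrative assumptions, not Memord's actual internals:

```javascript
// Illustrative hybrid ranking: blend vector similarity, BM25 keyword
// score, and recency decay into one score. All weights are assumptions.
const W_VEC = 0.6, W_BM25 = 0.3, W_RECENCY = 0.1;
const HALF_LIFE_DAYS = 30; // assumed half-life for recency decay

function recencyDecay(ageDays) {
  // Exponential decay: a 30-day-old memory scores 0.5
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}

function hybridScore({ vecSim, bm25, ageDays }) {
  // vecSim in [0,1] (cosine), bm25 normalized to [0,1]
  return W_VEC * vecSim + W_BM25 * bm25 + W_RECENCY * recencyDecay(ageDays);
}

// A fresh, semantically close memory outranks an old keyword-only match
const fresh = hybridScore({ vecSim: 0.9, bm25: 0.2, ageDays: 1 });
const stale = hybridScore({ vecSim: 0.3, bm25: 0.9, ageDays: 365 });
console.log(fresh > stale); // → true
```

The point of blending rather than using any single signal: pure vector search misses exact identifiers, pure BM25 misses paraphrases, and without decay, stale context crowds out current conventions.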
Supported Tools
| Tool | Protocol | Status |
|------|----------|--------|
| Claude Code | MCP (stdio) | ✅ |
| Claude Desktop | MCP (stdio) | ✅ |
| Cursor | MCP (stdio) | ✅ |
| Windsurf | MCP (stdio) | ✅ |
| VS Code (Copilot) | MCP (stdio) | ✅ |
| JetBrains IDEs | MCP (stdio) | ✅ |
| Zed | MCP (stdio) | ✅ |
| Warp Terminal | MCP (stdio) | ✅ |
| Continue | MCP (stdio) | ✅ |
| Cline | MCP (stdio) | ✅ |
| Roo Code | MCP (stdio) | ✅ |
| 5ire | MCP (stdio) | ✅ |
| LM Studio | MCP (stdio) | ✅ |
| Cherry Studio | MCP (stdio) | ✅ |
| Kiro | MCP (stdio) | ✅ |
| Amp | MCP (stdio) | ✅ |
| Augment Code | MCP (stdio) | ✅ |
| Gemini CLI | MCP (stdio) | ✅ |
| Gemini Code Assist | MCP (stdio) | ✅ |
| OpenAI Codex CLI | MCP (stdio) | ✅ |
| Amazon Q CLI | MCP (stdio) | ✅ |
| Visual Studio | MCP (stdio) | ✅ |
| Neovim (mcphub.nvim) | MCP (stdio) | ✅ |
| Goose | MCP (stdio) | ✅ |
| GitHub Copilot | MCP (stdio) | ✅ |
| Any MCP tool | MCP (stdio/http) | ✅ |
Example Workflow
1. Claude Code learns a convention
During your session, Claude notices you always use Zod for validation. It stores this automatically:
```
remember({
  content: "Always use Zod for runtime validation. Never use joi or yup.",
  type: "constraint",
  importance: 0.9
})
```

2. Memord stores and indexes it
The memory is embedded (vector), tagged, and indexed for FTS. It's stored in ~/.memord/memories.db.
3. Cursor retrieves it the next day
When you open Cursor on the same project, it queries Memord at session start:
```
recall({ query: "validation library preferences", limit: 5 })
```

Memord returns the Zod constraint. Cursor now knows — without you saying anything.
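Step 3 is also where constraint boosting kicks in: because the memory was stored with type "constraint", it gets a rank bonus so hard rules surface first. A rough sketch of the idea — the boost factor and field names are assumptions for illustration:

```javascript
// Illustrative constraint boosting: memories typed "constraint" get a
// multiplier so "never do X" rules outrank ordinary notes.
const CONSTRAINT_BOOST = 1.5; // assumed factor

function rank(memories) {
  return memories
    .map(m => ({
      ...m,
      score: m.baseScore * (m.type === "constraint" ? CONSTRAINT_BOOST : 1),
    }))
    .sort((a, b) => b.score - a.score);
}

// The constraint wins despite a lower base retrieval score
const results = rank([
  { content: "Prefers pnpm over npm", type: "preference", baseScore: 0.8 },
  { content: "Never use joi or yup; always Zod", type: "constraint", baseScore: 0.7 },
]);
console.log(results[0].type); // → constraint
```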
Why Memord vs. Alternatives
There are other MCP memory tools. Here is how they differ.
| | mem0ai OpenMemory | CaviraOSS OpenMemory | Memorix | Memord |
|---|---|---|---|---|
| Embeddings | OpenAI API key required | OpenAI / Ollama | Optional (quality loss) | ONNX — no API key, no Ollama |
| Docker | Required | No | No | No |
| Setup | docker-compose up | clone + npm | clone + npm | npx memord setup |
| Tools supported | Claude + frameworks | ~10 IDEs | ~10 IDEs | 25+ IDEs |
| Retrieval | Vector only | Graph + temporal | Keyword-based | Hybrid: vector + BM25 + recency + importance |
| Deduplication | LLM-based | Cosine similarity | None | Cosine threshold (auto-merge) |
The short version:
- mem0ai is powerful but requires Docker, PostgreSQL, Qdrant, and an OpenAI key just to get started.
- CaviraOSS/OpenMemory is interesting but still needs Ollama or an API key for meaningful embeddings.
- Memorix skips embeddings by default, which degrades semantic recall quality.
- Memord ships ONNX embeddings out of the box. No API key. No Docker. No Ollama. Just npx memord setup and you are done.
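The cosine-threshold auto-merge from the table above can be sketched in a few lines. The 0.92 threshold and the merge policy (keep the higher importance) are illustrative assumptions, not Memord's actual values:

```javascript
// Illustrative semantic deduplication: if a new memory's embedding is
// within a cosine-similarity threshold of an existing one, merge
// instead of inserting a duplicate.
const DEDUP_THRESHOLD = 0.92; // assumed threshold

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function upsert(store, memory) {
  const dup = store.find(m => cosine(m.embedding, memory.embedding) >= DEDUP_THRESHOLD);
  if (dup) {
    // Merge: keep the higher importance instead of adding a duplicate
    dup.importance = Math.max(dup.importance, memory.importance);
    return dup;
  }
  store.push(memory);
  return memory;
}

const store = [{ embedding: [1, 0, 0], importance: 0.5 }];
upsert(store, { embedding: [0.99, 0.05, 0], importance: 0.9 }); // near-duplicate
console.log(store.length, store[0].importance); // → 1 0.9
```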
Why Memord Exists
AI tools should not have isolated memory. Memory should be infrastructure.
Just as Git is infrastructure for code and Docker is infrastructure for environments, Memord is infrastructure for AI memory.
Right now, every AI tool reinvents the wheel. They have private, incompatible, cloud-dependent memory systems. Or none at all.
Memord takes a different approach:
- Open — any tool can integrate via MCP or HTTP
- Local — your data never leaves your machine
- Shared — one store, all tools, persistent context
This is what a memory layer for the AI-native development stack should look like.
Design Principles
Local-first — Your memories are yours. Everything runs on your machine. No API keys, no subscriptions, no data leaving your laptop.

Tool-agnostic — Memord does not care which AI tool you use. MCP and HTTP mean any tool can integrate in minutes.

Fast retrieval — Hybrid search (vector + BM25 + recency) with MMR reranking. Sub-10ms on SQLite. No round-trips to external services.

Privacy-first — SQLite file with chmod 600. Localhost-only HTTP. No telemetry. No analytics.

Automatic quality — Semantic deduplication, importance thresholds, and auto-tagging keep the memory store clean without user maintenance.
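The MMR (maximal marginal relevance) reranking mentioned under "Fast retrieval" trades relevance off against redundancy: each pick must be relevant to the query but dissimilar to what was already selected. A self-contained sketch, where the λ trade-off and the toy similarity function are illustrative assumptions:

```javascript
// Illustrative MMR reranking: greedily select items that score high on
// relevance but low on similarity to already-selected items.
const LAMBDA = 0.7; // assumed relevance/diversity trade-off

function mmr(candidates, sim, k) {
  const selected = [];
  const pool = [...candidates];
  while (selected.length < k && pool.length > 0) {
    let best = 0, bestScore = -Infinity;
    for (let i = 0; i < pool.length; i++) {
      // Redundancy = similarity to the closest already-selected item
      const redundancy = selected.length
        ? Math.max(...selected.map(s => sim(pool[i], s)))
        : 0;
      const score = LAMBDA * pool[i].relevance - (1 - LAMBDA) * redundancy;
      if (score > bestScore) { bestScore = score; best = i; }
    }
    selected.push(pool.splice(best, 1)[0]);
  }
  return selected;
}

// Two near-duplicate memories and one distinct one: MMR keeps diversity
const sim = (a, b) => (a.topic === b.topic ? 0.95 : 0.1);
const picked = mmr(
  [
    { id: "zod-1", topic: "validation", relevance: 0.9 },
    { id: "zod-2", topic: "validation", relevance: 0.88 },
    { id: "pnpm", topic: "tooling", relevance: 0.6 },
  ],
  sim,
  2
);
console.log(picked.map(m => m.id)); // → ["zod-1", "pnpm"]
```

Without the redundancy penalty, a top-5 recall could be five rephrasings of the same fact; MMR is what keeps a small context window diverse.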
Roadmap
- [ ] Team memory — share a memory store across a dev team via a self-hosted sync layer
- [ ] Memory compression — periodically cluster and summarize old episodic memories
- [ ] Plugin API — standardized SDK for tool integrations
- [ ] Web UI improvements — visual memory graph, manual editing
- [ ] Named entity extraction — index people, projects, technologies as first-class entities
- [ ] Multi-user support — per-user isolation within a shared daemon
- [ ] VS Code extension — native GUI for browsing and managing memories
Contributing
Contributions are welcome. See CONTRIBUTING.md for how to get started.
The most valuable contributions right now:
- New tool integrations (open an issue first)
- Retrieval quality improvements
- Dashboard improvements
License
MIT © Joel van den Hoeven
See LICENSE.
