@coreidentitylabs/open-graph-memory-mcp (v1.0.3)
# Open-Memory MCP Server

Graph-based agent memory for AI coding assistants — extends context windows by storing entities, relationships, and decisions in a persistent knowledge graph.
## Problem
AI coding assistants (Google Antigravity, VS Code GitHub Copilot) lose context as conversations grow. Developers repeatedly re-explain code architecture and past decisions. Open-Memory solves this by giving your AI a persistent, structured memory.
## How It Works

### Agent-Driven Flow (no API key needed)

```
You chat with your AI assistant
  ↓ Agent extracts entities/decisions from the conversation
  ↓ Calls memory_add_entities / memory_add_relations
  ↓ Entities stored in the knowledge graph (JSON or Neo4j)
  ↓ Before the next task, agent calls memory_search or memory_get_context
  ↓ Before complex tasks, agent calls memory_deep_analyze for rich multi-pass context
  ↓ Relevant historical context injected into the prompt
  = AI remembers your project across sessions
```

### Server-Side Encoding Flow (optional, requires LLM API key)

```
You pass raw text to memory_encode_text
  ↓ Server-side LLM extracts entities + relationships automatically
  ↓ Entity resolution against the existing graph (dedup)
  ↓ LLM-quality embeddings generated
  ↓ Nodes + edges stored
  = Fully automated — no manual entity extraction needed
```

## Installation & Usage
You can run Open-Memory directly without manual installation using npx. This is the recommended way for both VS Code and Claude Desktop.
### 1. VS Code (via MCP Extension)

- Install the MCP extension for VS Code.
- Open the extension settings or click the MCP icon in the status bar.
- Click "Add MCP Server" and enter:
  - Command: `npx`
  - Arguments: `-y @coreidentitylabs/open-graph-memory-mcp` (or `-y github:YOUR_USERNAME/open-memory` if the package is not on npm yet)
### 2. Claude Desktop / Antigravity Desktop

Add the following to your MCP configuration file (e.g., `%APPDATA%\Claude\claude_desktop_config.json`, or `config.json` for Antigravity):

```json
{
  "mcpServers": {
    "open-memory": {
      "command": "npx",
      "args": ["-y", "@coreidentitylabs/open-graph-memory-mcp"],
      "env": {
        "STORAGE_BACKEND": "json",
        "MEMORY_STORE_PATH": "C:/path/to/your/memory.json"
      }
    }
  }
}
```

### 3. Manual Development Setup
If you want to contribute or run from source:

```bash
# Clone and install dependencies
git clone https://github.com/YOUR_USERNAME/open-memory.git
cd open-memory
npm install

# Build
npm run build

# Run (stdio mode)
node dist/index.js
```

## Environment Variables
```bash
# Storage backend: "json" (default) or "neo4j"
STORAGE_BACKEND=json
MEMORY_STORE_PATH=./memory.json

# Neo4j (only if STORAGE_BACKEND=neo4j)
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# Optional: server-side encoding (memory_encode_text tool)
# Works with OpenAI, Azure OpenAI, Ollama (OpenAI-compatible), etc.
# LLM_API_KEY=sk-...
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_CHAT_MODEL=gpt-4o-mini
# LLM_EMBEDDING_MODEL=text-embedding-3-small
```

## MCP Client Configuration
Add to your MCP config (e.g. `mcp_config.json`):

```json
{
  "mcpServers": {
    "open-memory": {
      "command": "node",
      "args": ["d:/Projects/open-memory/dist/index.js"],
      "env": {
        "STORAGE_BACKEND": "json",
        "MEMORY_STORE_PATH": "./memory.json"
      }
    }
  }
}
```

## Real-World Applications & Use Cases
Open-Memory is designed to solve specific challenges in complex, distributed, and fast-moving development environments:
### 1. Cross-Repository Microservices
In microservice architectures, knowledge is often fragmented across multiple repositories. Open-Memory acts as a decentralized knowledge bridge, allowing AI agents to:
- Track Cross-Service Dependencies: Remember how service A interacts with service B via specific API contracts or message schemas.
- Maintain Architectural Consistency: Store system-wide design patterns and security standards that apply to all repositories.
- Unified Onboarding: Help new developers (and AI agents) understand the "big picture" by querying a persistent graph of how various services fit together.
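Cross-service knowledge like this enters the graph through the write tools. The README does not spell out the tools' argument schemas, so the payload shapes below are a hypothetical sketch of what an agent might pass to `memory_add_entities` and `memory_add_relations` to record an event dependency between two services; every field name here is an assumption.

```typescript
// Hypothetical payloads -- field names are illustrative, not the tools'
// documented schema.
const addEntities = {
  entities: [
    { name: "billing-service", type: "service", observations: ["Owns invoice generation"] },
    { name: "orders-service", type: "service", observations: ["Publishes OrderPlaced events"] },
    { name: "OrderPlaced schema v2", type: "contract", observations: ["Adds a currency field"] },
  ],
};

const addRelations = {
  relations: [
    { from: "billing-service", to: "orders-service", relationType: "consumes_events_from" },
    { from: "orders-service", to: "OrderPlaced schema v2", relationType: "publishes" },
  ],
};

// An MCP client would send these objects as the tools' call arguments:
//   { name: "memory_add_entities", arguments: addEntities }
//   { name: "memory_add_relations", arguments: addRelations }
console.log(addEntities.entities.length, addRelations.relations.length);
```

Once stored, an agent working in the billing repository can retrieve the `orders-service` contract without that repository ever being open in the editor.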
### 2. Guiding AI Agents with Full Context
Traditional context windows are transient. By using a persistent memory, you can "train" your AI agents over time:
- Persistent Logic & Decisions: Store the "why" behind past architectural choices so the AI doesn't suggest reverting them in future sessions.
- Project-Specific Knowledge: Maintain a record of non-obvious business logic, domain-specific terminology, and custom internal tools.
- Contextual Recall: When switching between tasks, the agent can call `memory_get_context` to instantly remember the relevant parts of the system it worked on last.
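As a sketch of that recall pattern (the search parameters and result fields are assumptions, since the README does not define the schemas), resuming a task amounts to one search call whose hits are formatted into a context block and prepended to the next prompt:

```typescript
// Hypothetical shapes for a memory_search request and its hits.
interface SearchHit { name: string; type: string; score: number; snippet: string; }

const searchRequest = {
  name: "memory_search",
  arguments: { query: "auth token refresh flow", limit: 5 }, // assumed parameter names
};

// Pretend results recalled from a previous session:
const hits: SearchHit[] = [
  { name: "JWT refresh decision", type: "decision", score: 0.91, snippet: "Rotate refresh tokens on every use" },
  { name: "auth-service", type: "service", score: 0.78, snippet: "Issues access and refresh tokens" },
];

// Format hits into a context block the agent injects before the task.
function toContextBlock(results: SearchHit[]): string {
  return results.map(h => `- [${h.type}] ${h.name}: ${h.snippet}`).join("\n");
}

console.log(searchRequest.name, "->");
console.log(toContextBlock(hits));
```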
### 3. Rapidly Changing Codebases
When code is evolving fast, documentation often lags behind. Open-Memory helps keep pace by:
- Tracking In-Flight Changes: Store ongoing refactorings and temporary architectural shifts that haven't been finalized yet.
- Delta Memory: Use `memory_encode_text` to capture technical debt, "TODOs" discussed in chat, and emerging patterns before they are formally documented.
- Change History: Query how a specific module's purpose or implementation has evolved over several iterations based on previous developer-AI interactions.
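For the delta-memory case, the input is just raw prose. The request below is a hypothetical sketch (the argument name `text` is an assumption) of handing an in-flight refactoring note to `memory_encode_text`, whose server-side LLM then extracts the entities and relations with no manual tagging:

```typescript
// Hypothetical memory_encode_text arguments -- the real schema may differ.
const encodeRequest = {
  name: "memory_encode_text",
  arguments: {
    text: [
      "We are mid-refactor: PaymentGateway is being split into",
      "PaymentAuth and PaymentCapture. TODO: retire the legacy",
      "/charge endpoint once capture traffic is migrated.",
    ].join(" "),
  },
};

// The server-side pipeline would extract entities (PaymentGateway,
// PaymentAuth, PaymentCapture) and relations (e.g. split_into), dedupe
// them against the existing graph, and store nodes plus edges.
console.log(encodeRequest.arguments.text.length, "chars queued for encoding");
```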
### 4. Deep Analysis Before High-Stakes Decisions

Before architectural changes, major refactors, or complex feature work, agents can call `memory_deep_analyze` to get a rich structured report on any topic:
- Key Entities by Centrality: Surfaces the most connected concepts in the graph relevant to your topic
- Cluster Detection: Reveals hidden communities of related decisions, patterns, and code structures
- Temporal Reasoning: Shows how a technology, decision, or pattern evolved over calendar quarters
- Contradiction Detection: Automatically flags conflicting facts stored across different sessions
- Suggested Next Steps: Actionable pointers for deeper investigation before you commit to a direction
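The five passes listed above suggest a report roughly shaped like the TypeScript interface below. This is a hypothetical sketch only, mirroring the pass names; the actual output shape of `memory_deep_analyze` is not documented in this README and all field names are assumptions.

```typescript
// Hypothetical shape of a memory_deep_analyze report; every field name
// is an assumption derived from the passes described above.
interface DeepAnalysisReport {
  topic: string;
  keyEntities: { name: string; centrality: number }[];  // most-connected concepts
  clusters: { label: string; members: string[] }[];     // communities of related decisions
  timeline: { quarter: string; summary: string }[];     // evolution over calendar quarters
  contradictions: { factA: string; factB: string }[];   // conflicting stored facts
  nextSteps: string[];                                  // pointers for deeper investigation
}

const report: DeepAnalysisReport = {
  topic: "database migration",
  keyEntities: [{ name: "postgres", centrality: 0.82 }],
  clusters: [{ label: "schema decisions", members: ["postgres", "migration v3"] }],
  timeline: [{ quarter: "2025-Q4", summary: "Chose Postgres over MySQL" }],
  contradictions: [{ factA: "UUID primary keys everywhere", factB: "orders table uses bigint ids" }],
  nextSteps: ["Re-check the orders id strategy before committing to the refactor"],
};

console.log(report.contradictions.length, "contradiction(s) to resolve first");
```

A non-empty `contradictions` list is exactly the signal you want before a high-stakes change: it surfaces sessions that recorded incompatible facts.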
## Tools

### Write
| Tool | Description | LLM Required |
| -------------------------- | ------------------------------------------------------------------ | :----------: |
| memory_add_entities | Store entities (people, tools, concepts, code patterns, decisions) | ❌ |
| memory_add_relations | Store relationships between entities | ❌ |
| memory_save_conversation | Save conversation snapshots for history | ❌ |
| memory_encode_text | Auto-extract entities & relations from raw text via LLM | ✅ |
### Read
| Tool | Description |
| ---------------------- | ------------------------------------------ |
| memory_search | Hybrid semantic + graph search |
| memory_get_entity | Get entity details with relationships |
| memory_list_entities | List entities with filtering/pagination |
| memory_get_relations | Get relationships for an entity |
| memory_get_context | Get formatted context for prompt injection |
### Analysis
| Tool | Description | LLM Required |
| --------------------- | ------------------------------------------------------------------------------- | :----------: |
| memory_deep_analyze | Multi-pass deep analysis: centrality, clusters, temporal trends, contradictions | ❌ |
### Management
| Tool | Description |
| ---------------------- | ------------------------------------------------ |
| memory_delete_entity | Remove entity and its edges |
| memory_consolidate | Merge duplicates, prune stale nodes, infer edges |
| memory_status | Graph health stats |
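All of these tools are reached through MCP's standard JSON-RPC `tools/call` request. As a protocol-level illustration, the snippet below builds the wire message for `memory_status`; the empty `arguments` object is an assumption (check the server's tool listing at runtime for the real input schema).

```typescript
// An MCP tools/call request is a JSON-RPC 2.0 message. memory_status is
// assumed here to take no arguments -- verify against the live tool list.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "memory_status", arguments: {} },
};

// Over the stdio transport this is serialized as one line of JSON.
const wire = JSON.stringify(request);
console.log(wire);
```

MCP clients like the VS Code extension or Claude Desktop construct these messages for you; the sketch only shows what crosses the stdio boundary.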
## Architecture

```
src/
├── index.ts               # Entry point (stdio transport)
├── types.ts               # Core type definitions
├── constants.ts           # Configuration constants
├── storage/
│   ├── json-store.ts      # Local JSON file backend
│   ├── neo4j-store.ts     # Neo4j graph database backend
│   └── factory.ts         # Storage backend factory
├── encoding/
│   ├── embedder.ts        # Offline n-gram embeddings
│   └── pipeline.ts        # Server-side encoding pipeline
├── llm/
│   ├── provider.ts        # LLM provider factory
│   ├── openai-provider.ts # OpenAI-compatible provider
│   └── prompts.ts         # Extraction prompts
├── retrieval/
│   └── search.ts          # Hybrid search engine
├── analysis/
│   └── deep-analyzer.ts   # Multi-pass deep analysis engine
├── evolution/
│   └── consolidator.ts    # Memory consolidation
├── tools/
│   └── memory-tools.ts    # MCP tool definitions
└── resources/
    └── context-resource.ts # MCP resources
```

- Storage: Pluggable backend — JSON file (zero-config) or Neo4j (production)
- Embeddings: Offline n-gram hashing by default, LLM embeddings when configured
- Retrieval: Hybrid text + semantic + graph traversal with weighted scoring
- Evolution: Duplicate merging, transitive edge inference, stale node pruning
- Encoding: Optional server-side LLM pipeline (OpenAI, Ollama, Azure, etc.)
- Transport: stdio (standard for IDE integrations)
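The hybrid retrieval bullet can be illustrated with a toy weighted scorer. The weights and signal names below are assumptions for the sketch, not the values used in `src/retrieval/search.ts`:

```typescript
// Toy hybrid scorer: combine text-match, embedding-similarity, and
// graph-proximity signals with fixed weights (all values illustrative).
interface Signals { text: number; semantic: number; graph: number } // each in [0, 1]

const WEIGHTS = { text: 0.3, semantic: 0.5, graph: 0.2 }; // assumed; sums to 1

function hybridScore(s: Signals): number {
  return WEIGHTS.text * s.text + WEIGHTS.semantic * s.semantic + WEIGHTS.graph * s.graph;
}

// An exact keyword hit that is also semantically close and graph-adjacent
// outranks a purely semantic match:
const exactHit = hybridScore({ text: 1.0, semantic: 0.8, graph: 0.9 });
const fuzzyHit = hybridScore({ text: 0.0, semantic: 0.9, graph: 0.1 });
console.log(exactHit > fuzzyHit); // true
```

The design point is that no single signal dominates: keyword matches catch exact identifiers, embeddings catch paraphrases, and graph traversal catches entities related to the query even when their text never mentions it.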
## References & Inspiration

### Core Research and Surveys
- Yang, C., et al. (2026). Graph-based Agent Memory: Taxonomy, Techniques, and Applications. This comprehensive survey provides the foundational taxonomy for structured topological models of experience, covering the memory lifecycle of extraction, storage, retrieval, and evolution.
- Yusuke, S. (2026). Graph-Based Agent Memory: A Complete Guide to Structure, Retrieval, and Evolution. A detailed synthesis of design patterns for AI agents to solve context window limitations using graph structures.
### Architecture and Implementation Guides
- Lyon, W. (GraphGeeks). Building Intelligent Memory: Graph Databases for AI Agent Context and Retrieval. A practical implementation guide focusing on Neo4j, Dgraph, and the Model Context Protocol (MCP) to perform context engineering and solve "AI amnesia".
- MAGMA Architecture. Memory-Augmented Graph-based Multi-Agent Architecture. This source details the four-layer graph model (Semantic, Temporal, Causal, and Entity) used for long-horizon task reasoning and dual-stream memory evolution.
### Key Protocols
- Model Context Protocol (MCP): The standardized protocol used in this repository to expose graph memory search and storage tools to agents in VS Code and Google Antigravity.
**Implementation Note:** This project, `open-graph-memory-mcp`, is an implementation of the Model Context Protocol (MCP) specifically designed to realize the Temporal and Knowledge Memory structures proposed in the research cited above.
## License
MIT
