
@coreidentitylabs/open-graph-memory-mcp · v1.0.3 · 261 downloads

Graph-based agent memory MCP server — extends AI coding assistant memory with persistent knowledge graphs

Open-Memory MCP Server

Graph-based agent memory for AI coding assistants — extends context windows by storing entities, relationships, and decisions in a persistent knowledge graph.

Problem

AI coding assistants (e.g., Google Antigravity, GitHub Copilot in VS Code) lose context as conversations grow, so developers repeatedly re-explain code architecture and past decisions. Open-Memory solves this by giving your AI assistant a persistent, structured memory.

How It Works

Agent-Driven Flow (no API key needed)

You chat with your AI assistant
  ↓ Agent extracts entities/decisions from conversation
  ↓ Calls memory_add_entities / memory_add_relations
  ↓ Entities stored in knowledge graph (JSON or Neo4j)
  ↓ Before next task, agent calls memory_search or memory_get_context
  ↓ Before complex tasks, agent calls memory_deep_analyze for rich multi-pass context
  ↓ Relevant historical context injected into prompt
  = AI remembers your project across sessions
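
The entity and relation payloads in the flow above might look like the following sketch. The field names (`name`, `type`, `observations`, `relationType`) are illustrative assumptions — the actual tool schemas are not shown in this README:

```typescript
// Hypothetical payload shapes for memory_add_entities / memory_add_relations.
// Field names are assumptions for illustration, not the server's real schema.
interface Entity {
  name: string;
  type: string;          // e.g. "service", "decision", "pattern"
  observations: string[];
}

interface Relation {
  from: string;
  to: string;
  relationType: string;  // e.g. "depends_on", "implements", "supersedes"
}

// What an agent might extract from a conversation and store:
const entities: Entity[] = [
  { name: "auth-service", type: "service", observations: ["Issues JWTs", "Written in Go"] },
  { name: "use-jwt-not-sessions", type: "decision", observations: ["Chosen for statelessness"] },
];

const relations: Relation[] = [
  { from: "auth-service", to: "use-jwt-not-sessions", relationType: "implements" },
];

console.log(entities.length, relations.length);
```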

Server-Side Encoding Flow (optional, requires LLM API key)

You pass raw text to memory_encode_text
  ↓ Server-side LLM extracts entities + relationships automatically
  ↓ Entity resolution against existing graph (dedup)
  ↓ LLM-quality embeddings generated
  ↓ Nodes + edges stored
  = Fully automated — no manual entity extraction needed
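
A call into the encoding flow above might carry raw prose like this sketch. The exact argument schema for memory_encode_text is not documented here, so the shape is an assumption:

```typescript
// Illustrative sketch of a memory_encode_text request — the real argument
// schema is not shown in this README, so this shape is an assumption.
const encodeRequest = {
  tool: "memory_encode_text",
  arguments: {
    text:
      "We migrated the billing service from REST to gRPC last sprint " +
      "because p99 latency exceeded our 200ms budget.",
  },
};

// Server-side, the configured LLM would extract entities (e.g. a billing
// service and a migration decision), deduplicate them against the existing
// graph, generate embeddings, and store the resulting nodes and edges.
console.log(encodeRequest.tool);
```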

Installation & Usage

You can run Open-Memory directly with npx, with no manual installation. This is the recommended approach for both VS Code and Claude Desktop.

1. VS Code (via MCP Extension)

  1. Install the MCP extension for VS Code.
  2. Open the extension settings or click the MCP icon in the status bar.
  3. Click "Add MCP Server" and enter:
    • Command: npx
    • Arguments: -y @coreidentitylabs/open-graph-memory-mcp (or npx -y github:YOUR_USERNAME/open-memory if not on NPM yet)

2. Claude Desktop / Antigravity Desktop

Add the following to your MCP configuration file (e.g., %APPDATA%\Claude\claude_desktop_config.json or config.json for Antigravity):

{
  "mcpServers": {
    "open-memory": {
      "command": "npx",
      "args": ["-y", "@coreidentitylabs/open-graph-memory-mcp"],
      "env": {
        "STORAGE_BACKEND": "json",
        "MEMORY_STORE_PATH": "C:/path/to/your/memory.json"
      }
    }
  }
}

3. Manual Development Setup

If you want to contribute or run from source:

# Clone and install dependencies
git clone https://github.com/YOUR_USERNAME/open-memory.git
cd open-memory
npm install

# Build
npm run build

# Run (stdio mode)
node dist/index.js

Environment Variables

# Storage backend: "json" (default) or "neo4j"
STORAGE_BACKEND=json
MEMORY_STORE_PATH=./memory.json

# Neo4j (only if STORAGE_BACKEND=neo4j)
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password

# Optional: Server-side encoding (memory_encode_text tool)
# Works with OpenAI, Azure OpenAI, Ollama (OpenAI-compat), etc.
# LLM_API_KEY=sk-...
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_CHAT_MODEL=gpt-4o-mini
# LLM_EMBEDDING_MODEL=text-embedding-3-small
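
The backend selection above can be sketched as a small resolver. The real factory lives in src/storage/factory.ts; this is an assumption-based illustration, not the actual implementation:

```typescript
// Minimal sketch of STORAGE_BACKEND resolution, defaulting to "json" as the
// README describes. Not the real factory — an illustration only.
type Backend = "json" | "neo4j";

function resolveBackend(env: Record<string, string | undefined>): Backend {
  const value = (env.STORAGE_BACKEND ?? "json").toLowerCase();
  if (value !== "json" && value !== "neo4j") {
    throw new Error(`Unknown STORAGE_BACKEND: ${value}`);
  }
  return value;
}

console.log(resolveBackend({ STORAGE_BACKEND: "neo4j" })); // "neo4j"
console.log(resolveBackend({}));                           // "json"
```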

MCP Client Configuration

Add to your MCP config (e.g. mcp_config.json):

{
  "mcpServers": {
    "open-memory": {
      "command": "node",
      "args": ["d:/Projects/open-memory/dist/index.js"],
      "env": {
        "STORAGE_BACKEND": "json",
        "MEMORY_STORE_PATH": "./memory.json"
      }
    }
  }
}

Real-World Applications & Use Cases

Open-Memory is designed to solve specific challenges in complex, distributed, and fast-moving development environments:

1. Cross-Repository Microservices

In microservice architectures, knowledge is often fragmented across multiple repositories. Open-Memory acts as a decentralized knowledge bridge, allowing AI agents to:

  • Track Cross-Service Dependencies: Remember how service A interacts with service B via specific API contracts or message schemas.
  • Maintain Architectural Consistency: Store system-wide design patterns and security standards that apply to all repositories.
  • Unified Onboarding: Help new developers (and AI agents) understand the "big picture" by querying a persistent graph of how various services fit together.

2. Guiding AI Agents with Full Context

Traditional context windows are transient. By using a persistent memory, you can "train" your AI agents over time:

  • Persistent Logic & Decisions: Store the "why" behind past architectural choices so the AI doesn't suggest reverting them in future sessions.
  • Project-Specific Knowledge: Maintain a record of non-obvious business logic, domain-specific terminology, and custom internal tools.
  • Contextual Recall: When switching between tasks, the agent can call memory_get_context to instantly remember the relevant parts of the system it worked on last.

3. Rapidly Changing Codebases

When code is evolving fast, documentation often lags behind. Open-Memory helps keep pace by:

  • Tracking In-Flight Changes: Store ongoing refactorings and temporary architectural shifts that haven't been finalized yet.
  • Delta Memory: Use memory_encode_text to capture technical debt, "TODOs" discussed in chat, and emerging patterns before they are formally documented.
  • Change History: Query how a specific module's purpose or implementation has evolved over several iterations based on previous developer-AI interactions.

4. Deep Analysis Before High-Stakes Decisions

Before architectural changes, major refactors, or complex feature work, agents can call memory_deep_analyze to get a rich structured report on any topic:

  • Key Entities by Centrality: Surfaces the most connected concepts in the graph relevant to your topic
  • Cluster Detection: Reveals hidden communities of related decisions, patterns, and code structures
  • Temporal Reasoning: Shows how a technology, decision, or pattern evolved over calendar quarters
  • Contradiction Detection: Automatically flags conflicting facts stored across different sessions
  • Suggested Next Steps: Actionable pointers for deeper investigation before you commit to a direction
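
To make "key entities by centrality" concrete, here is a degree-centrality sketch of the kind of ranking such an analysis might surface. The server's actual algorithm is not specified in this README, so this is an illustration only:

```typescript
// Degree centrality: rank entities by how many edges touch them.
// An illustrative sketch, not the server's real analysis implementation.
type Edge = { from: string; to: string };

function degreeCentrality(edges: Edge[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const { from, to } of edges) {
    counts.set(from, (counts.get(from) ?? 0) + 1);
    counts.set(to, (counts.get(to) ?? 0) + 1);
  }
  return counts;
}

const edges: Edge[] = [
  { from: "auth-service", to: "jwt-decision" },
  { from: "api-gateway", to: "auth-service" },
  { from: "auth-service", to: "user-db" },
];

// auth-service touches 3 edges, so it ranks first.
const ranked = [...degreeCentrality(edges).entries()].sort((a, b) => b[1] - a[1]);
console.log(ranked[0]); // ["auth-service", 3]
```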

Tools

Write

| Tool | Description | LLM Required |
| ------------------------ | ------------------------------------------------------------------ | :----------: |
| memory_add_entities | Store entities (people, tools, concepts, code patterns, decisions) | ❌ |
| memory_add_relations | Store relationships between entities | ❌ |
| memory_save_conversation | Save conversation snapshots for history | ❌ |
| memory_encode_text | Auto-extract entities & relations from raw text via LLM | ✅ |

Read

| Tool | Description |
| -------------------- | ------------------------------------------ |
| memory_search | Hybrid semantic + graph search |
| memory_get_entity | Get entity details with relationships |
| memory_list_entities | List entities with filtering/pagination |
| memory_get_relations | Get relationships for an entity |
| memory_get_context | Get formatted context for prompt injection |

Analysis

| Tool | Description | LLM Required |
| ------------------- | ------------------------------------------------------------------------------- | :----------: |
| memory_deep_analyze | Multi-pass deep analysis: centrality, clusters, temporal trends, contradictions | ❌ |

Management

| Tool | Description |
| -------------------- | ------------------------------------------------ |
| memory_delete_entity | Remove entity and its edges |
| memory_consolidate | Merge duplicates, prune stale nodes, infer edges |
| memory_status | Graph health stats |

Architecture

src/
├── index.ts                  # Entry point (stdio transport)
├── types.ts                  # Core type definitions
├── constants.ts              # Configuration constants
├── storage/
│   ├── json-store.ts         # Local JSON file backend
│   ├── neo4j-store.ts        # Neo4j graph database backend
│   └── factory.ts            # Storage backend factory
├── encoding/
│   ├── embedder.ts           # Offline n-gram embeddings
│   └── pipeline.ts           # Server-side encoding pipeline
├── llm/
│   ├── provider.ts           # LLM provider factory
│   ├── openai-provider.ts    # OpenAI-compatible provider
│   └── prompts.ts            # Extraction prompts
├── retrieval/
│   └── search.ts             # Hybrid search engine
├── analysis/
│   └── deep-analyzer.ts      # Multi-pass deep analysis engine
├── evolution/
│   └── consolidator.ts       # Memory consolidation
├── tools/
│   └── memory-tools.ts       # MCP tool definitions
└── resources/
    └── context-resource.ts   # MCP resources

  • Storage: Pluggable backend — JSON file (zero-config) or Neo4j (production)
  • Embeddings: Offline n-gram hashing by default, LLM embeddings when configured
  • Retrieval: Hybrid text + semantic + graph traversal with weighted scoring
  • Evolution: Duplicate merging, transitive edge inference, stale node pruning
  • Encoding: Optional server-side LLM pipeline (OpenAI, Ollama, Azure, etc.)
  • Transport: stdio (standard for IDE integrations)
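
The hybrid weighted scoring mentioned above can be sketched as a simple linear combination. The weights and signal names here are illustrative assumptions, not the values used in src/retrieval/search.ts:

```typescript
// Hedged sketch of hybrid retrieval scoring: combine text-match, semantic
// (embedding) similarity, and graph-traversal signals with fixed weights.
// The weights below are made up for illustration.
interface Signals {
  text: number;     // lexical match score in [0, 1]
  semantic: number; // embedding cosine similarity in [0, 1]
  graph: number;    // graph-proximity score in [0, 1]
}

const WEIGHTS = { text: 0.3, semantic: 0.5, graph: 0.2 };

function hybridScore(s: Signals): number {
  return s.text * WEIGHTS.text + s.semantic * WEIGHTS.semantic + s.graph * WEIGHTS.graph;
}

console.log(hybridScore({ text: 1, semantic: 0.8, graph: 0.5 })); // ≈ 0.8
```

Weighting the semantic signal highest means conceptually related entities can surface even when the query shares no exact keywords with them, while the graph term boosts neighbors of already-relevant nodes.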

References & Inspiration

Core Research and Surveys

  • Yang, C., et al. (2026). Graph-based Agent Memory: Taxonomy, Techniques, and Applications. This comprehensive survey provides the foundational taxonomy for structured topological models of experience, covering the memory lifecycle of extraction, storage, retrieval, and evolution.
  • Yusuke, S. (2026). Graph-Based Agent Memory: A Complete Guide to Structure, Retrieval, and Evolution. A detailed synthesis of design patterns for AI agents to solve context window limitations using graph structures.

Architecture and Implementation Guides

  • Lyon, W. (GraphGeeks). Building Intelligent Memory: Graph Databases for AI Agent Context and Retrieval. A practical implementation guide focusing on Neo4j, Dgraph, and the Model Context Protocol (MCP) to perform context engineering and solve "AI amnesia".
  • MAGMA Architecture. Memory-Augmented Graph-based Multi-Agent Architecture. This source details the four-layer graph model (Semantic, Temporal, Causal, and Entity) used for long-horizon task reasoning and dual-stream memory evolution.

Key Protocols

  • Model Context Protocol (MCP): The standardized protocol used in this repository to expose graph memory search and storage tools to agents in VS Code and Google Antigravity.

Implementation Note: This project, open-graph-memory-mcp, is an implementation of the Model Context Protocol (MCP) specifically designed to realize the Temporal and Knowledge Memory structures proposed in the research cited above.

License

MIT