Better Memory MCP Server
An enhanced knowledge graph memory system for Claude with temporal tracking, confidence scores, entity archiving, semantic search, and advanced management capabilities. This server enables Claude to maintain persistent memory across conversations with improved organization and data quality features.
Original implementation by Anthropic, PBC
Enhanced by @sockeye44 and Claude Opus
Enhanced Features
This enhanced version includes several improvements over the original memory server:
- Semantic Search: Advanced neural search using ModernColBERT embeddings for understanding meaning and context
- Temporal Tracking: All observations and entities now include timestamps for when they were created
- Confidence Scores: Observations can have confidence scores (0-1) to indicate certainty levels
- Entity Archiving: Soft-delete functionality allows entities to be archived rather than permanently deleted
- Entity Merging: Consolidate duplicate entities while preserving all observations and relations
- Recent Changes View: Query recent activity within a specified time window
- Automatic Backfilling: Existing memories are automatically indexed for semantic search on first use
- Backward Compatibility: Seamlessly handles legacy data format while using enhanced features for new data
Core Concepts
Entities
Entities are the primary nodes in the knowledge graph. Each entity has:
- A unique name (identifier)
- An entity type (e.g., "person", "organization", "event")
- A list of observations
Example:
{
"name": "John_Smith",
"entityType": "person",
"observations": ["Speaks fluent Spanish"]
}

Relations
Relations define directed connections between entities. They are always stored in active voice and describe how entities interact or relate to each other.
Example:
{
"from": "John_Smith",
"to": "Anthropic",
"relationType": "works_at"
}

Observations
Observations are discrete pieces of information about an entity with enhanced metadata:
- Stored with timestamps and confidence scores
- Attached to specific entities
- Can be added or removed independently
- Should be atomic (one fact per observation)
Example:
{
"entityName": "John_Smith",
"observations": [
{
"content": "Speaks fluent Spanish",
"timestamp": 1734567890123,
"confidence": 0.95
},
{
"content": "Graduated in 2019",
"timestamp": 1734567890124,
"confidence": 1.0
}
]
}

API
Tools
create_entities
- Create multiple new entities in the knowledge graph
- Input:
  - entities (array of objects). Each object contains:
    - name (string): Entity identifier
    - entityType (string): Type classification
    - observations (string[]): Associated observations
- Ignores entities with existing names
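For illustration, a create_entities input using the fields documented above might look like this (the values are hypothetical and mirror the earlier examples):
{
  "entities": [
    {
      "name": "John_Smith",
      "entityType": "person",
      "observations": ["Speaks fluent Spanish"]
    }
  ]
}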
create_relations
- Create multiple new relations between entities
- Input:
  - relations (array of objects). Each object contains:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type in active voice
- Skips duplicate relations
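A sketch of a create_relations input (entity names are illustrative, taken from the relation example above):
{
  "relations": [
    {
      "from": "John_Smith",
      "to": "Anthropic",
      "relationType": "works_at"
    }
  ]
}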
add_observations
- Add new observations to existing entities with optional confidence scores
- Input:
  - observations (array of objects). Each object contains:
    - entityName (string): Target entity
    - contents (string[]): New observations to add
    - confidence (number[], optional): Confidence scores (0-1) for each observation
- Returns added observations per entity
- Fails if entity doesn't exist
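For example, an add_observations input with optional per-observation confidence scores could look like the following (values are illustrative):
{
  "observations": [
    {
      "entityName": "John_Smith",
      "contents": ["Speaks fluent Spanish", "Graduated in 2019"],
      "confidence": [0.95, 1.0]
    }
  ]
}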
delete_entities
- Remove entities and their relations
- Input:
  - entityNames (string[])
- Cascading deletion of associated relations
- Silent operation if entity doesn't exist
delete_observations
- Remove specific observations from entities
- Input:
  - deletions (array of objects). Each object contains:
    - entityName (string): Target entity
    - observations (string[]): Observations to remove
- Silent operation if observation doesn't exist
delete_relations
- Remove specific relations from the graph
- Input:
  - relations (array of objects). Each object contains:
    - from (string): Source entity name
    - to (string): Target entity name
    - relationType (string): Relationship type
- Silent operation if relation doesn't exist
read_graph
- Read the knowledge graph with controllable detail
- Input:
  - detailLevel (string, optional): "minimal", "summary", or "full" (default: "summary")
  - entityNames (string[], optional): Get full details for specific entities only
  - includeArchived (boolean, optional): Include archived entities (default: false)
- Returns graph structure based on detail level
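For illustration, a read_graph call requesting full details for a single entity while keeping archived entities hidden (the entity name is hypothetical):
{
  "detailLevel": "full",
  "entityNames": ["John_Smith"],
  "includeArchived": false
}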
search_nodes
- Search for nodes based on keyword matching
- Input:
  - query (string)
- Searches across:
  - Entity names
  - Entity types
  - Observation content
- Returns matching entities and their relations
semantic_search (NEW)
- Advanced semantic search using neural embeddings
- Input:
  - query (string): Natural language search query
  - k (number, optional): Number of results (default: 10)
  - threshold (number, optional): Minimum similarity score 0-1 (default: 0)
- Uses ModernColBERT model to understand meaning and context
- Returns entities ranked by semantic similarity
- Automatically falls back to keyword search if unavailable
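A sketch of a semantic_search input using the parameters listed above (the query text is illustrative, borrowed from the example queries later in this document):
{
  "query": "recent work on video analysis",
  "k": 5,
  "threshold": 0.3
}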
open_nodes
- Retrieve specific nodes by name
- Input:
  - names (string[])
- Returns:
  - Requested entities
  - Relations between requested entities
- Silently skips non-existent nodes
merge_entities
- Merge one entity into another, combining observations and updating relations
- Input:
  - sourceName (string): Entity to merge from (will be deleted)
  - targetName (string): Entity to merge into (will be preserved)
- Combines all unique observations and redirects all relations
- Returns success status and merge summary
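For example, merging a hypothetical duplicate entity into its canonical counterpart (both names are illustrative):
{
  "sourceName": "John_Smith_duplicate",
  "targetName": "John_Smith"
}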
archive_entity
- Archive an entity (soft delete - hidden from normal queries)
- Input:
  - entityName (string)
- Archived entities are excluded from standard read operations
- Can be unarchived later
unarchive_entity
- Restore a previously archived entity
- Input:
  - entityName (string)
- Makes the entity visible in normal queries again
get_recent_changes
- Get entities, relations, and observations created/modified within specified time
- Input:
  - hours (number, optional): Hours to look back (default: 24)
- Returns:
  - Recently created entities
  - Recently created relations
  - Entities with recent observations
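For example, to review the last two days of activity (the value is illustrative; the default window is 24 hours):
{
  "hours": 48
}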
Setup Instructions
Quick Setup (Recommended)
For the full experience including semantic search:
# Clone the repository
git clone https://github.com/sockeye44/better-memory-mcp
cd better-memory-mcp
# Run the setup script
./setup.sh

The setup script will:
- Check Python 3.8+ is installed
- Create a Python virtual environment
- Install all Python dependencies including PyTorch and ModernColBERT
- Pre-download the neural model for faster first use
- Install Node.js dependencies
- Build the TypeScript code
Manual Setup
If you prefer to set up manually or the script fails:
# Install Python dependencies
pip install -r requirements.txt
# Install Node.js dependencies
npm install
# Build TypeScript
npm run build

Note: Semantic search requires Python 3.8+ and PyTorch. If these are not available, the server will still work with keyword search only.
Usage with Claude Desktop
Setup
Important: For semantic search to work in Claude Desktop, you need to ensure Python is accessible. See MCP_INTEGRATION.md for detailed setup instructions.
Quick Setup for Claude Desktop
- Option A: Using absolute paths (most reliable):
{
"mcpServers": {
"memory": {
"command": "node",
"args": ["/absolute/path/to/better-memory-mcp/dist/index.js"],
"env": {
"MEMORY_FILE_PATH": "/Users/yourusername/.claude/memory.json",
"BETTER_MEMORY_DIR": "/absolute/path/to/better-memory-mcp",
"PATH": "/absolute/path/to/better-memory-mcp/venv/bin:$PATH"
}
}
}
}

- Option B: Standard config (if Python is in system PATH):
Add this to your claude_desktop_config.json:
Docker
{
"mcpServers": {
"memory": {
"command": "docker",
"args": ["run", "-i", "-v", "claude-memory:/app/dist", "--rm", "mcp/better-memory"]
}
}
}

NPX
{
"mcpServers": {
"memory": {
"command": "npx",
"args": [
"-y",
"@sockeye44/better-memory-mcp"
]
}
}
}

NPX with custom setting
The server can be configured using the following environment variables:
{
"mcpServers": {
"memory": {
"command": "npx",
"args": [
"-y",
"@sockeye44/better-memory-mcp"
],
"env": {
"MEMORY_FILE_PATH": "/path/to/custom/memory.json"
}
}
}
}

MEMORY_FILE_PATH: Path to the memory storage JSON file (default: memory.json in the server directory)
VS Code Installation Instructions
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open Settings (JSON).
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others.
Note that the mcp key is not needed in the .vscode/mcp.json file.
NPX
{
"mcp": {
"servers": {
"memory": {
"command": "npx",
"args": [
"-y",
"@sockeye44/better-memory-mcp"
]
}
}
}
}

Docker
{
"mcp": {
"servers": {
"memory": {
"command": "docker",
"args": [
"run",
"-i",
"-v",
"claude-memory:/app/dist",
"--rm",
"mcp/better-memory"
]
}
}
}
}

System Prompt
The prompt for utilizing memory depends on the use case. Changing the prompt will help the model determine the frequency and types of memories created.
Here is an example prompt for chat personalization. You could use this prompt in the "Custom Instructions" field of a Claude.ai Project.
Follow these steps for each interaction:
1. User Identification:
- You should assume that you are interacting with default_user
- If you have not identified default_user, proactively try to do so.
2. Memory Retrieval:
- Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph
- Always refer to your knowledge graph as your "memory"
3. Memory
- While conversing with the user, be attentive to any new information that falls into these categories:
a) Basic Identity (age, gender, location, job title, education level, etc.)
b) Behaviors (interests, habits, etc.)
c) Preferences (communication style, preferred language, etc.)
d) Goals (goals, targets, aspirations, etc.)
e) Relationships (personal and professional relationships up to 3 degrees of separation)
4. Memory Update:
- If any new information was gathered during the interaction, update your memory as follows:
a) Create entities for recurring organizations, people, and significant events
b) Connect them to the current entities using relations
c) Store facts about them as observations

Semantic Search
The semantic search feature uses the state-of-the-art ModernColBERT model from Hugging Face to provide intelligent, context-aware search capabilities:
How It Works
- Automatic Indexing: When the server starts, it automatically builds a semantic index of all your memories
- Neural Understanding: Search queries are understood based on meaning, not just keywords
- Smart Ranking: Results are ranked by semantic similarity, bringing the most relevant memories to the top
- Continuous Learning: New memories are automatically added to the semantic index
Example Queries
Semantic search understands context and meaning:
- "recent work on video analysis" - Finds memories about video-related projects
- "challenges with team collaboration" - Finds memories about teamwork issues
- "machine learning optimizations" - Finds memories about ML performance improvements
- "that project with the owl mascot" - Finds memories even with vague descriptions
Performance
- First-time model download: ~500MB (cached for future use)
- Index building: ~1-2 seconds per 1000 observations
- Search latency: <100ms for most queries
- Memory usage: ~1GB with model loaded
Fallback Behavior
If Python or the model are unavailable:
- The server continues to work normally
- Search automatically falls back to keyword matching
- All other features remain fully functional
Building
Docker:
docker build -t mcp/better-memory -f src/memory/Dockerfile .

License
This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
