@gravityzenai/memory
v0.3.0
Local memory substrate for AI agents: persistent recall, hybrid retrieval, governed evolution, and memory safety.
GravityMemory
Why GravityMemory?
Your AI agent does not need more memory. It needs governed memory.
Most memory systems for AI agents are cloud-dependent, store everything without discrimination, and have zero safety controls. GravityMemory solves four problems that no other system addresses together:
| Problem | How GravityMemory Solves It |
|---|---|
| Agents forget context between sessions | Persistent recall with FTS5 + vector search + knowledge graph |
| Memory fills up with garbage | Lifecycle management: active → stale → archived → protected |
| No defense against memory poisoning | Memory Guard blocks prompt injection and quarantines suspicious data |
| No control over agent evolution | Governance layer: approve, reject, freeze, rollback any change |
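The lifecycle column above can be read as a small state machine. A minimal TypeScript sketch — the four state names come from this README, but the transition rules and day thresholds are illustrative assumptions, not GravityMemory's actual implementation:

```typescript
// Lifecycle states from the table: active → stale → archived → protected.
type LifecycleState = "active" | "stale" | "archived" | "protected";

// Illustrative transition rules: the day thresholds are made up, and
// "protected" is treated as a pinned terminal state that never decays.
function nextLifecycleState(
  state: LifecycleState,
  daysSinceAccess: number
): LifecycleState {
  if (state === "protected") return "protected"; // pinned, never decays
  if (state === "active" && daysSinceAccess > 30) return "stale";
  if (state === "stale" && daysSinceAccess > 90) return "archived";
  return state;
}
```

In this sketch, decay only ever moves memories forward (active toward archived); promotion back to `active` or into `protected` would be an explicit governance action rather than a time-based transition.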
Features
| Feature | Description |
|---|---|
| 🧠 Persistent Memory | Observations, sessions, facts, skills, and experiences survive across sessions |
| 🔍 Hybrid Search | FTS5/BM25 + dense vector search + knowledge graph + temporal queries |
| 🛡️ Memory Guard | Detects and blocks prompt injection, memory poisoning, and contradictions |
| 📦 Context Compiler | Builds optimized context windows within token budgets |
| ⚖️ Governance | Approve/reject/freeze/rollback controls for all memory changes |
| 🧬 Evolution Engine | Tracks experience, skill effectiveness, and controlled adaptation |
| 🤖 Autonomous Brain Loop | Local LLM-powered cognitive maintenance (Ollama) — runs on its own |
| 🔒 100% Local | SQLite database, no cloud, no API keys, no telemetry |
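For intuition on the Hybrid Search row: reciprocal rank fusion (RRF) is one standard way to merge a BM25/FTS ranking with a vector-similarity ranking. Whether GravityMemory uses RRF is not stated here; `rrfFuse` and the constant `k = 60` are an illustrative sketch, not the library's actual fusion method:

```typescript
// Merge two rankings of document IDs by reciprocal rank fusion:
// each document scores 1 / (k + rank) in every ranking it appears in,
// so items ranked well by BOTH retrievers float to the top.
function rrfFuse(
  ftsRanking: string[],
  vectorRanking: string[],
  k = 60
): string[] {
  const scores = new Map<string, number>();
  for (const ranking of [ftsRanking, vectorRanking]) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return Array.from(scores.entries())
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A document that is mid-ranked by both full-text and vector search will typically beat one that tops a single ranking but is buried in the other — which is the point of hybrid retrieval.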
Quick Start
One-line install (recommended)
```shell
npx @gravityzenai/memory
```

That's it. GravityMemory starts as an MCP server, creates its database at ~/.gravitymemory/memory.db, and is ready to use.
From source
```shell
git clone https://github.com/GravityZenAI/GravityMemory.git
cd GravityMemory
npm install
npm run build
node dist/index.js
```

Agent Setup
Add GravityMemory to your AI agent's MCP configuration:
Claude Code
```shell
claude mcp add gravitymemory npx -y @gravityzenai/memory
```

Antigravity / Gemini CLI
Add to your MCP config file (mcp_config.json or mcp.json):
```json
{
  "mcpServers": {
    "gravitymemory": {
      "command": "npx",
      "args": ["-y", "@gravityzenai/memory"]
    }
  }
}
```

Codex / OpenCLAW / Other MCP Clients
Same format — just add the gravitymemory entry to your MCP server config.
Autonomous Brain Loop
GravityMemory includes an autonomous cognitive engine powered by local LLMs via Ollama.
What it does:
- Analyzes your memory every 6 hours (configurable)
- Detects stale observations, duplicate entries, and knowledge gaps
- Proposes consolidation, cleanup, and graph enrichment
- All reasoning is stored and auditable
Requirements:
- Ollama running locally
- A reasoning model (recommended: qwen3.5:9b or qwen3.5:14b)
Auto-detection: If Ollama is available, GravityMemory automatically detects the best model installed. It prefers models in the 9B–14B range for the optimal speed/quality tradeoff.
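A heuristic like the one described might look as follows. The 9B–14B preference comes from the text above; the tag parsing (e.g. reading `14` out of `qwen3.5:14b`) and the largest-first tie-break are guesses for illustration, not GravityMemory's real selection logic:

```typescript
// Pick a model from the tags Ollama reports as installed, preferring the
// 9B–14B range described above. Parsing the parameter count out of the
// tag suffix (":9b", ":14b", ...) assumes Ollama's usual naming scheme.
function pickModel(installed: string[]): string | undefined {
  const withSize = installed
    .map((name) => {
      const match = /:(\d+(?:\.\d+)?)b$/i.exec(name);
      return { name, sizeB: match ? parseFloat(match[1]) : NaN };
    })
    .filter((m) => !Number.isNaN(m.sizeB));
  const preferred = withSize.filter((m) => m.sizeB >= 9 && m.sizeB <= 14);
  // Fall back to any sized model if nothing lands in the sweet spot.
  const pool = preferred.length > 0 ? preferred : withSize;
  // Tie-break: largest model in the pool.
  return pool.sort((a, b) => b.sizeB - a.sizeB)[0]?.name;
}
```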
Configuration
| Variable | Default | Description |
|---|---|---|
| GRAVITYMEMORY_LOOP_INTERVAL | 6 | Hours between autonomous cycles (set to 0 to disable) |
| GRAVITYMEMORY_OLLAMA_MODEL | (auto-detect) | Override model selection (e.g. qwen3.5:14b) |
| GRAVITYMEMORY_OLLAMA_URL | http://127.0.0.1:11434 | Ollama API endpoint |
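For example, these variables can be set directly in the MCP server entry, assuming your MCP client supports a per-server `env` block (Claude Code and most JSON-based MCP configs do):

```json
{
  "mcpServers": {
    "gravitymemory": {
      "command": "npx",
      "args": ["-y", "@gravityzenai/memory"],
      "env": {
        "GRAVITYMEMORY_LOOP_INTERVAL": "12",
        "GRAVITYMEMORY_OLLAMA_MODEL": "qwen3.5:14b"
      }
    }
  }
}
```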
Runtime Controls
Change the model without restarting:
- `gm_runtime idle list_models` → See all available Ollama models
- `gm_runtime idle set_model` → Switch to a different model instantly
- `gm_runtime idle status` → Check Brain Loop status and next scheduled run

Model Recommendations
| Model | Size | Quality | Speed | Best For |
|---|---|---|---|---|
| qwen3.5:9b | ~6GB | ⭐⭐⭐⭐⭐ | Fast | Recommended default |
| qwen3.5:14b | ~10GB | ⭐⭐⭐⭐⭐ | Medium | Best quality for consumer GPUs |
| phi4:14b | ~9GB | ⭐⭐⭐⭐ | Medium | Good alternative |
| qwen3.5:32b | ~20GB | ⭐⭐⭐⭐⭐ | Slow | Premium hardware only |
No Ollama? No problem. GravityMemory works perfectly without it — you just won't have autonomous maintenance. All other features work normally.
Embeddings (Vector Search)
GravityMemory includes local embeddings for semantic search. No API keys needed.
| Model | Size | Quality | Language |
|---|---|---|---|
| minilm (default) | ~23MB | Good | English |
| nomic | ~130MB | Great | English |
| jina-es | ~161MB | Great | English + Spanish |
| bge-m3 | ~2.2GB | Best | Multilingual (100+ languages) |
Set via environment variable:
```shell
GRAVITYMEMORY_EMBEDDING_MODEL=jina-es
```

MCP Tools Reference
GravityMemory exposes 20 tools through the MCP protocol:
| Tool | Description |
|---|---|
| gm_save | Save an observation to persistent memory |
| gm_search | Search memory (unified, FTS, vector, temporal) |
| gm_observe | Get, update, delete, or browse observations |
| gm_session | Start, end, summarize coding sessions |
| gm_graph | Knowledge graph: add facts, query, traverse, find paths |
| gm_compile | Build optimized context windows for agent prompts |
| gm_guard | Check content for prompt injection and contradictions |
| gm_govern | Approve, reject, freeze, rollback changes |
| gm_evolve | Evolution engine: run GEPA loops, scan skills, record outcomes |
| gm_experience | Record task outcomes for the evolution engine |
| gm_loop | Run, inspect, and audit Brain Loop cycles |
| gm_runtime | Runtime controls: idle status, model management, LLM budget |
| gm_maintain | Maintenance: health checks, consolidation, decay, self-edit |
| gm_profile | Key-value store for user/agent/project profiles |
| gm_portable | Export, import, and migrate memory data |
| gm_brain | Read-only audit API for cognitive processes |
| gm_intel | Intelligence: context injection, risk analysis, error clustering |
| gm_backup | Safe binary backup of the entire database |
| gm_dashboard | Start/stop the local web dashboard |
| gm_models | List and configure embedding models |
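To make the Context Compiler row concrete, here is a minimal sketch of the problem `gm_compile` addresses: packing the most relevant observations into a fixed token budget. The greedy strategy and the rough four-characters-per-token estimate are assumptions for illustration, not GravityMemory's actual algorithm:

```typescript
// An observation candidate for the context window, with a relevance
// score (higher = more worth including).
interface Observation {
  text: string;
  relevance: number;
}

// Greedily fill a token budget: take the most relevant observations
// first, skipping any that would overflow the remaining budget.
function compileContext(
  observations: Observation[],
  tokenBudget: number
): string {
  // Crude heuristic: ~4 characters per token.
  const estimateTokens = (s: string) => Math.ceil(s.length / 4);
  let remaining = tokenBudget;
  const chosen: string[] = [];
  for (const obs of [...observations].sort((a, b) => b.relevance - a.relevance)) {
    const cost = estimateTokens(obs.text);
    if (cost <= remaining) {
      chosen.push(obs.text);
      remaining -= cost;
    }
  }
  return chosen.join("\n");
}
```

A real compiler would also deduplicate, order for coherence, and use the model's actual tokenizer, but the budget constraint is the core of the problem.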
Architecture
```
┌──────────────────────────────────────────────────────┐
│                MCP Transport (stdio)                 │
├──────────────────────────────────────────────────────┤
│                Tool Layer (15+ tools)                │
├──────────┬──────────┬───────────┬────────────────────┤
│ Memory   │ Search   │ Guard     │ Evolution          │
│ Store    │ Engine   │ Layer     │ Engine             │
├──────────┼──────────┼───────────┼────────────────────┤
│ FTS5     │ Vector   │ Prompt    │ Brain Loop         │
│ Index    │ Index    │ Injection │ (Ollama)           │
│          │ (ONNX)   │ Detection │                    │
├──────────┴──────────┴───────────┴────────────────────┤
│               SQLite (better-sqlite3)                │
│              ~/.gravitymemory/memory.db              │
└──────────────────────────────────────────────────────┘
```

Security Model
| Mode | Purpose |
|---|---|
| readonly | Query only. Blocks all writes. |
| safe | Default. Allows normal writes, blocks destructive actions. |
| admin | Allows destructive/admin operations. Use only for maintenance. |
See SECURITY.md and Threat Model for details.
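The mode table above can be summarized as a simple permission check. The three mode names and their semantics come from the table; splitting operations into `read`/`write`/`destructive` is an illustrative simplification of whatever categories the real gate uses:

```typescript
// Security modes from the table and a simplified operation taxonomy.
type SecurityMode = "readonly" | "safe" | "admin";
type Operation = "read" | "write" | "destructive";

// readonly: query only. safe: normal writes, no destruction.
// admin: everything, intended only for maintenance sessions.
function isAllowed(mode: SecurityMode, op: Operation): boolean {
  if (op === "read") return true;                 // every mode can query
  if (op === "write") return mode !== "readonly"; // safe and admin can write
  return mode === "admin";                        // only admin may destroy
}
```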
Comparison
| Feature | GravityMemory | Mem0 | Letta | Engram |
|---|:---:|:---:|:---:|:---:|
| 100% Local | ✅ | ❌ (API) | ❌ (server) | ✅ |
| MCP Native | ✅ | ❌ | ❌ | ✅ |
| Hybrid Search (FTS + Vector) | ✅ | Partial | ❌ | FTS only |
| Knowledge Graph | ✅ | ❌ | ❌ | ❌ |
| Memory Guard (anti-injection) | ✅ | ❌ | ❌ | ❌ |
| Governance (approve/reject/freeze) | ✅ | ❌ | ❌ | ❌ |
| Autonomous Brain Loop | ✅ | ❌ | ❌ | ❌ |
| Auto-detect local LLM | ✅ | ❌ | ❌ | ❌ |
| Evolution Engine | ✅ | ❌ | ❌ | ❌ |
| Context Compiler | ✅ | ❌ | ❌ | ❌ |
| Zero Config | ✅ | ❌ | ❌ | ✅ |
Documentation
| Document | Description |
|---|---|
| Product Vision | Mission and core pillars |
| Architecture Boundaries | Ecosystem separation rules |
| Threat Model | Attack surface and mitigations |
| Security Gate v1 | Security hardening details |
Development
```shell
npm install        # Install dependencies
npm run build      # Compile TypeScript
npm run dev        # Run in development mode
npm test           # Run all tests (244 tests)
npm run typecheck  # Type-check without building
```

License
Apache-2.0
Built by GravityZenAI.
