# OpenClaw-Mem
A persistent memory system for OpenClaw that automatically captures conversations, generates summaries, and injects relevant context into new sessions.
## Features
- **Persistent Memory** - Context survives across sessions
- **Progressive Disclosure** - Shows the index first, fetches details on demand
- **Hybrid Search** - Full-text + LIKE search with CJK support
- **AI Compression** - Automatic summarization of observations
- **Token Efficient** - Loads only what's relevant
- **Real-time Capture** - Records messages as they happen
- **MCP Compatible** - Model Context Protocol server included
- **HTTP API** - REST API for memory queries
## Installation

### As OpenClaw Hook
```bash
# Clone to the OpenClaw hooks directory
git clone https://github.com/wenyupapa-sys/openclaw-mem.git ~/.openclaw/hooks/openclaw-mem
cd ~/.openclaw/hooks/openclaw-mem
npm install
```

### As npm Package
```bash
npm install openclaw-mem
```

⚠️ **Important:** npm installation does NOT automatically prompt for API key configuration. You MUST manually configure your DeepSeek API key after installation. See the Configuration section below.
After npm install, choose one of these methods:
```bash
# Method 1: Run the setup wizard
npx openclaw-mem-setup

# Method 2: Set the environment variable directly
export DEEPSEEK_API_KEY="your-deepseek-api-key"
# Add this line to your ~/.bashrc or ~/.zshrc to persist it
```

## Quick Start
1. **Install the hook** (see above)
2. **Run setup** to configure your DeepSeek API key (the hook install prompts for it automatically):

   ```bash
   # Or run it manually later
   npm run setup   # or: npx openclaw-mem-setup
   ```

3. **Restart OpenClaw** to load the hook
4. **Start chatting** - conversations are saved automatically
5. **Query memories** - ask "what did we discuss before?" and the AI will search the memory database
## Events Captured
| Event | Description |
|-------|-------------|
| `gateway:startup` | Initialize memory system |
| `agent:bootstrap` | Inject historical context |
| `agent:response` | Capture assistant responses |
| `agent:stop` | Save session summary |
| `command:new` | Save session before reset |
| `tool:post` | Capture tool usage |
| `user:prompt` | Capture user messages |
## API Reference

### HTTP API (Port 18790)
```bash
# Search memories
curl -s -X POST "http://127.0.0.1:18790/search" \
  -H "Content-Type: application/json" \
  -d '{"query":"keyword","limit":10}'

# Get observation details
curl -s -X POST "http://127.0.0.1:18790/get_observations" \
  -H "Content-Type: application/json" \
  -d '{"ids":[123,124]}'

# Get timeline context
curl -s -X POST "http://127.0.0.1:18790/timeline" \
  -H "Content-Type: application/json" \
  -d '{"anchor":123}'

# Health check
curl "http://127.0.0.1:18790/health"
```
### Shell Scripts

```bash
# Search (handles CJK encoding automatically)
~/.openclaw/hooks/openclaw-mem/mem-search.sh "关键词" 10

# Get details
~/.openclaw/hooks/openclaw-mem/mem-get.sh 123 124 125
```

### MCP Server
```bash
# Start the MCP server (stdio mode)
node mcp-server.js
```

MCP tools:

- `search` - Search the memory index
- `timeline` - Get context around an observation
- `get_observations` - Fetch full details
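How the MCP server is registered depends on your client; this project's docs don't prescribe one. As an illustration only, a Claude Desktop-style `mcpServers` entry could launch the script over stdio (the path below is a placeholder for wherever you cloned the repo):

```json
{
  "mcpServers": {
    "openclaw-mem": {
      "command": "node",
      "args": ["/absolute/path/to/openclaw-mem/mcp-server.js"]
    }
  }
}
```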
## Configuration

### Environment Variables
```bash
# Enables AI summarization (optional but recommended)
export DEEPSEEK_API_KEY="your-deepseek-api-key"

# Optional: custom DeepSeek endpoint
export DEEPSEEK_BASE_URL="https://api.deepseek.com/v1"

# Optional: custom model
export DEEPSEEK_MODEL="deepseek-chat"
```

Get your DeepSeek API key at https://platform.deepseek.com/.
Note: Without `DEEPSEEK_API_KEY`, the system still works but won't generate AI summaries for sessions.
### OpenClaw Config
Add to your OpenClaw config:
```json
{
  "hooks": {
    "internal": {
      "entries": {
        "openclaw-mem": {
          "enabled": true,
          "observationLimit": 50,
          "fullDetailCount": 5
        }
      }
    }
  }
}
```

## Storage
Data is stored in SQLite at `~/.openclaw-mem/memory.db`:
| Table | Description |
|-------|-------------|
| `sessions` | Session records |
| `observations` | Tool calls and messages |
| `summaries` | Session summaries |
| `user_prompts` | User inputs |
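Because the store is plain SQLite, any SQLite client can inspect it. A read-only sketch using the third-party `better-sqlite3` package (an assumption, not a project dependency), counting rows in the tables listed above:

```js
// inspect-memory.mjs - peek at the openclaw-mem SQLite store.
// Assumes: npm install better-sqlite3
import Database from "better-sqlite3";
import { homedir } from "node:os";
import { join } from "node:path";

// Open read-only so a typo can't corrupt the live store
const db = new Database(join(homedir(), ".openclaw-mem", "memory.db"), {
  readonly: true,
});

// Row counts for the four documented tables
for (const table of ["sessions", "observations", "summaries", "user_prompts"]) {
  const { n } = db.prepare(`SELECT COUNT(*) AS n FROM ${table}`).get();
  console.log(`${table}: ${n} rows`);
}

db.close();
```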
## Development
```bash
# Run tests
npm test

# Start the HTTP API server
npm run api

# Start the MCP server
npm run mcp

# Monitor real-time activity
node debug-logger.js
```

## 3-Layer Retrieval Workflow
For efficient token usage, use progressive disclosure (see the sketch below):

1. **Search** → get the index with IDs (~50-100 tokens per result)
2. **Timeline** → get context around interesting results
3. **Get Observations** → fetch full details ONLY for the filtered IDs

This approach saves ~30% of tokens compared to fetching everything.
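As a concrete sketch, here are the three layers chained over the HTTP API (Node 18+; the query and the anchor/detail IDs are placeholders, since in practice the IDs for layers 2 and 3 come from filtering the layer-1 index):

```js
// three-layer.mjs - progressive disclosure against the HTTP API.
const BASE = "http://127.0.0.1:18790";

const post = async (path, body) => {
  const res = await fetch(`${BASE}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path}: HTTP ${res.status}`);
  return res.json();
};

// Layer 1: cheap index search - IDs and snippets only
const index = await post("/search", { query: "database migration", limit: 10 });
console.log("index:", index);

// Layer 2: timeline context around one interesting hit
const context = await post("/timeline", { anchor: 123 });
console.log("timeline:", context);

// Layer 3: full details, fetched ONLY for IDs that survived filtering
const details = await post("/get_observations", { ids: [123, 124] });
console.log("details:", details);
```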
## License
MIT
## Contributing
Pull requests welcome! Please ensure tests pass before submitting.
## Credits

Inspired by the claude-mem plugin architecture.
