@wx139/memory-amem
v1.1.1
OpenClaw A-mem-sys memory plugin (Zettelkasten long-term memory)
Zettelkasten cognitive memory for OpenClaw agents. Powered by xmem-server.
Highlights
- Zero-LLM hot path — storing and recalling memories costs zero LLM calls
- 6-tier cognitive hierarchy — Working, Episodic Raw, Sessions, Semantic Notes, Profile, Procedural Rules
- Batch consolidation — raw memories cluster into Zettelkasten notes (1 LLM call per cluster)
- Always-loaded context — user profile, procedural rules, and system memories injected every prompt
- Web dashboard at :8100/ui — browse, edit, search, and manage all memory tiers
- /mem commands across all channels — Telegram, Slack, CLI
Architecture
User ←→ OpenClaw Gateway ←→ Plugin (hooks + tools) ←→ HTTP :8100 ←→ xmem-server
├── SQLite (memories, profile, rules)
├── ChromaDB (vector embeddings)
└── LLM (via OpenClaw gateway)
Memory Hierarchy
┌─────────────────────────────────────────────────────────┐
│ Always-Loaded Context (injected every prompt) │
│ User Profile │ Procedural Rules │ System Memories│
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Searchable Knowledge │
│ │
│ Semantic Notes (Zettelkasten) │
│ Consolidated from raw memories (1 LLM call/cluster) │
│ Hybrid search: 0.7 vector + 0.3 BM25 + decay + MMR │
│ │
│ Episodic Raw + Sessions │
│ Zero-LLM storage, vector search, temporal queries │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Working Memory (radical mode only, ephemeral) │
│ Sliding-window buffer in plugin memory │
└─────────────────────────────────────────────────────────┘
Data Flow
User message arrives
│
├── before_prompt_build: inject profile + rules + system
├── before_agent_start: auto-recall relevant memories
│
├── Agent runs (can call memory_recall, memory_store, etc.)
│
└── agent_end: auto-capture user + assistant messages as raw
    └── when threshold reached → batch consolidation
Use Cases
- Personal assistant — remembers preferences, schedule, contacts across sessions
- Telegram group bot — auto-captures group discussions, recalls on demand
- Developer copilot — stores coding conventions, project decisions, debug patterns
- Customer support — builds user profile from interactions, recalls ticket history
- Research assistant — accumulates papers/notes, consolidates into knowledge base
- Team knowledge base — shared memory across channels (Slack + Telegram + CLI)
Code Structure
| File | Lines | Purpose |
|------|------:|---------|
| index.ts | 1109 | Main plugin: 9 tools, 5 hooks, CLI commands, /mem routing |
| amem-client.ts | 451 | HTTP client for xmem-server REST API |
| mem-commands.ts | 333 | /mem sub-command parsing and handlers |
| heuristics.ts | 137 | Capture triggers, injection detection, category classification |
| config.ts | 94 | Plugin config schema + defaults (TypeBox) |
| working-memory.ts | 68 | Sliding-window buffer for radical mode |
| formatters.ts | 52 | Prompt formatting helpers |
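The smallest piece above, working-memory.ts, is described as a sliding-window buffer for radical mode. A sketch of what such a buffer looks like (a plain illustration, not the file's actual contents):

```typescript
// Illustrative sliding-window buffer: holds at most `size` recent items
// (the workingMemorySize config option defaults to 10), evicting the
// oldest entries first. Item shape is simplified to strings here.
class WorkingMemory {
  private items: string[] = [];

  constructor(private size: number = 10) {}

  push(item: string): void {
    this.items.push(item);
    // Drop the oldest entries once the window is full.
    if (this.items.length > this.size) {
      this.items.splice(0, this.items.length - this.size);
    }
  }

  snapshot(): string[] {
    return [...this.items];
  }
}
```

Because the buffer lives in plugin memory, it is ephemeral by design: restarting the plugin clears it, while the durable tiers live in xmem-server.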
Install
openclaw plugins install @wx139/memory-amem
Prerequisites
Python 3.9+ is required, and xmem-server must be running:
git clone https://github.com/WujiangXu/XMem.git
cd XMem/xmem-server
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python server.py --port 8100
This installs sentence-transformers (local embeddings), ChromaDB (vector DB), FastAPI, and LiteLLM. No API key is required for basic operation — consolidation needs the OpenClaw gateway or Ollama.
Configuration
Add to your OpenClaw config (~/.openclaw/openclaw.json):
{
"plugins": {
"enabled": true,
"slots": { "memory": "memory-amem" },
"entries": {
"memory-amem": {
"enabled": true,
"config": {
"url": "http://localhost:8100",
"autoCapture": true,
"autoRecall": true,
"mode": "modest"
}
}
}
}
}
Config Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| url | string | required | xmem-server URL |
| autoCapture | boolean | true | Auto-store important info from conversations |
| autoRecall | boolean | true | Auto-inject relevant memories before agent runs |
| mode | "modest" \| "radical" | "modest" | modest = tools + hooks only; radical = also offload compaction and inject always-loaded context |
| captureMode | "triggers" \| "all" | "triggers" | triggers = keyword patterns; all = every message (recommended for Telegram groups) |
| captureMaxChars | number | 500 | Max message length eligible for auto-capture (100–10000) |
| autoRecallLimit | number | 3 | Number of memories auto-injected per agent turn (1–20) |
| workingMemorySize | number | 10 | Items in radical mode working memory buffer (1–50) |
| retrievalMode | "lean" \| "normal" | "normal" | lean = 2 results, minScore 0.7; normal = 5 results, minScore 0.3 |
| serverDir | string | — | Path to xmem-server repo (auto-starts server if not running) |
| pythonPath | string | "python" | Python executable for auto-start |
| serverArgs | string | "" | Extra CLI args for server.py |
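The real schema lives in config.ts as a TypeBox schema; as a reading aid, the table above can be mirrored as a plain TypeScript type with a defaults merge (this is an illustration of the documented options, not the plugin's actual schema code):

```typescript
// Plain-TypeScript mirror of the documented config options.
interface AmemConfig {
  url: string;                        // required, e.g. "http://localhost:8100"
  autoCapture?: boolean;              // default true
  autoRecall?: boolean;               // default true
  mode?: "modest" | "radical";        // default "modest"
  captureMode?: "triggers" | "all";   // default "triggers"
  captureMaxChars?: number;           // default 500 (100–10000)
  autoRecallLimit?: number;           // default 3 (1–20)
  workingMemorySize?: number;         // default 10 (1–50)
  retrievalMode?: "lean" | "normal";  // default "normal"
  serverDir?: string;                 // auto-start server if set
  pythonPath?: string;                // default "python"
  serverArgs?: string;                // default ""
}

const DEFAULTS = {
  autoCapture: true, autoRecall: true, mode: "modest",
  captureMode: "triggers", captureMaxChars: 500, autoRecallLimit: 3,
  workingMemorySize: 10, retrievalMode: "normal",
  pythonPath: "python", serverArgs: "",
} as const;

// User-supplied values win; everything else falls back to the defaults.
function resolveConfig(user: AmemConfig) {
  return { ...DEFAULTS, ...user };
}
```

Only url has no default, which matches its "required" marking in the table.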
LLM Backend for Consolidation
xmem-server needs an LLM for batch-consolidating raw memories into semantic notes. It auto-detects from the environment in this priority:
- OpenClaw gateway (preferred) — routes through localhost:18789/v1/chat/completions; no separate API key needed
- Ollama — local fallback (llama3.2:3b)
Direct API backends (OpenAI, OpenRouter) are blocked. All LLM calls route through the OpenClaw gateway for centralized API management.
To use the OpenClaw gateway, enable the endpoint in ~/.openclaw/openclaw.json:
{
"gateway": {
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
}
}
Set the gateway auth token for xmem-server:
export OPENCLAW_GATEWAY_TOKEN=<token from openclaw.json gateway.auth.token>
Or pass it via serverArgs:
"serverArgs": "--preset openclaw --port 8100"
See the bundled skill openclaw-llm-proxy for full setup details, including the API lane patch.
/mem Commands
Works in Telegram, Slack, CLI, or any OpenClaw channel:
| Command | Description |
|---------|-------------|
| /mem | Memory stats dashboard |
| /mem search <query> | Search memories |
| /mem profile | Show user profile |
| /mem profile set <key> <value> | Update profile field |
| /mem rules | List procedural rules |
| /mem rules add <rule> | Add a rule |
| /mem compress | Trigger consolidation |
| /mem clear <type\|category> | Bulk delete |
| /mem forget <query> | Search + delete matching memories |
| /mem important <query> | Mark a memory as important |
| /mem instruct <text> | Natural language instruction |
| /mem evolve | AI-powered memory optimization |
| /mem import <source> | Import from OpenClaw (memory/sessions/claude-code/all) |
| /mem ui | Web UI link |
| /mem export | Export stats |
| /mem prune | Remove forgotten memories |
| /mem help | Show all commands |
Agent Tools
The plugin registers these tools for the LLM agent:
| Tool | Description |
|------|-------------|
| memory_recall | Cross-tier hybrid search (semantic notes, episodic raw, sessions) |
| memory_store | Explicitly store a memory (deduplication built-in) |
| memory_forget | Delete specific memories (by query or ID), GDPR-compliant |
| memory_update | Edit existing memory content, tags, or category |
| memory_compress | Trigger batch consolidation of raw → Zettelkasten notes |
| memory_clear | Bulk delete by type, category, or tag (confirmation required) |
| memory_stats | Show counts per type/category, pending consolidation, storage info |
| memory_profile | Read or update user profile fields |
| memory_instruct | Process natural language memory instructions |
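memory_recall's ranking is described elsewhere in this README as hybrid search: 0.7 vector + 0.3 BM25, plus recency decay and MMR. A sketch of that scoring formula — the 0.7/0.3 weights come from the README, while the exponential decay form, the 30-day half-life, and the omission of MMR re-ranking are assumptions for illustration:

```typescript
// Illustrative hybrid-search scoring, not xmem-server's actual code.
// MMR diversity re-ranking is omitted; only the weighted blend plus
// an assumed exponential recency decay are shown.
interface Candidate {
  id: string;
  vectorScore: number; // cosine similarity, assumed normalized to [0, 1]
  bm25Score: number;   // BM25 score, assumed normalized to [0, 1]
  ageDays: number;     // days since the memory was stored
}

const HALF_LIFE_DAYS = 30; // assumed decay half-life

function hybridScore(c: Candidate): number {
  const base = 0.7 * c.vectorScore + 0.3 * c.bm25Score; // weights per README
  const decay = Math.pow(0.5, c.ageDays / HALF_LIFE_DAYS);
  return base * decay;
}

function rank(cands: Candidate[], limit: number): Candidate[] {
  return [...cands]
    .sort((a, b) => hybridScore(b) - hybridScore(a))
    .slice(0, limit);
}
```

Under this sketch, a month-old memory scores half of what an identical fresh one does, which is how recall favors recent context without discarding older notes outright.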
Web Dashboard
The web UI at :8100/ui provides a 7-tab interface for managing all memory tiers:
| Tab | Description |
|-----|-------------|
| Dashboard | Overview: counts per tier, categories, pending consolidation |
| Memories | Browse, search, edit, delete memories across all tiers |
| Profile | Edit user profile fields (name, language, timezone, etc.) |
| Procedural | Manage behavioral rules with priorities |
| System | Fixed instructions and domain context |
| Graph | Zettelkasten link graph visualization |
| Settings | Consolidation threshold, decay, MMR, search weights |
License
MIT
