@nowledge/openclaw-nowledge-mem
v0.8.16
Nowledge Mem memory plugin for OpenClaw, local-first personal knowledge base
Nowledge Mem OpenClaw Plugin
Local-first knowledge graph memory for OpenClaw agents, powered by Nowledge Mem.
Your AI tools forget. We remember. Everywhere. This plugin gives OpenClaw both lossless session memory and the same graph-connected memory layer your other AI tools can use too. Every OpenClaw conversation is captured as a searchable thread, important moments can distill into linked memories with sourceThreadId, and knowledge from Claude, Cursor, web chats, and imported threads stays searchable inside OpenClaw. Local-first by default, with remote mode when you want a shared server.
This is not a flat note store. Nowledge Mem links related knowledge into a graph, tracks how ideas evolve, deduplicates repeated context, and can run background intelligence to produce daily briefings, contradiction checks, and crystals from converging evidence. OpenClaw can search that same growing memory layer instead of starting from scratch every time.
Requirements
- Nowledge Mem desktop app or `nmem` CLI
- OpenClaw >= 2026.4.5 (corpus supplement requires this version; core features work on >= 2026.3.7)
Installation
```shell
openclaw plugins install clawhub:@nowledge/openclaw-nowledge-mem
```

OpenClaw's installer writes the install record, enables the plugin, and switches the memory slot to `openclaw-nowledge-mem`. Restart OpenClaw after install to load it.
If you prefer the default resolver path, bare package names also work because OpenClaw resolves external plugins from ClawHub first and then falls back to npm:
```shell
openclaw plugins install @nowledge/openclaw-nowledge-mem
```

If you need a pinned local checkout for validation or development, use local linking:
```shell
git clone https://github.com/nowledge-co/community.git
cd community/nowledge-mem-openclaw-plugin
openclaw plugins install --link .
```

Agent-assisted setup
If another AI agent is helping you install this plugin, point it at the bundled SKILL.md.
Copyable instruction:
```
Read https://raw.githubusercontent.com/nowledge-co/community/main/nowledge-mem-openclaw-plugin/SKILL.md and follow it to install, configure, verify, and explain Nowledge Mem for OpenClaw.
```

That guide is written for AI agents, not for humans. It handles local vs remote mode, optional API auth, trust pinning, restart, verification, and next steps.
Recommended hardening
If you keep non-bundled plugins on an explicit trust allowlist, add:
```json
{
  "plugins": {
    "allow": ["openclaw-nowledge-mem"]
  }
}
```

`plugins.allow` is id-based, not source-based. If you also use `plugins.load.paths` or `openclaw plugins install --link`, review those paths too. An allowlist trusts the active plugin with that id; it does not pin where it came from.
Manual config or verification
If you manage config manually, or you want to verify what the installer selected, this is the expected shape:
```json
{
  "plugins": {
    "slots": { "memory": "openclaw-nowledge-mem" },
    "entries": {
      "openclaw-nowledge-mem": {
        "enabled": true
      }
    }
  }
}
```

By default the agent gets 10 tools on demand, a behavioral skill that teaches it when to search and save, a short always-on system hint, and end-of-session capture. Working Memory injection is optional and controlled by `sessionContext`.
Remote mode
Connect to a Nowledge Mem server running elsewhere (a VPS, a home server, or a shared team instance). See remote access guide for server setup.
Enable the plugin the same way as local mode, then set apiUrl and apiKey in the OpenClaw plugin settings.
Or use the shared config file at ~/.nowledge-mem/config.json so OpenClaw, nmem, Bub, Claude Code, and other integrations all point at the same remote Mem:
```json
{
  "apiUrl": "https://nowledge.example.com",
  "apiKey": "your-api-key-here"
}
```

Legacy `~/.nowledge-mem/openclaw.json` is still honored first for backward compatibility.
The resolved apiUrl and apiKey are reused across the plugin: CLI-backed memory tools and API-backed thread sync both talk to the same backend. The apiKey is never passed as a CLI argument and is never logged.
Spaces
Spaces are optional. OpenClaw should choose one ambient lane only when the profile or process already belongs to one real project or agent lane.
The OpenClaw plugin settings can own that lane directly:
```json
{
  "space": "Research Agent",
  "spaceTemplate": "agent-${OPENCLAW_AGENT_NAME}"
}
```

Use `space` when one OpenClaw profile belongs to one stable lane. Use `spaceTemplate` only when your launcher or host environment already sets a trustworthy variable that identifies the lane. If your OpenClaw runtime does not expose per-agent identity to plugins, do not fake it. Use one profile/process per lane or stay on Default.
If you are running OpenClaw from a launcher or script with no richer config surface, you can still set one session-wide fallback lane with:
```shell
NMEM_SPACE="Research Agent"
```

Nowledge Mem's CLI-backed Working Memory, memory search/save, and the plugin's API-backed thread/feed fallbacks will then stay in that lane together. There is no second OpenClaw-only vault setting; the shared Mem boundary is still one hidden space key, but humans and agents should normally work with the space name instead.
Shared spaces, default retrieval, and agent guidance still live in Mem's own space profile. OpenClaw chooses the lane and preserves it across transports; it should not duplicate the profile semantics.
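The lane-selection rules above can be sketched as a small resolver. This is an illustrative sketch, not the plugin's actual code: `resolve_space` and the precedence it encodes (explicit `space`, then `spaceTemplate` with a trusted env var, then `NMEM_SPACE`, then Default) are assumptions drawn from the prose.

```python
import os

def resolve_space(space=None, space_template=None, env=None, default="Default"):
    """Illustrative lane resolution: an explicit space wins, then a template
    expanded from a trusted env var, then the NMEM_SPACE fallback, then the
    default lane."""
    env = os.environ if env is None else env
    if space:
        return space
    if space_template and "${OPENCLAW_AGENT_NAME}" in space_template:
        agent = env.get("OPENCLAW_AGENT_NAME")
        if agent:  # only expand when the variable really exists; never fake it
            return space_template.replace("${OPENCLAW_AGENT_NAME}", agent)
    return env.get("NMEM_SPACE", default)
```

Note that the template branch falls through rather than inventing an agent name, matching the "do not fake it" guidance above.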
Configure via the OpenClaw WebUI
How It Works
Per-Turn Flow
Every user message triggers hooks before the agent sees it, then the agent decides which tools to call.
```mermaid
flowchart TD
    A["User sends message"] --> B["before_prompt_build hook"]
    B --> C["Behavioral guidance (cacheable system hint)"]
    B -.->|"sessionContext: true"| D["Working Memory + relevant memories (~1-2 KB)"]
    C --> E["Agent processes message (10 tools available)"]
    D --> E
    E --> F{"Needs past context?"}
    F -->|"Yes"| G["memory_search"]
    G --> R
    E --> H{"Something worth keeping?"}
    H -->|"Yes"| I["nowledge_mem_save (typed, labeled, temporal)"]
    I --> R
    E --> R["Respond to user"]
```

When Each Tool Gets Called
The behavioral skill and always-on hook nudge the agent to search before answering and save after deciding. Here's when each tool fires:
| Scenario | Tool | What happens |
|----------|------|--------------|
| User asks a question | memory_search | Search knowledge base before answering. Also returns relatedThreads snippets and sourceThreadId. |
| Agent needs full memory text | memory_get | Read one memory by ID or path. MEMORY.md alias reads Working Memory. Returns sourceThreadId. |
| Decision made, insight learned | nowledge_mem_save | Structured save with unit_type, labels, event_start, importance. |
| "What was I doing last week?" | nowledge_mem_timeline | Activity feed grouped by day. Supports exact date ranges. |
| "How is X connected to Y?" | nowledge_mem_connections | Graph walk: edges, entities, EVOLVES chains, document provenance. |
| Need today's focus/priorities | nowledge_mem_context | Read Working Memory daily briefing. Supports section-level patch. |
| Memory has sourceThreadId | nowledge_mem_thread_fetch | Fetch full source conversation. Pagination via offset + limit. |
| "Find our discussion about X" | nowledge_mem_thread_search | Search past conversations by keyword. Returns matched snippets. |
| "Forget X" | nowledge_mem_forget | Delete by ID or search query. |
| "Is my setup working?" | nowledge_mem_status | Show effective config, backend connectivity, and version. |
Session Lifecycle (Automatic Capture)
When sessions end, conversations are captured and optionally distilled. No user action needed.
```mermaid
flowchart TD
    A["Session lifecycle event"] --> B{"Event type"}
    B -->|"agent_end"| C["Append messages to thread (idempotent, deduped)"]
    B -->|"after_compaction"| D["Append messages to thread (checkpoint only)"]
    B -->|"before_reset"| E["Append messages to thread (checkpoint only)"]
    C --> F{"sessionDigest enabled + enough messages + cooldown passed?"}
    F -->|"Yes"| G["LLM triage (~100 tokens): worth distilling?"]
    F -->|"No"| H["Done - thread saved"]
    G -->|"Decisions, insights, preferences found"| I["Full distillation -> structured memories (with sourceThreadId)"]
    G -->|"Routine chat"| H
    D --> H
    E --> H
```

Key points:
- Thread capture is unconditional: every conversation is saved and searchable via `nowledge_mem_thread_search`
- Thread sync is incremental: the plugin preserves the real transcript but appends only the unsynced tail instead of replaying the whole session
- LLM distillation only runs at `agent_end`, not during compaction/reset checkpoints
- Distilled memories carry `sourceThreadId`, linking them back to the source conversation
- Cooldown (`digestMinInterval`, default 300s) prevents burst distillation
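The capture-then-distill gate described above can be sketched roughly as follows. The function name and the `min_messages` threshold are illustrative assumptions, not the plugin's real internals; only the `agent_end`-only rule, the `sessionDigest` switch, and the `digestMinInterval` cooldown come from the documentation.

```python
import time

def should_distill(event, message_count, last_digest_at,
                   session_digest=True, digest_min_interval=300,
                   min_messages=4, now=None):
    """Illustrative gate: distillation runs only at agent_end, only when
    sessionDigest is on, enough messages exist, and the cooldown (default
    300s) has passed. min_messages is an assumed threshold."""
    now = time.time() if now is None else now
    if event != "agent_end" or not session_digest:
        return False  # checkpoints (compaction/reset) never distill
    if message_count < min_messages:
        return False  # too little content to be worth an LLM triage
    return (now - last_digest_at) >= digest_min_interval
```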
Progressive Retrieval (Memory -> Thread -> Messages)
Memories distilled from conversations carry a sourceThreadId. This creates a retrieval chain:
```mermaid
flowchart TD
    A["memory_search 'PostgreSQL decision'"] --> B["Result includes sourceThreadId"]
    B --> C["nowledge_mem_thread_fetch (offset=0, limit=50)"]
    C --> D["50 messages, hasMore: true"]
    D --> E["nowledge_mem_thread_fetch (offset=50, limit=50)"]
    E --> F["Next page"]
```

Direct conversation search also works:
```mermaid
flowchart LR
    G["nowledge_mem_thread_search 'database architecture'"] --> H["Threads with matched snippets"]
    H --> I["nowledge_mem_thread_fetch (threadId from results)"]
    I --> J["Full messages"]
```

Two entry points:
- From a memory: `memory_search` or `memory_get` returns `sourceThreadId`, then fetch the source conversation
- Direct search: `nowledge_mem_thread_search` finds conversations by keyword, then fetch any result
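The offset/limit retrieval chain can be written as a simple paging loop. `fetch_page` here is a hypothetical callable standing in for a `nowledge_mem_thread_fetch` call; only the `offset`/`limit` parameters and the `hasMore` flag come from the tool description above.

```python
def fetch_all_messages(fetch_page, thread_id, limit=50):
    """Illustrative progressive retrieval: keep paging with offset/limit
    until the backend reports hasMore: false."""
    messages, offset = [], 0
    while True:
        page = fetch_page(thread_id, offset=offset, limit=limit)
        messages.extend(page["messages"])
        if not page["hasMore"]:
            return messages
        offset += limit

# Usage with a fake two-page backend (70 messages total):
def fake_fetch(thread_id, offset, limit):
    all_msgs = [f"msg-{i}" for i in range(70)]
    chunk = all_msgs[offset:offset + limit]
    return {"messages": chunk, "hasMore": offset + limit < len(all_msgs)}
```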
Corpus Supplement (Dreaming Integration)
When OpenClaw's memory-core is the primary memory slot and Nowledge Mem runs alongside it, you can enable corpusSupplement: true. This registers Nowledge Mem as a searchable corpus inside memory-core's recall pipeline and three-phase dreaming system (light, deep, REM).
What happens:
- Every time memory-core runs
memory_search, it also searches Nowledge Mem's knowledge graph via the supplement. - Recalled Nowledge Mem content accumulates frequency and relevance scores in memory-core's short-term store.
- High-scoring content is promoted to
MEMORY.mdduring deep-phase dreaming (daily, 3 AM). - The plugin's own search-based recall is automatically disabled to avoid duplicates. Working Memory injection continues as before.
When to enable: When memory-core is the memory slot and you want your cross-tool knowledge graph to participate in OpenClaw's native dreaming cycle. Most users have Nowledge Mem as the memory slot directly, so this defaults to false.
Requires OpenClaw >= 2026.4.5.
As of 0.8.15, this works without local source edits. The plugin now stays active beside memory-core for supplement mode, while leaving memory-core's own memory_search / memory_get pair in place.
If OpenClaw stores a top-level dreaming object on this plugin while it owns the memory slot, that is expected. Dreaming remains OpenClaw-native host config; this plugin accepts it so memory-core can run the dreaming engine alongside Nowledge Mem without registration errors.
Three Modes at a Glance
```mermaid
flowchart LR
    subgraph "default" ["Default (recommended)"]
        direction TB
        D1["Every turn: short system hint"]
        D2["Agent calls 10 tools on demand"]
        D3["Session end: thread capture + LLM distillation"]
    end
    subgraph context ["Session Context"]
        direction TB
        C1["Every turn: guidance + Working Memory + recalled memories (~1-2 KB)"]
        C2["Agent calls 10 tools on demand"]
        C3["Session end: thread capture + LLM distillation"]
    end
    subgraph minimal ["Minimal"]
        direction TB
        M1["Every turn: short system hint"]
        M2["Agent calls 10 tools on demand"]
        M3["No automatic capture"]
    end
```

Tools
OpenClaw Memory Compatibility
These satisfy the OpenClaw memory slot contract and activate the "Memory Recall" section in OpenClaw's system prompt.
memory_search - Multi-signal recall using embedding, BM25, label match, graph signals, and recency decay. Returns structured source paths (nowledgemem://memory/<id>) for follow-up with memory_get or nowledge_mem_connections. Also returns relevant past conversation snippets (relatedThreads) and sourceThreadId (link to the conversation the memory was distilled from) when available.
memory_get - Read a specific memory by ID or path. Supports MEMORY.md alias for Working Memory. Returns sourceThreadId when the memory was distilled from a conversation.
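The multi-signal blend behind `memory_search` can be illustrated with a toy scoring function. The weights, the half-life, and the blending formula are invented for the sketch; only the signal list (embedding, BM25, label match, graph signals, recency decay) comes from the description above.

```python
def combined_score(embedding, bm25, label_match, graph, age_days,
                   half_life_days=30.0):
    """Illustrative multi-signal ranking in the spirit of memory_search:
    blend semantic, keyword, label, and graph signals (each in [0, 1]),
    then apply exponential recency decay. All weights are made up."""
    base = (0.45 * embedding + 0.30 * bm25
            + 0.15 * label_match + 0.10 * graph)
    recency = 0.5 ** (age_days / half_life_days)  # halves every 30 days
    return base * (0.7 + 0.3 * recency)  # decay dampens, never zeroes
```

The dampening factor reflects the claim that newer memories rank higher by default while older ones remain retrievable.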
Nowledge Mem Native
These reflect capabilities unique to Nowledge Mem's knowledge graph architecture.
nowledge_mem_save - Capture structured knowledge with type classification, labels, and temporal context.
```
text: "We decided to use PostgreSQL with JSONB for the task events table"
title: "Task events database choice"
unit_type: decision
importance: 0.8
labels: ["backend", "architecture"]
event_start: 2024-03
temporal_context: past

> Saved: Task events database choice [decision] (id: mem_abc) - labels: backend, architecture - event: 2024-03
```

Eight memory types: fact, preference, decision, plan, procedure, learning, context, event. Each becomes a typed node in the knowledge graph. Labels enable filtering in `memory_search`. `event_start` records when something happened, not just when you saved it, powering bi-temporal search.
nowledge_mem_context - Read today's Working Memory: focus areas, priorities, unresolved flags, and recent activity. Generated by the Knowledge Agent each morning, updated throughout the day.
nowledge_mem_connections - Explore the knowledge graph around a topic or memory. Returns connected memories, EVOLVES version chains (how understanding has grown), related entities, and source document provenance (which files or URLs knowledge was extracted from).
```
memoryId: "mem_abc"

> Connected memories:
- PostgreSQL optimization patterns: Use JSONB GIN indexes for...
- Redis caching layer decision: For frequently accessed task lists...
Source documents (provenance):
- api-spec.pdf (file): API specification for task management...
Related entities:
- PostgreSQL (Technology)
- Task Management API (Project)
Knowledge evolution:
- superseded by newer understanding (version chain)
```

nowledge_mem_timeline - Browse your knowledge history chronologically. Use for questions like "what was I working on last week?" or "what happened yesterday?". Groups activity by day: memories saved, documents ingested, insights generated, and more.
```
last_n_days: 7

> 2026-02-18:
- [Memory saved] UV guide - Python toolchain setup
- [Knowledge extracted from document] api-spec.pdf
2026-02-17:
- [Daily briefing] Focus: NebulaGraph, AI biotech...
- [Insight] Connection between Redis caching and...
```

nowledge_mem_forget - Delete a memory by ID or search query. Supports user confirmation when multiple matches are found.
Thread Tools
nowledge_mem_thread_search - Search past conversations by keyword. Returns matched threads with message snippets and relevance scores. Use when the user asks about a past discussion or wants to find a conversation from a specific time.
nowledge_mem_thread_fetch - Fetch full messages from a conversation thread with pagination. Pass a sourceThreadId from memory results or a threadId from thread search. Supports offset and limit for progressive retrieval of long conversations.
```
threadId: "openclaw-db-arch-a1b2c3"
offset: 0, limit: 50

-> Thread: "Database architecture discussion" (128 messages)
[user] We need to decide on the database for task events...
[assistant] Based on the requirements, PostgreSQL with JSONB...
... (126 more messages, use offset=50 for next page)
```

Operating Modes
The plugin supports three modes. Choose based on how much you want to guarantee versus how much token budget you're willing to spend.
| Mode | Config | Behavior | Tradeoff |
|------|--------|----------|----------|
| Default (recommended) | sessionContext: false, sessionDigest: true | Agent calls 10 tools on demand. A short system hint stays on. Conversations captured + distilled at session end. | Lowest overhead. Agent decides when to search, usually smart, occasionally forgets. |
| Session context | sessionContext: true | Working Memory + relevant memories injected at prompt time, plus all 10 tools still available. | Higher per-turn context cost. Guarantees context is present from the first turn. |
| Minimal | sessionDigest: false | Tool-only, no automatic capture. | Small overhead from the always-on hint only. Use when you handle memory manually. |
Which mode should you use?
- Most users: start with default. The agent gets behavioral guidance nudging it to search before answering and save after deciding. It works well for most conversations.
- Short sessions or critical accuracy: enable `sessionContext`. This guarantees relevant memories are present from the first turn. The agent doesn't need to decide whether to search. The tradeoff is a larger per-turn prompt.
- Full manual control: set `sessionDigest: false`. You control what gets saved (via `/remember` or `nowledge_mem_save`) and nothing is captured automatically.
Session Context (sessionContext, default: false)
When enabled, the plugin injects Working Memory and relevant search results at prompt time. The behavioral guidance automatically adjusts to tell the agent that context is already present, so memory_search should only be used for specific follow-up queries, not broad recall. This prevents the agent from redundantly searching for the same context that was just injected.
Useful for giving the agent immediate context without waiting for it to search proactively.
Session Digest (sessionDigest, default: true)
When enabled, two things happen at agent_end, after_compaction, and before_reset:
1. Thread capture (always). The full conversation is appended to a persistent thread. Unconditional, idempotent by message ID.
2. LLM distillation (when worthwhile). A lightweight LLM triage determines if the conversation has save-worthy content. If yes, a full distillation pass creates structured memories with types, labels, and temporal data. Language-agnostic, works in any language.
Design Decisions
Honest answers to common questions about how the memory system works.
Does the agent always search before answering?
The plugin uses two layers to drive recall. First, a behavioral skill (auto-discovered by OpenClaw) teaches the agent when and how to use memory tools. Second, a short always-on system hint reminds it to "search before answering questions about prior work, decisions, dates, people, preferences, or plans." In practice, modern LLMs follow this directive guidance reliably for knowledge-related questions. For messages that don't need past context (like "hello" or "thanks"), the agent skips the search, which is the right tradeoff. If guaranteed recall matters for your use case, enable sessionContext: true. That injects relevant memories at prompt time, before the agent even processes your message.
What stops the agent from saving duplicate memories?
Two layers. First, the plugin checks for near-identical existing memories before every save. If a memory with >=90% similarity already exists, the save is skipped and the existing memory is returned instead. Second, Nowledge Mem's Knowledge Agent runs in the background and handles deeper deduplication, identifying semantic overlap across memories and linking them via EVOLVES chains (replaces, enriches, confirms, or challenges). The plugin handles obvious duplicates; the Knowledge Agent handles subtle ones.
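The save-time layer can be sketched like this. The 0.9 threshold mirrors the ">=90% similarity" rule above; the similarity function is a stand-in (a real system would likely compare embeddings, which this sketch does not attempt), and the function name is hypothetical.

```python
from difflib import SequenceMatcher

def save_memory(text, existing, threshold=0.9):
    """Illustrative near-duplicate gate: skip the save and return the
    existing memory when one is >= 90% similar. String similarity stands
    in for whatever the plugin actually uses."""
    for mem in existing:
        if SequenceMatcher(None, text, mem).ratio() >= threshold:
            return ("skipped", mem)  # obvious duplicate: return existing
    existing.append(text)
    return ("saved", text)
```

Subtler semantic overlap, per the paragraph above, is left to the background Knowledge Agent rather than this synchronous check.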
When sessionContext is on, will the agent search again and get duplicate context?
The behavioral guidance adjusts when sessionContext is enabled. Instead of "search with memory_search when prior context would help," it tells the agent "relevant memories have already been injected, use memory_search only for specific follow-up queries." This reduces redundant searches. The injected context and manual search results are in different XML blocks, so even if overlap occurs, it doesn't confuse the agent.
What happens to conversations I don't explicitly save?
With sessionDigest enabled (the default), every conversation is saved as a searchable thread. The boundary follows OpenClaw's own session lifecycle: one active chat becomes one Mem thread, /new or /reset starts a fresh thread, and internal compaction keeps appending to the same thread instead of forking a duplicate. Context Engine capture and hook-based capture still converge on the same conversation. Internal helper sessions like temp:* and subagent runs are excluded, so your recent threads stay focused on real chats. On top of thread capture, a lightweight LLM triage checks if the conversation contained decisions, insights, or preferences worth keeping as structured memories. If yes, they're extracted with proper types, labels, and temporal context. If the conversation was routine ("fix this typo"), nothing extra is saved.
Can memories be wrong or outdated?
Yes. Memories are what you or the AI saved at a point in time. Nowledge Mem's EVOLVES chains track how understanding changes: a newer memory can supersede, enrich, or challenge an older one. The Knowledge Agent identifies these relationships automatically. When you search, the relevance scoring considers recency, so newer memories rank higher by default.
Slash Commands
| Command | Description |
|---------|-------------|
| /remember <text> | Save a quick memory |
| /recall <query> | Search your knowledge base |
| /forget <id or query> | Delete a memory |
CLI Commands
```shell
openclaw nowledge-mem search "database optimization"
openclaw nowledge-mem status
```

Configuration
No config is required for a normal npm install. The defaults work for local mode, and openclaw plugins install already enables the plugin and selects the memory slot.
To change settings, use the OpenClaw plugin settings UI. Changes take effect on restart.
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| sessionContext | boolean | false | Inject Working Memory + relevant memories at prompt time |
| sessionDigest | boolean | true | Thread capture + LLM distillation at session end |
| digestMinInterval | integer | 300 | Minimum seconds between session digests for the same thread (0-86400) |
| maxContextResults | integer | 5 | Max memories to inject at prompt time (1-20, only used when sessionContext is enabled) |
| recallMinScore | integer | 0 | Min relevance score (0-100%) to include in auto-recall. 0 = include all |
| maxThreadMessageChars | integer | 800 | Maximum characters preserved per captured OpenClaw thread message before truncation |
| corpusSupplement | boolean | false | Register as a searchable corpus inside memory-core's recall and dreaming pipeline |
| corpusMaxResults | integer | 5 | Max results per corpus supplement search (1-20) |
| corpusMinScore | integer | 0 | Min relevance score (0-100%) for supplement results. 0 = include all |
| dreaming | object | — | Optional OpenClaw-native dreaming config accepted when this plugin owns the memory slot. The host consumes it; this plugin only tolerates and preserves it |
| apiUrl | string | "" | Remote server URL. Empty = local (http://127.0.0.1:14242) |
| apiKey | string | "" | API key for remote access. Injected as NMEM_API_KEY env var, never logged |
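As an illustration of `maxThreadMessageChars`, a captured message might be clipped like this before thread sync. The helper and the truncation marker are a sketch, not the plugin's code; only the 800-character default comes from the table above.

```python
def truncate_message(text, max_chars=800, marker="... [truncated]"):
    """Illustrative per-message truncation before thread capture: keep
    the first max_chars characters and flag that content was dropped."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + marker
```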
Remote access
Configure this machine once:
```shell
nmem config client set url https://<your-url>
nmem config client set api-key nmem_...
```

That writes the shared local client config used by nmem, OpenClaw, Bub, Claude Code, and other integrations on this machine. You can still set Server URL and API key in the OpenClaw dashboard if you want plugin-specific overrides.
See Access Mem Anywhere.
Advanced: environment variables
For CI, scripts, or Docker deployments, all options also support environment variables:
```shell
NMEM_SESSION_CONTEXT=true
NMEM_SESSION_DIGEST=true
NMEM_DIGEST_MIN_INTERVAL=300
NMEM_MAX_CONTEXT_RESULTS=5
NMEM_RECALL_MIN_SCORE=0
NMEM_MAX_THREAD_MESSAGE_CHARS=800
NMEM_CORPUS_SUPPLEMENT=false
NMEM_CORPUS_MAX_RESULTS=5
NMEM_CORPUS_MIN_SCORE=0
NMEM_API_URL=https://...
NMEM_API_KEY=your-key
```

Priority
Plugin-specific settings:

```
openclaw.json (legacy) > OpenClaw dashboard > env vars > defaults
```

Credentials (apiUrl, apiKey):

```
openclaw.json (legacy) > OpenClaw dashboard > config.json (shared) > env vars > defaults
```

`~/.nowledge-mem/openclaw.json` is still honored for backward compatibility but is no longer the recommended path. New users should configure plugin-specific settings via the OpenClaw dashboard and shared credentials via `nmem config client ...`.
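The precedence chains can be modeled as a first-match lookup over ordered sources. The source names are taken from the priority lists above; the resolver function itself and the sample data are illustrative, not the plugin's implementation.

```python
def resolve_setting(key, sources, default=None):
    """Illustrative precedence resolution: walk the sources in priority
    order and return the first value that defines the key, plus where it
    came from (mirroring the kind of report nowledge_mem_status gives)."""
    for name, values in sources:
        if key in values:
            return values[key], name
    return default, "default"

# Hypothetical credential sources, highest priority first:
cred_sources = [
    ("openclaw.json (legacy)", {}),
    ("OpenClaw dashboard", {"apiUrl": "https://nowledge.example.com"}),
    ("config.json (shared)", {"apiUrl": "http://other", "apiKey": "k"}),
    ("env vars", {}),
]
```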
Use nowledge_mem_status (or openclaw nowledge-mem status) to see where each value comes from.
Trust warning after install
If OpenClaw logs a warning like plugins.allow is empty; discovered non-bundled plugins may auto-load, the plugin is installed correctly. OpenClaw is warning that you have not pinned trust for non-bundled plugins yet. Add:
```json
{
  "plugins": {
    "allow": ["openclaw-nowledge-mem"]
  }
}
```

If you also use linked or workspace copies, review `plugins.load.paths` before relying on an allowlist. OpenClaw allowlists plugin ids, not install provenance.
Troubleshooting
Only memory_search and memory_get show up — saves go to local markdown files
The memory slot is probably still set to the built-in memory-core. OpenClaw 3.22+ defaults to memory-core when no explicit slot is configured. Reinstall to reset the slot automatically:
```shell
openclaw plugins install clawhub:@nowledge/openclaw-nowledge-mem
```

Or set the slot manually in `~/.openclaw/openclaw.json`:
```json
{
  "plugins": {
    "slots": { "memory": "openclaw-nowledge-mem" }
  }
}
```

Restart OpenClaw after either change.
Plugin tools not available
All 10 plugin tools register automatically when the plugin loads. No tool-level config is needed — just make sure the plugin itself is trusted:
```json
{
  "plugins": {
    "allow": ["openclaw-nowledge-mem"]
  }
}
```

corpusSupplement: true is set, but startup still logs corpus=false
Update to 0.8.15 or newer. Older builds could disable the plugin too early when memory-core owned the memory slot, which made supplement mode look unavailable even on new OpenClaw builds.
If openclaw status shows a CRITICAL warning about plugins.allow, this is the fix. Run nowledge_mem_status inside a conversation to check both plugin trust and memory slot status.
Do not list nowledge_mem_* tool names in tools.allow — OpenClaw silently strips allowlists that contain only plugin entries, so the config looks active but does nothing.
Memory tools still work, but thread auto-sync does not
This is usually a capture-path issue, not a search/save issue:
- memory tools use the local `nmem` CLI
- thread auto-sync uses the Mem HTTP API
- cron / isolated automation sessions are intentionally excluded from capture in 0.8.x
Run nowledge_mem_status in a conversation and check, in this order:
- the plugin is loaded
- `sessionDigest` is still `true`
- the backend is reachable
- which capture path is active: hook events, or Context Engine `afterTurn` with hook fallback
- whether you changed plugin settings recently without restarting OpenClaw yet
If your config enables plugins.slots.contextEngine: "nowledge-mem":
- on 0.8.6+, Context Engine capture stays active and hooks remain enabled as a safety net
- on 0.8.5 and earlier, temporarily removing the `contextEngine` slot is a valid isolation step if thread sync stops while tools still work
Remember: OpenClaw applies plugin setting changes after restart. If you turned sessionDigest off earlier but had not restarted yet, thread sync could appear to keep working until the next restart, then stop.
What healthy OpenClaw thread sync looks like:
- one active OpenClaw chat becomes one Mem thread
- running `/new` or `/reset` starts a fresh Mem thread
- compaction does not fork a second thread for the same chat
- helper sessions like `temp:slug-generator` do not appear
- the synthetic `/new`/`/reset` startup prompt is not stored as the first user message
To inspect the recent synced threads directly, run:
```shell
nmem t list --source openclaw -n 20
```

Search timeouts with many concurrent agents
When running many agents in parallel, all searches share a single database connection. Upgrade to Nowledge Mem v0.6.12+ (backend) so scoring writes no longer block search responses.
nmem --json status fails in local mode
The Nowledge Mem backend is not running. Start it from the desktop app, or run the backend manually.
Remote mode not connecting
Run nmem config client show and confirm the effective URL and API key state. Then run nmem --json --api-url "$URL" status to verify the server is reachable.
What Makes This Different
- Local-first: no API key, no cloud account. Your knowledge stays on your machine.
- Knowledge graph: memories are connected nodes, not isolated vectors. EVOLVES edges track how understanding grows over time.
- Source provenance: the Library ingests PDFs, DOCX, URLs. Extracted knowledge links back to the exact document section it came from.
- Working Memory: an AI-generated daily briefing that evolves, not a static user profile.
- Cross-AI continuity: knowledge captured in any tool (Cursor, Claude, ChatGPT) flows to OpenClaw and back.
- Typed memories: 8 knowledge types mapped to graph node properties. Structured understanding, not text blobs.
- Multi-signal search: not just semantic similarity. Combines embedding, BM25 keyword, label match, graph & community signals, and recency/importance decay. See Search & Relevance.
License
MIT
