# @cortexmem/cortex-bridge

v0.5.0

Bridge plugin that connects [OpenClaw](https://github.com/openclaw) agents to the [Cortex](https://github.com/rikouu/cortex) memory service.
Uses OpenClaw's standard `register(api)` plugin interface.
## Install

```
openclaw plugins install @cortexmem/cortex-bridge
```

## Configure
The plugin can be configured via OpenClaw settings or environment variables:
| Config Key | Env Variable | Default | Description |
|------------|-------------|---------|-------------|
| cortexUrl | CORTEX_URL | http://localhost:21100 | Cortex server URL |
| agentId | — | openclaw | Agent identifier for memory isolation |
| debug | CORTEX_DEBUG | false | Enable debug logging |
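As a sketch, the precedence implied by the table (explicit config key, then environment variable, then default) could be resolved like this. This is illustrative only — the function and the exact parsing of `CORTEX_DEBUG` are assumptions, not part of the plugin's actual implementation:

```python
import os

# Defaults and env-var mappings taken from the config table above.
DEFAULTS = {"cortexUrl": "http://localhost:21100", "agentId": "openclaw", "debug": False}
ENV_VARS = {"cortexUrl": "CORTEX_URL", "debug": "CORTEX_DEBUG"}

def resolve_config(settings):
    """Illustrative resolution: settings override env vars, which override defaults."""
    config = {}
    for key, default in DEFAULTS.items():
        env_name = ENV_VARS.get(key)
        env_value = os.environ.get(env_name) if env_name else None
        if key in settings:
            config[key] = settings[key]
        elif env_value is not None:
            # Assumed truthy-string parsing for boolean keys like `debug`.
            config[key] = env_value.lower() in ("1", "true") if isinstance(default, bool) else env_value
        else:
            config[key] = default
    return config
```

For example, with `CORTEX_URL` exported and nothing else set, `resolve_config({})` would return the env URL, the default `agentId` of `openclaw`, and `debug` off.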
## Tools
These tools are always available and work reliably:
| Tool | Description |
|------|-------------|
| cortex_recall | Search long-term memory for relevant past conversations, facts, and preferences |
| cortex_remember | Store a fact, preference, or decision (supports category: fact, preference, skill, identity, etc.) |
| cortex_ingest | Send a conversation pair for automatic LLM memory extraction |
| cortex_health | Check if the Cortex memory server is reachable (optional) |
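For illustration, a tool call from the agent to `cortex_remember` or `cortex_ingest` might carry payloads shaped like the following. The argument names (`content`, `category`, `user_message`, `assistant_response`) are assumptions inferred from the descriptions above, not a documented schema:

```python
import json

# Hypothetical payload shapes; only the tool names come from the table above.
remember_call = {
    "tool": "cortex_remember",
    "arguments": {
        "content": "User prefers concise answers",
        "category": "preference",  # e.g. fact, preference, skill, identity
    },
}
ingest_call = {
    "tool": "cortex_ingest",
    "arguments": {
        "user_message": "What timezone am I in?",
        "assistant_response": "You said earlier you are in UTC+2.",
    },
}
print(json.dumps(remember_call, indent=2))
```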
## Slash Command

`/cortex-status` — Quick check whether the Cortex server is online.
## Hooks
The plugin registers lifecycle hooks for automatic memory management:
| Hook | Status | Description |
|------|--------|-------------|
| before_agent_start | Working | Recalls relevant memories and injects them as context before each response |
| agent_end | Not working | Should auto-ingest conversations after each response (see Known Issues) |
| before_compaction | Best-effort | Emergency flush before context compression |
## Known Issues

### `agent_end` hook not firing in streaming mode
**Status:** Upstream bug — openclaw/openclaw#21863
In streaming mode (used by Telegram and other gateway channels), the `agent_end` hook is not dispatched to plugins: the `handleAgentEnd()` function in OpenClaw's streaming event handler never calls `hookRunner.runAgentEnd()`.
This means automatic conversation ingestion does not work in streaming mode. Memory recall (`before_agent_start`) works correctly.
**Workarounds:**
1. **Use the `cortex_ingest` tool** — Instruct your agent (via system prompt) to call `cortex_ingest` after meaningful conversations. Example system prompt addition:

   > After each conversation, use the cortex_ingest tool to save the exchange for long-term memory. Pass the user's message and your response.

2. **Use non-streaming mode** — If your setup supports it, use a non-streaming channel where `agent_end` fires correctly.

3. **Use the `cortex_remember` tool** — For specific facts or preferences, the agent can call `cortex_remember` directly during the conversation.
## License
MIT
