memory-lancedb-context v1.7.0
Memory LanceDB Context Engine
A smart ContextEngine plugin for OpenClaw that integrates with memory-lancedb-pro for retrieval-augmented context management, auto-capture, intelligent memory injection, batch import, and memory-aware compaction.
Features
🧠 Intelligent Memory Injection
- Automatically retrieves relevant memories during context assembly
- Injects historical context into system prompt with relevance scores
- Supports hybrid search (vector + BM25 + reranking)
📝 Auto-Capture
- Detects important user messages using pattern matching
- Supports Chinese, English, and Czech trigger phrases
- Categories: preference, decision, fact, entity, other
📦 Batch Import (ingestBatch)
- Import multiple messages at once
- Configurable batch size (default: 100)
- Automatic duplicate detection
- Progress tracking with detailed results
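
The duplicate check can be pictured as a similarity comparison between a candidate embedding and already-stored embeddings, using the documented `duplicateThreshold` (default 0.92). The sketch below is illustrative only; the function names and storage shape are assumptions, not the plugin's actual internals:

```javascript
// Illustrative sketch of embedding-based duplicate detection.
// The 0.92 default mirrors the plugin's duplicateThreshold option;
// everything else here is hypothetical.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A candidate is skipped if any stored vector is at least as
// similar as the threshold.
function isDuplicate(candidateVec, storedVecs, threshold = 0.92) {
  return storedVecs.some((vec) => cosineSimilarity(candidateVec, vec) >= threshold);
}
```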
🔄 Historical Session Bootstrap
- Import historical sessions into memory on startup
- Archive file support (.archive, .bak, history-*)
- Preserves context across sessions
🗜️ Custom Compaction Strategies
| Strategy | Description |
|----------|-------------|
| aggressive | Maximum compression, minimal preservation |
| balanced | Default; a good balance between compression and context |
| conservative | Minimal compression, maximum preservation |
| custom | User-defined instructions |
⚡ Smart Summarization
- Auto-generates summaries from recent messages
- Extracts decisions, facts, entities, preferences
- Stores summaries for future context recovery
Installation
```bash
# Install via npm
npm install memory-lancedb-context

# Or clone to your OpenClaw plugins directory
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context
npm install
```
Configuration
Add to your openclaw.json:
```json
{
  "plugins": {
    "slots": {
      "memory": "memory-lancedb-pro",
      "contextEngine": "memory-lancedb-context"
    },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "your-api-key",
            "model": "text-embedding-3-small"
          }
        }
      },
      "memory-lancedb-context": {
        "enabled": true,
        "config": {
          "autoCapture": true,
          "enableMemoryInjection": true,
          "maxMemoriesToInject": 5,
          "minInjectionScore": 0.4,
          "compactionStrategy": "balanced",
          "enableDuplicateDetection": true,
          "duplicateThreshold": 0.92,
          "enableSmartSummarization": true,
          "maxBatchSize": 100
        }
      }
    }
  }
}
```
Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| autoCapture | boolean | true | Auto-capture user messages that match memory triggers |
| enableMemoryInjection | boolean | true | Inject relevant memories into context during assemble |
| maxMemoriesToInject | integer | 5 | Maximum number of memories to inject (1-10) |
| minInjectionScore | number | 0.4 | Minimum relevance score for memory injection (0-1) |
| defaultAgentId | string | "main" | Default agent ID for scope resolution |
| enableMemoryCompaction | boolean | true | Enable memory-based intelligent compaction |
| preserveImportantTurns | boolean | true | Store important turns to memory during compaction |
| maxPreservedTurns | integer | 20 | Maximum recent turns to analyze (5-50) |
| compactionStrategy | string | "balanced" | Compaction strategy (aggressive/balanced/conservative/custom) |
| customCompactionInstructions | string | "" | Custom instructions for custom strategy |
| enableMemoryDecay | boolean | false | Enable memory decay (future feature) |
| memoryDecayDays | integer | 30 | Memory decay threshold in days |
| enableDuplicateDetection | boolean | true | Enable duplicate detection |
| duplicateThreshold | number | 0.92 | Similarity threshold for duplicates (0.8-1) |
| enableSmartSummarization | boolean | true | Enable smart summarization |
| maxBatchSize | integer | 100 | Maximum batch size for ingestBatch (10-500) |
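
Together, maxMemoriesToInject and minInjectionScore act as a filter-then-truncate step over retrieval results. A minimal sketch of that selection logic, assuming retrieval hits carry a 0-1 relevance score (the hit shape and function name are illustrative, not the plugin's API):

```javascript
// Illustrative: choose which retrieved memories to inject.
// Hits below minInjectionScore are dropped, the rest are ranked
// by score, and at most maxMemoriesToInject survive.
function selectMemoriesToInject(hits, { minInjectionScore = 0.4, maxMemoriesToInject = 5 } = {}) {
  return hits
    .filter((hit) => hit.score >= minInjectionScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxMemoriesToInject);
}
```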
API
ingestBatch(params)
Batch import messages into memory.
```js
const result = await contextEngine.ingestBatch({
  messages: [
    { role: "user", content: "I prefer American coffee without sugar" },
    { role: "user", content: "Project codename is Phoenix" },
  ],
  scope: "agent:main", // optional
  skipDuplicates: true, // default: true
});

// Result:
{
  total: 2,
  ingested: 2,
  skipped: 0,
  failed: 0,
  details: [
    { text: "I prefer American coffee...", status: "ingested" },
    { text: "Project codename is Phoenix", status: "ingested" },
  ]
}
```
bootstrap(params)
Import a historical session on startup.
```js
const result = await contextEngine.bootstrap({
  sessionId: "session-123",
  sessionFile: "/path/to/session.json",
});

// Result:
{
  bootstrapped: true,
  reason: "Imported 15 messages from session, 5 from archives",
  data: {
    totalMessages: 50,
    ingested: 15,
    skipped: 35,
    failed: 0,
    importedArchives: 5
  }
}
```
Compaction Strategies
```jsonc
// Aggressive - maximum compression
{
  "compactionStrategy": "aggressive"
}

// Balanced - default
{
  "compactionStrategy": "balanced"
}

// Conservative - minimal compression
{
  "compactionStrategy": "conservative"
}

// Custom - user-defined
{
  "compactionStrategy": "custom",
  "customCompactionInstructions": "Focus on technical decisions. Preserve all code snippets."
}
```
How It Works
```
┌─────────────────────────────────────────────────────────────┐
│ User sends message                                          │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ ingest() → Detect triggers → Auto-store important info      │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ assemble() → Retrieve memories → Inject into system prompt  │
│   <relevant-memories>                                       │
│   [HISTORICAL CONTEXT]                                      │
│   - [preference:agent:violet] User likes Americano...       │
│   - [decision:agent:violet] Project codename Phoenix...     │
│   </relevant-memories>                                      │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ Model generates response                                    │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ afterTurn() → Smart summary → Store for future context      │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ compact() → Strategy-based compression → Memory preserved   │
└─────────────────────────────────────────────────────────────┘
```
Memory Triggers
The plugin automatically captures messages containing:
English
- "remember", "prefer", "I like/hate/want"
- "we decided", "switch to", "migrate to"
- "important", "always", "never"
- Phone numbers, email addresses
Chinese
- "记住" (remember), "记一下" (make a note), "别忘了" (don't forget)
- "偏好" (preference), "喜欢" (like), "讨厌" (dislike)
- "决定" (decided), "改用" (switch to), "以后用" (use ... from now on)
- "我的...是" (my ... is), "重要" (important), "关键" (key)
Czech
- "zapamatuj si" (remember), "preferuji" (I prefer), "radši" (rather)
- "rozhodli jsme" (we decided), "budeme používat" (we will use)
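A minimal sketch of what such pattern-based trigger detection could look like, using a few of the phrases listed above (the regex list, category mapping, and function name are illustrative; the plugin's real pattern set is more extensive):

```javascript
// Illustrative trigger detection: first matching pattern wins.
// Only a subset of the documented English/Chinese/Czech phrases
// is shown here.
const TRIGGERS = [
  { pattern: /\b(we decided|switch to|migrate to)\b/i, category: "decision" },
  { pattern: /\b(remember|prefer|i (like|hate|want))\b/i, category: "preference" },
  { pattern: /记住|别忘了|偏好/, category: "preference" },
  { pattern: /zapamatuj si|preferuji/i, category: "preference" },
  { pattern: /[\w.+-]+@[\w-]+\.[\w.]+/, category: "entity" }, // email addresses
];

function detectTrigger(text) {
  const hit = TRIGGERS.find(({ pattern }) => pattern.test(text));
  return hit ? hit.category : null; // null = no capture
}
```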
Comparison with Legacy ContextEngine
| Feature | Legacy | Memory-LanceDB-Context |
|---------|--------|------------------------|
| ingest | ❌ no-op | ✅ Auto-capture important messages |
| ingestBatch | ❌ N/A | ✅ Batch import with duplicate detection |
| assemble | ❌ pass-through | ✅ Retrieve & inject memories |
| bootstrap | ❌ no-op | ✅ Historical session import |
| compact | ✅ LLM summarization | ✅ Strategy-based + memory preservation |
| afterTurn | ❌ no-op | ✅ Smart summarization + storage |
Dependencies
- OpenClaw >= 2026.3.7
- memory-lancedb-pro - Required for memory storage
Development
```bash
# Clone the repository
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context

# Install dependencies
npm install

# Test
npm test
```
License
MIT License - see LICENSE for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'feat: Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Acknowledgments
- OpenClaw - The AI agent framework this plugin is designed for
- LanceDB - Serverless vector database
- memory-lancedb-pro - The memory storage backend
