
memory-lancedb-context · v1.7.0 · 524 downloads

Memory LanceDB Context Engine


A smart ContextEngine plugin for OpenClaw that integrates with memory-lancedb-pro for retrieval-augmented context management, auto-capture, intelligent memory injection, batch import, and memory-aware compaction.

Features

🧠 Intelligent Memory Injection

  • Automatically retrieves relevant memories during context assembly
  • Injects historical context into system prompt with relevance scores
  • Supports hybrid search (vector + BM25 + reranking)
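
The hybrid search described above can be sketched as a simple score fusion: normalize each signal, combine with fixed weights, then sort. This is an illustrative sketch only; the weights, field names (`vectorScore`, `bm25Score`), and actual fusion method used by memory-lancedb-pro are assumptions, not its real internals.

```javascript
// Hypothetical score fusion for hybrid (vector + BM25) retrieval.
// Each candidate carries a vectorScore and a bm25Score; both signals
// are max-normalized so they share a 0-1 range before weighting.
function normalize(scores) {
  const max = Math.max(...scores, 1e-9);
  return scores.map((s) => s / max);
}

function hybridRank(candidates, weights = { vector: 0.6, bm25: 0.4 }) {
  const vec = normalize(candidates.map((c) => c.vectorScore));
  const bm25 = normalize(candidates.map((c) => c.bm25Score));
  return candidates
    .map((c, i) => ({
      ...c,
      score: weights.vector * vec[i] + weights.bm25 * bm25[i],
    }))
    .sort((a, b) => b.score - a.score);
}
```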

📝 Auto-Capture

  • Detects important user messages using pattern matching
  • Supports Chinese, English, and Czech trigger phrases
  • Categories: preference, decision, fact, entity, other

📦 Batch Import (ingestBatch)

  • Import multiple messages at once
  • Configurable batch size (default: 100)
  • Automatic duplicate detection
  • Progress tracking with detailed results
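
Duplicate detection during batch import can be pictured as an embedding-similarity check: a candidate is skipped when its cosine similarity to any stored embedding meets the configured `duplicateThreshold` (default 0.92). A minimal sketch, assuming embeddings are plain number arrays; the plugin's actual storage and comparison path may differ.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// A candidate is a duplicate if it is close enough to any existing memory.
function isDuplicate(candidate, existing, threshold = 0.92) {
  return existing.some((e) => cosine(candidate, e) >= threshold);
}
```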

🔄 Historical Session Bootstrap

  • Import historical sessions into memory on startup
  • Archive file support (.archive, .bak, history-*)
  • Preserves context across sessions

🗜️ Custom Compaction Strategies

| Strategy | Description |
|----------|-------------|
| aggressive | Maximum compression, minimal preservation |
| balanced | Default; a good balance between compression and context |
| conservative | Minimal compression, maximum preservation |
| custom | User-defined instructions |

⚡ Smart Summarization

  • Auto-generates summaries from recent messages
  • Extracts decisions, facts, entities, preferences
  • Stores summaries for future context recovery

Installation

```bash
# Install via npm
npm install memory-lancedb-context

# Or clone to your OpenClaw plugins directory
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context
npm install
```

Configuration

Add to your openclaw.json:

```json
{
  "plugins": {
    "slots": {
      "memory": "memory-lancedb-pro",
      "contextEngine": "memory-lancedb-context"
    },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "your-api-key",
            "model": "text-embedding-3-small"
          }
        }
      },
      "memory-lancedb-context": {
        "enabled": true,
        "config": {
          "autoCapture": true,
          "enableMemoryInjection": true,
          "maxMemoriesToInject": 5,
          "minInjectionScore": 0.4,
          "compactionStrategy": "balanced",
          "enableDuplicateDetection": true,
          "duplicateThreshold": 0.92,
          "enableSmartSummarization": true,
          "maxBatchSize": 100
        }
      }
    }
  }
}
```

Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| autoCapture | boolean | true | Auto-capture user messages that match memory triggers |
| enableMemoryInjection | boolean | true | Inject relevant memories into context during assemble |
| maxMemoriesToInject | integer | 5 | Maximum number of memories to inject (1-10) |
| minInjectionScore | number | 0.4 | Minimum relevance score for memory injection (0-1) |
| defaultAgentId | string | "main" | Default agent ID for scope resolution |
| enableMemoryCompaction | boolean | true | Enable memory-based intelligent compaction |
| preserveImportantTurns | boolean | true | Store important turns to memory during compaction |
| maxPreservedTurns | integer | 20 | Maximum recent turns to analyze (5-50) |
| compactionStrategy | string | "balanced" | Compaction strategy (aggressive/balanced/conservative/custom) |
| customCompactionInstructions | string | "" | Custom instructions for the custom strategy |
| enableMemoryDecay | boolean | false | Enable memory decay (future feature) |
| memoryDecayDays | integer | 30 | Memory decay threshold in days |
| enableDuplicateDetection | boolean | true | Enable duplicate detection |
| duplicateThreshold | number | 0.92 | Similarity threshold for duplicates (0.8-1) |
| enableSmartSummarization | boolean | true | Enable smart summarization |
| maxBatchSize | integer | 100 | Maximum batch size for ingestBatch (10-500) |

API

ingestBatch(params)

Batch import messages into memory.

```javascript
const result = await contextEngine.ingestBatch({
  messages: [
    { role: "user", content: "I prefer American coffee without sugar" },
    { role: "user", content: "Project codename is Phoenix" },
  ],
  scope: "agent:main",   // optional
  skipDuplicates: true,  // default: true
});

// Result:
// {
//   total: 2,
//   ingested: 2,
//   skipped: 0,
//   failed: 0,
//   details: [
//     { text: "I prefer American coffee...", status: "ingested" },
//     { text: "Project codename is Phoenix", status: "ingested" },
//   ]
// }
```

bootstrap(params)

Import historical session on startup.

```javascript
const result = await contextEngine.bootstrap({
  sessionId: "session-123",
  sessionFile: "/path/to/session.json",
});

// Result:
// {
//   bootstrapped: true,
//   reason: "Imported 15 messages from session, 5 from archives",
//   data: {
//     totalMessages: 50,
//     ingested: 15,
//     skipped: 35,
//     failed: 0,
//     importedArchives: 5
//   }
// }
```

Compaction Strategies

```jsonc
// Aggressive - maximum compression
{
  "compactionStrategy": "aggressive"
}

// Balanced - default
{
  "compactionStrategy": "balanced"
}

// Conservative - minimal compression
{
  "compactionStrategy": "conservative"
}

// Custom - user-defined
{
  "compactionStrategy": "custom",
  "customCompactionInstructions": "Focus on technical decisions. Preserve all code snippets."
}
```

How It Works

```
┌─────────────────────────────────────────────────────────────┐
│                     User sends message                       │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│  ingest() → Detect triggers → Auto-store important info     │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│  assemble() → Retrieve memories → Inject into system prompt │
│  <relevant-memories>                                        │
│  [HISTORICAL CONTEXT]                                       │
│  - [preference:agent:violet] User likes Americano...        │
│  - [decision:agent:violet] Project codename Phoenix...      │
│  </relevant-memories>                                       │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│                    Model generates response                  │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│  afterTurn() → Smart summary → Store for future context     │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│  compact() → Strategy-based compression → Memory preserved  │
└─────────────────────────────────────────────────────────────┘
```

Memory Triggers

The plugin automatically captures messages containing:

English

  • "remember", "prefer", "I like/hate/want"
  • "we decided", "switch to", "migrate to"
  • "important", "always", "never"
  • Phone numbers, email addresses

Chinese

  • "记住", "记一下", "别忘了"
  • "偏好", "喜欢", "讨厌"
  • "决定", "改用", "以后用"
  • "我的...是", "重要", "关键"

Czech

  • "zapamatuj si", "preferuji", "radši"
  • "rozhodli jsme", "budeme používat"

Comparison with Legacy ContextEngine

| Feature | Legacy | Memory-LanceDB-Context |
|---------|--------|------------------------|
| ingest | ❌ no-op | ✅ Auto-capture important messages |
| ingestBatch | ❌ N/A | ✅ Batch import with duplicate detection |
| assemble | ❌ pass-through | ✅ Retrieve & inject memories |
| bootstrap | ❌ no-op | ✅ Historical session import |
| compact | ✅ LLM summarization | ✅ Strategy-based + memory preservation |
| afterTurn | ❌ no-op | ✅ Smart summarization + storage |

Dependencies

  • memory-lancedb-pro (the memory backend this plugin integrates with)

Development

```bash
# Clone the repository
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context

# Install dependencies
npm install

# Test
npm test
```

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (`git checkout -b feature/amazing-feature`)
  3. Commit your changes (`git commit -m 'feat: Add amazing feature'`)
  4. Push to the branch (`git push origin feature/amazing-feature`)
  5. Open a Pull Request

Acknowledgments