
rabisai-memory

v0.3.0

AI memory plugin for OpenClaw — embedded LanceDB vector search with confidence scoring, contradiction detection, multi-signal ranking, and user verification loop

Why rabisai-memory?

Most AI memory solutions are cloud-hosted, black-box, or lack verification. rabisai-memory is different:

  • 100% local — LanceDB (Rust) + SQLite + Ollama. Your data never leaves your machine.
  • User verification loop — memories start with low confidence and become trusted only after user confirmation.
  • Contradiction-aware — detects when new facts conflict with old ones and automatically supersedes the outdated ones.
  • Multi-signal ranking — not just semantic similarity; recall combines semantic score with recency, confidence, and priority.
  • Zero config — install, and it works. Each agent gets its own namespace automatically.

| | rabisai-memory | Mem0 | MemGPT/Letta |
|---|---|---|---|
| Runs locally | Yes (embedded) | Cloud API | Self-host required |
| User verification | Built-in | No | No |
| Contradiction detection | VI/EN bilingual | No | Basic |
| Confidence scoring | 4-signal ranking | Single score | No |
| OpenClaw native | Plugin (1-cmd install) | SDK integration | Separate server |
| Privacy | Data on your machine | Cloud storage | Depends on setup |

Quick start

# 1. Install Ollama (if not already)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Install the plugin
openclaw plugins install rabisai-memory

# 3. Done — start chatting with your agent
openclaw agent --agent main -m "My favorite color is blue"

The plugin auto-pulls the embedding model on first install.

What happens behind the scenes

You: "My favorite color is blue"
                    ↓
            [agent_end hook]
                    ↓
    ┌───────────────────────────────┐
    │ 1. Extract: "favorite color   │
    │    is blue" (type: preference) │
    │ 2. Embed via Ollama (1024-dim) │
    │ 3. Check duplicates (SHA-256)  │
    │ 4. Store in LanceDB + SQLite   │
    │    confidence: 0.6             │
    └───────────────────────────────┘
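The duplicate check in step 3 can be sketched as follows — a minimal illustration, assuming the plugin hashes normalized text; `contentHash` and `isDuplicate` are hypothetical names, not the plugin's actual API:

```typescript
import { createHash } from "node:crypto";

// Hash normalized text so trivially rephrased duplicates collide.
function contentHash(text: string): string {
  const normalized = text.trim().toLowerCase().replace(/\s+/g, " ");
  return createHash("sha256").update(normalized).digest("hex");
}

const seen = new Set<string>();

// A memory whose content hash was already stored is skipped.
function isDuplicate(text: string): boolean {
  const hash = contentHash(text);
  if (seen.has(hash)) return true;
  seen.add(hash);
  return false;
}
```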

Next conversation:
You: "What's my favorite color?"
                    ↓
         [before_agent_start hook]
                    ↓
    ┌───────────────────────────────┐
    │ 1. Embed query                │
    │ 2. Vector search (5-20ms)     │
    │ 3. Rank: semantic 50%         │
    │         + recency 20%         │
    │         + confidence 15%      │
    │         + priority 15%        │
    │ 4. Inject top-5 into context  │
    └───────────────────────────────┘
                    ↓
Agent: "Your favorite color is blue"

Later:
You: "Actually, my favorite color is green"
                    ↓
    ┌───────────────────────────────┐
    │ Contradiction detected!       │
    │ "blue" → superseded by "green"│
    │ Old memory: confidence → 0.0  │
    │ New memory: confidence → 0.8  │
    │ Relation: green supersedes    │
    │           blue                │
    └───────────────────────────────┘
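The supersede step shown above can be sketched like this — a hedged illustration of the confidence transitions in the diagram (old memory drops to 0.0, new one starts at 0.8, and a supersedes relation links them); the `Memory` and `Relation` shapes are assumptions, not the plugin's schema:

```typescript
interface Memory {
  id: string;
  text: string;
  confidence: number;
}

interface Relation {
  from: string;
  to: string;
  kind: "supersedes" | "related_to" | "evolved_from";
}

// Retire the contradicted memory and link the replacement to it.
function supersede(oldMem: Memory, newText: string, relations: Relation[]): Memory {
  oldMem.confidence = 0.0; // no longer recalled
  const newMem: Memory = { id: `${oldMem.id}-v2`, text: newText, confidence: 0.8 };
  relations.push({ from: newMem.id, to: oldMem.id, kind: "supersedes" });
  return newMem;
}
```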

Features

| Feature | Description |
|---------|-------------|
| Vector search | LanceDB embedded (Rust) — 5-20ms search, no server |
| Confidence scoring | 0.0-1.0 per memory, user confirmation boosts to 0.95 |
| Contradiction detection | Pattern matching (VI/EN) + L2 distance + entity overlap |
| Memory types | 6 types auto-detected: fact, preference, todo, context, decision, instruction |
| TTL expiry | Ephemeral types auto-expire (todo=30d, context=7d, decision=90d) |
| Ebbinghaus decay | 5%/day for ephemeral, recall boosts confidence (+10%) |
| Multi-signal ranking | Semantic + recency + confidence + priority |
| Auto-consolidation | Near-duplicate memories merge automatically |
| Memory relations | related_to, supersedes, evolved_from — 1-hop enrichment on recall |
| Episodes | Temporal conversation grouping (30min gap detection) |
| Sensitive filter | Blocks API keys, passwords, SSNs; warns on emails |
| Vietnamese support | Bilingual regex for all detectors, phrase normalizer for embeddings |
| Namespace isolation | Each agent ID = separate LanceDB table, zero config |
| 17 agent tools | Search, add, correct, prune, recap, consolidate, and more |
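The TTL, decay, and recall-boost rules in the table can be sketched as follows. Assumptions: decay is multiplicative per day, the recall boost is capped at the 0.95 user-confirmation ceiling, and non-ephemeral types never expire; function names are illustrative:

```typescript
// TTLs from the table: ephemeral types expire, everything else persists.
const TTL_DAYS: Record<string, number> = { todo: 30, context: 7, decision: 90 };

// Ebbinghaus-style decay: 5% per day for ephemeral memories.
function decay(confidence: number, days: number): number {
  return confidence * Math.pow(0.95, days);
}

// Each successful recall boosts confidence by 10%, capped at 0.95.
function recallBoost(confidence: number): number {
  return Math.min(confidence * 1.1, 0.95);
}

function isExpired(type: string, ageDays: number): boolean {
  const ttl = TTL_DAYS[type];
  return ttl !== undefined && ageDays > ttl;
}
```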

Performance

| Operation | Latency | Notes |
|-----------|---------|-------|
| Vector search | 5-20ms | LanceDB in-process, no network |
| SQLite lookup | <1ms | Metadata sidecar |
| Embedding (per text) | ~50ms | Ollama local, qwen3-embedding:0.6b |
| Full recall (cold) | ~200ms | Search + rank + format |
| Memory injection | ~2.6KB | Identity + top-5 ranked memories |

Supported platforms

LanceDB uses native Rust binaries via NAPI. Supported:

  • macOS — Apple Silicon (arm64) + Intel (x64)
  • Linux — x64, arm64 (glibc >= 2.17)
  • Windows — x64

Prerequisites

  • Node.js >= 20
  • Ollama — local embedding server
  • OpenClaw — AI agent gateway

Installation

openclaw plugins install rabisai-memory

The postinstall script auto-pulls the embedding model (qwen3-embedding:0.6b). No config needed.

From source

git clone https://github.com/xthanhn91/rabisai-memory.git
cd rabisai-memory
npm install

Add to openclaw.json:

{
  "plugins": {
    "load": {
      "paths": ["path/to/rabisai-memory"]
    }
  }
}

Configuration

All config is optional. Override in openclaw.json:

{
  "plugins": {
    "entries": {
      "rabisai-memory": {
        "config": {
          "ollamaUrl": "http://127.0.0.1:11434",
          "embeddingModel": "qwen3-embedding:0.6b",
          "embeddingDimensions": 1024,
          "topK": 5
        }
      }
    }
  }
}

| Option | Default | Description |
|--------|---------|-------------|
| ollamaUrl | http://127.0.0.1:11434 | Ollama API URL. Use 127.0.0.1, not localhost (IPv6 issues) |
| embeddingModel | qwen3-embedding:0.6b | Ollama model for embeddings (1024-dim) |
| embeddingDimensions | 1024 | Must match your model's output dimensions |
| lanceDbPath | ./lancedb-data | Where vector data is stored |
| autoRecall | true | Inject memories before each conversation |
| autoCapture | true | Capture new info after each conversation |
| topK | 5 | Number of memories recalled per conversation |
| defaultNamespace | "default" | Fallback when agent ID is unavailable |

Note: Config in openclaw.json overrides code defaults. If you change the embedding model, also update embeddingDimensions — the plugin will auto-migrate existing vectors on next startup.

Architecture

┌──────────────────────────────────────────────┐
│              OpenClaw Gateway                │
│                                              │
│  ┌─────────┐   hooks    ┌────────────────┐  │
│  │  Agent   │──────────→│ rabisai-memory  │  │
│  │          │←──────────│                 │  │
│  └─────────┘  memories  │  ┌───────────┐ │  │
│                          │  │  LanceDB  │ │  │
│                          │  │  (Rust)   │ │  │
│                          │  └───────────┘ │  │
│                          │  ┌───────────┐ │  │
│                          │  │  SQLite   │ │  │
│                          │  │ (sidecar) │ │  │
│                          │  └───────────┘ │  │
│                          └────────────────┘  │
│                                 ↕            │
│                          ┌────────────┐      │
│                          │   Ollama   │      │
│                          │  (local)   │      │
│                          └────────────┘      │
└──────────────────────────────────────────────┘

Memory lifecycle

  1. Capture — agent_end hook extracts facts → embed via Ollama → store in LanceDB + SQLite
  2. Dedup — SHA-256 content hash + near-duplicate consolidation (L2 < 0.15)
  3. Rank — 4-factor scoring: semantic (0.50) + recency (0.20) + confidence (0.15) + priority (0.15)
  4. Recall — before_agent_start hook searches → ranks → injects top-K into context
  5. Decay — Ebbinghaus 5%/day for ephemeral types; recall boosts confidence (+10%)
  6. Prune — Auto-delete stale (confidence < 0.05, age > 30d) + archive unused (0 recalls, > 60d)
  7. Supersede — Contradiction detector marks old facts, links via memory_relations
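Step 3's 4-factor scoring can be sketched as a weighted sum — an illustration using the weights listed above; the `Signals` shape and function names are hypothetical, and all inputs are assumed to be normalized to [0, 1]:

```typescript
interface Signals {
  semantic: number;   // vector similarity to the query
  recency: number;    // 1.0 = just stored, decays toward 0
  confidence: number; // the memory's current confidence
  priority: number;   // type-based priority
}

// Weighted sum with the documented weights: 0.50 / 0.20 / 0.15 / 0.15.
function rankScore(s: Signals): number {
  return 0.5 * s.semantic + 0.2 * s.recency + 0.15 * s.confidence + 0.15 * s.priority;
}

// Sort candidates by score descending and keep the top-K for injection.
function topK(candidates: Signals[], k: number): Signals[] {
  return [...candidates].sort((a, b) => rankScore(b) - rankScore(a)).slice(0, k);
}
```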

Namespace isolation

Each agent gets its own memory namespace automatically — agentId is the namespace:

  • Agent main → table memories_main
  • Agent peter → table memories_peter
  • Fresh tables auto-created on first use
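The naming convention above can be sketched as a one-liner; the sanitization of non-alphanumeric characters is an assumption, and `tableFor` is a hypothetical name:

```typescript
// Map an agent ID to its LanceDB table name (memories_<agentId>),
// falling back to the defaultNamespace when the ID is unavailable.
function tableFor(agentId: string, fallback = "default"): string {
  const ns = (agentId || fallback).replace(/[^a-zA-Z0-9_]/g, "_");
  return `memories_${ns}`;
}
```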

Tools (17)

| Tool | Description |
|------|-------------|
| memory_search | Search memories by text query |
| memory_add | Manually store a memory |
| memory_reset | Clear all memories in current namespace |
| memory_status_update | Update memory confidence or metadata |
| memory_fact_correct | Explicit fact correction (confidence=0.95) |
| memory_false_negatives | Report missed recalls for self-healing |
| memory_prune | Manually trigger stale memory cleanup |
| memory_extend_ttl | Extend expiry for important ephemeral memories |
| memory_session_recap | Session summary grouped by category + priority |
| memory_consolidate | Trigger near-duplicate merge |
| memory_relations_view | View memory relation graph (1-hop) |
| memory_suggest_capture | Detect unstored facts from current context |
| memory_episode_search | Search within temporal episodes |
| memory_correction_stats | View self-healing correction patterns |
| memory_share | Share memories across namespaces |
| memory_code_search | Search code-related memories (file paths, functions) |
| memory_workflow_suggest | Suggest workflows based on past patterns |
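The 30-minute gap detection behind memory_episode_search (see the Episodes feature above) can be sketched as simple gap-based segmentation — an assumption about the grouping rule, with an illustrative function name:

```typescript
// Group message timestamps (ms) into episodes: a gap of more than
// 30 minutes between consecutive messages starts a new episode.
function groupEpisodes(timestamps: number[], gapMs = 30 * 60 * 1000): number[][] {
  const episodes: number[][] = [];
  for (const t of timestamps) {
    const last = episodes[episodes.length - 1];
    if (last && t - last[last.length - 1] <= gapMs) {
      last.push(t); // within the gap: same episode
    } else {
      episodes.push([t]); // gap exceeded (or first message): new episode
    }
  }
  return episodes;
}
```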

Testing

# Unit tests (376 tests, mocked Ollama/LanceDB)
npm test

# Smoke test (requires Ollama running, 19 checks)
npm run test:smoke

# Full gateway integration (requires OpenClaw running)
npm run test:chat

Troubleshooting

Ollama is not running. Start the server and pull the embedding model:

ollama serve
ollama pull qwen3-embedding:0.6b

localhost resolves to ::1. Node.js may resolve localhost to the IPv6 address ::1, which Ollama does not listen on by default. Use http://127.0.0.1:11434 instead (already the plugin's default).

Embedding dimensions mismatch. Update both embeddingModel and embeddingDimensions in openclaw.json; the plugin auto-migrates existing vectors on the next startup.

Conflicting bundled extension. OpenClaw may ship a bundled extensions/memory-lancedb/ that conflicts with this plugin. Move or delete ~/.openclaw/extensions/memory-lancedb/.

Contributing

Contributions welcome! Please:

  1. Fork the repo
  2. Create a feature branch (git checkout -b feat/my-feature)
  3. Run tests (npm test) — all 376 must pass
  4. Submit a PR

License

MIT