
lodis-mcp

v0.6.0


Universal AI memory layer — MCP server with persistent, cross-tool memory for Claude Code, Cursor, Windsurf, and more


Lodis


Universal, portable memory layer for AI agents.

Lodis gives your AI tools persistent, cross-tool memory backed by local SQLite. Any MCP-compatible tool — Claude Code, Cursor, Windsurf, Claude Desktop, Cline — connects to the same memory. Your agents remember what you tell them, learn from corrections, and build confidence over time.

Quick Start

Add to your MCP config and you're done:

{
  "mcpServers": {
    "lodis": {
      "command": "npx",
      "args": ["-y", "lodis-mcp"]
    }
  }
}

That's it. Your AI agent now has persistent memory.

Migrating from engrams? This project was published as engrams on npm prior to v0.6.0. The package was renamed to lodis-mcp — same code, same data directory (~/.lodis/), same MCP tools. To migrate, swap engrams for lodis-mcp in your MCP config ("args": ["-y", "lodis-mcp"]) and reinstall. The old engrams package on npm is frozen at v0.5.1 and will not receive further updates.

Getting Started

After installing, tell your AI assistant:

"Help me set up Lodis"

The assistant will scan your connected tools (calendar, email, GitHub), ask a few targeted questions, and seed 30-50 memories with entity types and connections. Review the results in the web dashboard at http://localhost:3838.

Importing Existing Memories

  • Claude Code auto-memory: "Import my Claude memories into Lodis"
  • ChatGPT memory export: "Import this ChatGPT memory export into Lodis"
  • Cursor rules: "Import my .cursorrules as Lodis preferences"

What You Get

Lodis provides 29 MCP tools:

| Tool | Description |
|------|-------------|
| memory_search | Hybrid semantic + keyword search with filters |
| memory_context | Token-budget-aware context retrieval |
| memory_briefing | LLM-generated entity profile summaries |
| memory_write | Create a new memory with dedup detection and permanence tiers |
| memory_bulk_upload | Upload many memories at once (bypasses dedup) for imports from canonical external sources |
| memory_update | Modify a memory's content, detail, or metadata |
| memory_remove | Soft-delete a memory |
| memory_remove_bulk | Soft-delete many memories at once, scoped by domain / entityName / ids[]. Defaults to dryRun. |
| memory_confirm | Confirm a memory is correct (boosts confidence to 0.99) |
| memory_correct | LLM-powered semantic diff correction |
| memory_flag_mistake | Flag a memory as incorrect (degrades confidence) |
| memory_pin | Pin as canonical (decay-immune, high confidence) |
| memory_archive | Archive for reference (deprioritize, freeze confidence) |
| memory_connect | Create typed relationships between memories |
| memory_get_connections | View a memory's relationship graph |
| memory_split | Break compound memories into atomic units |
| memory_scrub | Detect and redact PII or secrets from memory content |
| memory_list | Browse memories by domain, sorted by confidence or recency |
| memory_list_domains | List all memory domains with counts |
| memory_list_entities | List extracted entities grouped by type |
| memory_classify | Batch-classify untyped memories using entity extraction |
| memory_set_permissions | Configure per-agent read/write access |
| memory_onboard | Guided onboarding: scan tools, interview, seed memories |
| memory_interview | Agent-driven cleanup and gap-fill |
| memory_import | Import from Claude, ChatGPT, Cursor, gitconfig, or plaintext |
| memory_export | Export memories as portable JSON |
| memory_index | Index external docs (Drive, Notion, filesystem) |
| memory_index_status | Check staleness of indexed documents |
| memory_migrate | Migrate local memories to cloud (Pro tier) |
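Agents invoke these like any other MCP tool; your client handles the JSON-RPC framing. As a rough illustration (the argument names query and limit are assumptions, not the documented schema), a tools/call request for memory_search might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_search",
    "arguments": {
      "query": "preferred testing framework",
      "limit": 5
    }
  }
}
```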

Key Features

  • Hybrid search — FTS5 full-text + vector embeddings (all-MiniLM-L6-v2, local) merged via Reciprocal Rank Fusion
  • Entity types — Memories auto-classified into 13 types: person, organization, place, project, preference, event, goal, fact, lesson, routine, skill, resource, decision
  • Knowledge graph — Typed relationships between memories, auto-connected entities
  • Memory permanence — Four tiers (canonical, active, ephemeral, archived) control confidence decay and search ranking
  • Context-packed search — Token-budget-aware retrieval via memory_context for efficient LLM context windows
  • Entity profiles — LLM-generated summaries of known entities via memory_briefing with 24h cache
  • Confidence scoring — 0-1 scale based on confirmations, corrections, mistakes, usage, and time decay
  • Dedup on write — Similar memories detected and surfaced to the agent for resolution
  • Document indexing — Index external documents (Drive, Notion, filesystem) for unified search
  • PII detection — Regex-based pattern detection with memory_scrub for redaction
  • Source attribution — Every memory tracks which agent learned it and how
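The hybrid-search merge above combines the FTS5 ranking and the vector ranking with Reciprocal Rank Fusion. A minimal sketch of plain RRF (illustrative only; Lodis additionally applies confidence weighting and a recency boost, omitted here):

```typescript
// Reciprocal Rank Fusion: merge ranked result lists by summing 1/(k + rank).
// k = 60 is the constant Lodis reports using; ids are document identifiers.
function rrfMerge(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      const rank = index + 1; // 1-based rank within this list
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

Each document's score is the sum of 1/(k + rank) over the lists it appears in, so results ranked well by both keyword and vector search rise to the top even when neither list ranks them first.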

MCP Config Examples

Claude Code

In ~/.claude.json or your project's .mcp.json:

{
  "mcpServers": {
    "lodis": {
      "command": "npx",
      "args": ["-y", "lodis-mcp"]
    }
  }
}

Claude Desktop

In ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "lodis": {
      "command": "npx",
      "args": ["-y", "lodis-mcp"]
    }
  }
}

Cursor

In .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "lodis": {
      "command": "npx",
      "args": ["-y", "lodis-mcp"]
    }
  }
}

Windsurf

In ~/.windsurf/mcp.json:

{
  "mcpServers": {
    "lodis": {
      "command": "npx",
      "args": ["-y", "lodis-mcp"]
    }
  }
}

How It Works

  • Local-first: All data stored in ~/.lodis/lodis.db (SQLite). No accounts, no cloud, no API keys for core functionality.
  • Hybrid search: FTS5 keyword search + sqlite-vec vector embeddings, merged with Reciprocal Rank Fusion (k=60). Confidence-weighted scoring and recency boost.
  • Embeddings: all-MiniLM-L6-v2 via Transformers.js — runs locally, no API calls, no cost. ~22MB model cached on first search.
  • Confidence scoring: Memories start with confidence based on source type (stated: 90%, observed: 75%, inferred: 65%). Confirmations boost to 99%, corrections reset, mistakes degrade. Unused memories decay over time.
  • Entity extraction: Memories auto-classified into 13 entity types with structured data. Connections auto-created between related entities.
  • Source attribution: Every memory tracks which agent wrote it and how it was acquired.
  • Audit trail: All changes logged in an event timeline.
  • Cross-tool: Every MCP-compatible tool shares the same memory database.
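The confidence lifecycle described above can be sketched in a few lines. This is an illustrative model, not Lodis's actual implementation: the starting values match the stated percentages, but the degradation step and decay rate are made-up assumptions.

```typescript
type Source = "stated" | "observed" | "inferred";

// Initial confidence by source type, matching the percentages above.
const INITIAL: Record<Source, number> = {
  stated: 0.9,
  observed: 0.75,
  inferred: 0.65,
};

// Confirmation boosts confidence to 0.99 regardless of the prior value.
const confirm = (_c: number): number => 0.99;

// Flagging a mistake degrades confidence (the step size is hypothetical).
const flagMistake = (c: number): number => Math.max(0, c - 0.2);

// Unused memories decay exponentially over time (rate is hypothetical).
const decay = (c: number, daysUnused: number, rate = 0.002): number =>
  c * Math.exp(-rate * daysUnused);
```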

LLM Provider (optional)

Entity extraction, correction, and splitting use an LLM. Bring your own API key:

Anthropic (auto-detected)

export ANTHROPIC_API_KEY=sk-ant-...

OpenAI

export OPENAI_API_KEY=sk-...
export LODIS_LLM_PROVIDER=openai

Ollama (local, free)

ollama pull llama3.2
export LODIS_LLM_PROVIDER=ollama

Or configure via ~/.lodis/config.json for per-task model routing. No LLM? Core features (search, store, connect) work without one.

Web Dashboard

Lodis includes a web dashboard for browsing and managing your memories visually. Clone the repo to use it:

git clone https://github.com/Sunrise-Labs-Dot-AI/lodis.git
cd lodis && pnpm install && pnpm build

# Start the dashboard
cd packages/dashboard && pnpm dev

Open http://localhost:3838 to browse memories, search, confirm, correct, manage agent permissions, explore the knowledge graph, view entity profiles, browse archived memories, and run cleanup operations.

Contributing

Contributions welcome! Please open an issue or pull request on GitHub.

License

MIT