
neuromcp v0.7.5

neuromcp

Semantic memory for AI agents — local-first MCP server with hybrid search, compiled wiki knowledge, and crash-resilient session persistence.


npx neuromcp

Why

AI agents forget everything between sessions. Existing solutions either store flat key-value pairs (useless for real knowledge) or require cloud infrastructure and API keys.

neuromcp gives you two layers of memory:

  1. MCP Server — hybrid search (vector + full-text), memory governance, automatic consolidation, all in a single SQLite file
  2. Wiki Knowledge Base (v0.5) — compiled Markdown knowledge that survives crashes, compounds over sessions, and gives your agent project-aware context at every startup

Inspired by Karpathy's LLM Wiki, Mastra's Observational Memory, and Zep's temporal knowledge graphs — but simpler than all of them. No vector DB, no embeddings pipeline, no cloud. Just Markdown files + Git + hooks.

Architecture

~/.neuromcp/
├── memory.db               ← SQLite: hybrid search, MCP tools
├── wiki/                   ← Compiled knowledge (git-tracked)
│   ├── index.md            ← Knowledge map — LLM reads this FIRST
│   ├── schema.md           ← Operating rules for the LLM
│   ├── log.md              ← Append-only changelog
│   ├── people/             ← User profiles, preferences
│   ├── projects/           ← Project knowledge (stack, auth, URLs)
│   ├── systems/            ← Infrastructure (tools, MCP servers)
│   ├── patterns/           ← Reusable patterns (error fixes, routing)
│   ├── decisions/          ← Architecture decisions with context
│   └── skills/             ← Repeatable procedures
└── raw/sessions/           ← Raw session logs (auto-generated)

How the wiki works

| When | What happens |
|------|--------------|
| Session start | Hook injects index.md + user profile + auto-detected project page (~1300 tokens) |
| During session | LLM updates wiki pages when learning something persistent |
| Every 8 tool calls | Hook reminds the LLM to update the wiki |
| Session end | Hook writes the raw session log + git auto-commits all wiki changes |
| Crash | Checkpoint every 5 tool calls to file; Git history for rollback |
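The cadence in the table above (a checkpoint every 5 tool calls, a wiki reminder every 8) boils down to a pair of counters. A minimal sketch, using hypothetical names; this is not neuromcp's actual hook code:

```python
# Illustrative sketch of the hook cadence described above: a crash
# checkpoint every 5 tool calls and a wiki-update reminder every 8.
# Hypothetical sketch, not neuromcp's actual hook implementation.
import json
from pathlib import Path

CHECKPOINT_EVERY = 5  # persist state so a crash loses little work
REMIND_EVERY = 8      # nudge the LLM to update the wiki

class SessionHooks:
    def __init__(self, checkpoint_path: Path):
        self.checkpoint_path = checkpoint_path
        self.tool_calls = 0

    def on_tool_call(self, state: dict) -> list[str]:
        """Return the side effects triggered by this tool call."""
        self.tool_calls += 1
        effects = []
        if self.tool_calls % CHECKPOINT_EVERY == 0:
            self.checkpoint_path.write_text(json.dumps(state))
            effects.append("checkpoint")
        if self.tool_calls % REMIND_EVERY == 0:
            effects.append("remind-wiki-update")
        return effects
```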

What the LLM knows at session start

Schema (operating rules) → How to maintain the wiki
Index (knowledge map)    → What knowledge exists
User profile             → Who you are, how you work
Project page             → Current project details (auto-detected from cwd)
Last session             → What happened last time

Quick Start

1. Start the MCP server

npx neuromcp

2. Initialize the wiki (optional but recommended)

npx neuromcp-init-wiki

This creates the wiki structure, copies hook scripts, and initializes git. Follow the printed instructions to add hooks to your Claude Code settings.

Recommended: Add Ollama for real semantic search

ollama pull nomic-embed-text

neuromcp auto-detects it. No config needed.
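Under the hood, auto-detection amounts to probing Ollama's default local endpoint and falling back to the bundled ONNX provider. A hedged sketch of the idea; the endpoint, timeout, and provider names here are assumptions, not neuromcp's actual detection code:

```python
# Sketch of embedding-provider auto-detection: probe the default local
# Ollama endpoint; fall back to the bundled ONNX provider if it is not
# reachable. Hypothetical illustration, not neuromcp's real code.
from urllib.request import urlopen
from urllib.error import URLError

def detect_provider(ollama_url: str = "http://127.0.0.1:11434") -> str:
    try:
        with urlopen(ollama_url, timeout=0.5):
            return "ollama"
    except (URLError, OSError):
        return "onnx"  # zero-config default
```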

Installation

Claude Code

// ~/.claude.json → mcpServers
{
  "neuromcp": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "neuromcp"]
  }
}

Claude Desktop

// ~/Library/Application Support/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "neuromcp": {
      "command": "npx",
      "args": ["-y", "neuromcp"]
    }
  }
}

Cursor / Windsurf / Cline

Same format — add to your editor's MCP settings.

Per-project isolation

// .mcp.json in project root
{
  "mcpServers": {
    "neuromcp": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "neuromcp"],
      "env": {
        "NEUROMCP_DB_PATH": ".neuromcp/memory.db",
        "NEUROMCP_NAMESPACE": "my-project"
      }
    }
  }
}

MCP Surface

Tools (8)

| Tool | Description |
|------|-------------|
| store_memory | Store with semantic dedup. Returns ID and match status. |
| search_memory | Hybrid vector + FTS search with RRF ranking. Filters by namespace, category, tags, trust, date. |
| recall_memory | Retrieve by ID, namespace, category, or tags — no semantic search. |
| forget_memory | Soft-delete (tombstone). Supports dry_run. |
| consolidate | Dedup, decay, prune, sweep. commit=false for preview, true to apply. |
| memory_stats | Counts, categories, trust distribution, DB size. |
| export_memories | Export as JSONL or JSON. |
| import_memories | Import with content-hash dedup. |
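The RRF ranking named for search_memory is a standard way to fuse a vector hit list with a full-text hit list. A generic sketch of the formula, not neuromcp's internals:

```python
# Generic reciprocal rank fusion (RRF): merge two ranked result lists
# by summing 1 / (k + rank) per document across the lists. This is the
# standard RRF formula, not neuromcp's exact implementation.
def rrf_merge(vector_hits: list[str], fts_hits: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for hits in (vector_hits, fts_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A result that ranks well in both lists beats one that tops only a single list, which is what makes the fusion robust to either retriever's noise.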

Resources (13)

| URI | Description |
|-----|-------------|
| memory://stats | Global statistics |
| memory://recent | Last 20 memories |
| memory://namespaces | All namespaces with counts |
| memory://health | Server health + metrics |
| memory://stats/{namespace} | Per-namespace stats |
| memory://recent/{namespace} | Recent in namespace |
| memory://id/{id} | Single memory by ID |
| memory://tag/{tag} | Memories by tag |
| memory://namespace/{ns} | All in namespace |
| memory://consolidation/log | Recent consolidation entries |
| memory://operations | Active/recent operations |

Prompts (3)

| Prompt | Description |
|--------|-------------|
| memory_context_for_task | Search relevant memories and format as LLM context |
| review_memory_candidate | Show a proposed memory alongside near-duplicates |
| consolidation_dry_run | Preview consolidation without applying |

Wiki Knowledge Base

The wiki is the compiled, human-readable knowledge layer. It replaces the chaos of session logs with structured, interlinked Markdown pages.

Why a wiki instead of more vector search?

| Traditional RAG | neuromcp Wiki |
|-----------------|---------------|
| Re-derives answers every query | Knowledge compiled once, refined over time |
| Chunking artifacts, retrieval noise | Human-readable pages with source citations |
| Vector DB, embedding pipeline | Plain Markdown + Git |
| Black-box retrieval | Auditable, editable, portable |
| Knowledge evaporates | Knowledge compounds |

Wiki page format

---
title: My Project
type: project
created: 2026-04-06
updated: 2026-04-06
confidence: high
related: [other-project, oauth-setup]
---

# My Project

Description, stack, auth, deployment details...
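Because the frontmatter above is plain text, any script can read it without a YAML library. A minimal parser covering just the flat fields in this example (illustrative only; not part of neuromcp):

```python
# Minimal parser for the wiki page frontmatter shown above. Handles only
# the flat "key: value" and "key: [a, b]" forms in that example; an
# illustration, not a full YAML parser and not part of neuromcp.
def parse_frontmatter(page: str) -> dict:
    lines = page.splitlines()
    assert lines[0] == "---", "frontmatter must open with ---"
    meta: dict = {}
    for line in lines[1:]:
        if line == "---":  # closing delimiter ends the frontmatter
            break
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("[") and value.endswith("]"):
            meta[key.strip()] = [v.strip() for v in value[1:-1].split(",")]
        else:
            meta[key.strip()] = value
    return meta
```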

How to use

The wiki works automatically once hooks are installed. The LLM:

  1. Reads index.md at session start to know what knowledge exists
  2. Reads specific pages when relevant to the current task
  3. Updates pages when learning something new
  4. Gets reminded every 8 tool calls if the wiki needs updating

You can also browse and edit the wiki manually — it's just Markdown files.

Memory Governance

Namespaces isolate memories by project, agent, or domain.

Trust levels (high, medium, low, unverified) rank search results and control decay resistance.

Soft delete tombstones memories — recoverable for 30 days.

Content hashing (SHA-256) deduplicates at write time.

Lineage tracking records source, project ID, and agent ID per memory.
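Write-time dedup by content hash reduces to hashing a normalized copy of the text and checking for an existing row. A sketch under assumed normalization rules (neuromcp's exact scheme may differ):

```python
# Sketch of write-time dedup via SHA-256 content hashing, as described
# above. The whitespace/case normalization here is an assumption for
# the demo; neuromcp's exact normalization may differ.
import hashlib

def content_hash(text: str) -> str:
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class MemoryStore:
    def __init__(self):
        self._by_hash: dict[str, str] = {}  # hash -> memory text

    def store(self, text: str) -> tuple[str, bool]:
        """Return (hash, created). created is False for a duplicate."""
        h = content_hash(text)
        if h in self._by_hash:
            return h, False
        self._by_hash[h] = text
        return h, True
```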

Configuration

All via environment variables. Defaults work for most setups.

| Variable | Default | Description |
|----------|---------|-------------|
| NEUROMCP_DB_PATH | ~/.neuromcp/memory.db | Database file path |
| NEUROMCP_EMBEDDING_PROVIDER | auto | auto, onnx, ollama, openai |
| NEUROMCP_DEFAULT_NAMESPACE | default | Default namespace |
| NEUROMCP_AUTO_CONSOLIDATE | false | Enable periodic consolidation |
| NEUROMCP_TOMBSTONE_TTL_DAYS | 30 | Days before permanent sweep |
| NEUROMCP_LOG_LEVEL | info | debug, info, warn, error |
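Since every setting is an environment variable with a default, resolving the table above is a handful of lookups. An illustrative sketch that mirrors the defaults listed:

```python
# Resolve neuromcp-style configuration from environment variables,
# mirroring the defaults in the table above. Illustrative sketch only.
import os

DEFAULTS = {
    "NEUROMCP_DB_PATH": "~/.neuromcp/memory.db",
    "NEUROMCP_EMBEDDING_PROVIDER": "auto",
    "NEUROMCP_DEFAULT_NAMESPACE": "default",
    "NEUROMCP_AUTO_CONSOLIDATE": "false",
    "NEUROMCP_TOMBSTONE_TTL_DAYS": "30",
    "NEUROMCP_LOG_LEVEL": "info",
}

def load_config(env=os.environ) -> dict:
    # Fall back to the documented default for any unset variable.
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```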

Comparison

| Feature | neuromcp | @modelcontextprotocol/server-memory | mem0 | Karpathy Wiki |
|---------|----------|-------------------------------------|------|---------------|
| Search | Hybrid (vector + FTS + RRF) | Keyword only | Vector only | Index-based |
| Wiki | Compiled Markdown + Git | None | None | Manual Markdown |
| Session persistence | Crash-resilient hooks | None | None | None |
| Project auto-detect | Yes (from cwd) | No | No | No |
| Embeddings | Built-in ONNX (zero config) | None | External API | None |
| Governance | Namespaces, trust, soft delete | None | None | None |
| Storage | SQLite + Markdown | JSON file | Cloud / Postgres | Markdown |
| Infrastructure | Zero | Zero | Cloud account | Zero |

License

MIT