
cortex-mcp · v2.5.0 · 2,580 downloads

Persistent memory for AI coding assistants. Injects context from past sessions into every LLM request.

Cortex MCP — Persistent AI Memory


Give your AI coding assistant a brain that remembers across sessions.

Cortex is an MCP (Model Context Protocol) server that provides persistent, intelligent memory to any AI coding assistant — Cursor, Claude Code, Windsurf, Cline, or any MCP-compatible tool.

What's New in v2.4

  • 🧠 Smart importance scoring — Memories ranked by type (CORRECTION > INSIGHT) + content signals (file paths, error keywords boost importance)
  • 🏷️ Auto topic tags — Memories auto-tagged with technologies, domains, and actions (['typescript', 'auth', 'database'])
  • 🏁 Task completion tracking — Say "finished SEO work" and old SEO memories get demoted automatically
  • 🔍 Resolved memory filtering — FTS search, ranker, and context builder all exclude completed work
  • 🛡️ Code block safety — Auto-learn no longer extracts from code blocks (no const foo = bar() as memories)
  • 📊 Dashboard auto-refresh — Updates every 30 seconds
  • Free LLM integration — Add OPENROUTER_API_KEY for 3x smarter memory extraction (zero cost!)
  • 40+ auto-learn patterns — Captures decisions, corrections, conventions, bug fixes from AI responses
  • Error fingerprints — Auto-captures stack traces, TS errors, npm failures for instant fix recall
  • Success tracking — Detects "that worked!" and stores proven approaches
  • Brain Health Score — Gamified 0-100 score with grades (Newborn → Genius)
  • Git-enhanced resume — Shows recent commits + branch context when resuming work

See CHANGELOG.md for full details.

The Problem

Every time you start a new conversation, your AI assistant forgets everything:

  • Coding conventions you already explained
  • Bugs you already fixed
  • Decisions you already made
  • What files you were working on

Cortex solves this. It stores, ranks, and proactively recalls context so your AI never starts from zero.

Quick Start

Option 1: npx (recommended — always up to date)

Add to your MCP config (no install needed, auto-updates on every launch):

{
    "mcpServers": {
        "cortex": {
            "command": "npx",
            "args": ["-y", "cortex-mcp@latest"],
            "transportType": "stdio"
        }
    }
}

🧠 Supercharge with Free LLM (Optional)

Add a free OpenRouter API key to make Cortex 3x smarter at extracting knowledge:

{
    "mcpServers": {
        "cortex": {
            "command": "npx",
            "args": ["-y", "cortex-mcp@latest"],
            "transportType": "stdio",
            "env": {
                "OPENROUTER_API_KEY": "sk-or-v1-your-key-here"
            }
        }
    }
}

How to get a free key:

  1. Go to openrouter.ai/keys
  2. Create a free account
  3. Generate an API key
  4. Paste it in the config above

What it enables: When regex can't extract patterns from your conversation, Cortex uses a free LLM (Llama 4 Scout) to catch implicit decisions, preferences, and architecture knowledge that keywords miss. Without it, ~70% of conversations produce no memories. With it, that drops to ~20%.
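The regex-first, LLM-fallback flow described above can be sketched as follows. The pattern list and function names are illustrative placeholders, not Cortex's actual extraction rules:

```typescript
// Sketch of regex-first extraction with an optional LLM fallback.
// PATTERNS is a toy subset; Cortex ships 40+ auto-learn patterns.
const PATTERNS: RegExp[] = [
  /\b(?:don'?t|never|always)\s+use\b.+/i, // corrections & conventions
  /\bwe\s+(?:chose|decided|picked)\b.+/i, // decisions
  /\bfixed\b.+/i,                         // bug fixes
];

function extractWithRegex(text: string): string[] {
  const hits: string[] = [];
  for (const p of PATTERNS) {
    const m = text.match(p);
    if (m) hits.push(m[0].trim());
  }
  return hits;
}

async function extractMemories(text: string, apiKey?: string): Promise<string[]> {
  const hits = extractWithRegex(text);
  if (hits.length > 0) return hits; // fast path: keywords matched, no LLM call
  if (!apiKey) return [];           // no OPENROUTER_API_KEY: conversation yields nothing
  return callLlmExtractor(text);    // slow path: LLM catches implicit knowledge
}

async function callLlmExtractor(text: string): Promise<string[]> {
  // Placeholder: in Cortex this would call OpenRouter (Llama 4 Scout).
  return [];
}
```

This is why the key matters: without it, anything the regexes miss is simply dropped.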

Option 2: Global install

npm install -g cortex-mcp

Then add to your MCP config:

{
    "mcpServers": {
        "cortex": {
            "command": "cortex-mcp",
            "transportType": "stdio"
        }
    }
}

Option 3: Clone from source

git clone https://github.com/jaswanthkumarj1234-beep/cortex-mcp.git
cd cortex-mcp
npm install
npm run build

Then add to your MCP config:

{
    "mcpServers": {
        "cortex": {
            "command": "node",
            "args": ["<path-to>/cortex-mcp/dist/mcp-stdio.js"],
            "transportType": "stdio"
        }
    }
}

Restart your IDE and Cortex is active.

Try the Demo

See Cortex in action without any AI client:

npm run demo          # Interactive demo — store, search, dedup
npm run benchmark     # Performance benchmark — write, read, FTS ops/sec

IDE-Specific Setup Guides

Step-by-step instructions for your IDE: Cursor · Claude Code · Windsurf · Cline · Copilot

Recipes & Use Cases

10 practical examples: docs/recipes.md — conventions, bug tracking, anti-hallucination, team onboarding, and more.

Features

Why Cortex?

| Feature                 | Cortex                                                   | Other MCP memory servers |
| ----------------------- | :------------------------------------------------------: | :----------------------: |
| Retrieval               | Hybrid (FTS + Vector + Graph)                            | Usually FTS only         |
| Auto-learning           | Extracts memories from AI responses                      | Manual store only        |
| Hallucination detection | verify_code + verify_files                               | No                       |
| Git integration         | Auto-captures commits, diffs, branch context             | No                       |
| Cognitive features      | Decay, attention, contradiction detection, consolidation | No                       |
| Brain pipeline          | 12+ layer context injection                              | Simple key-value recall  |
| Setup                   | npm i -g cortex-mcp + 1 config line                      | Varies                   |
| Offline                 | 100% local SQLite (no API key needed)                    | Often requires API       |

20 MCP Tools

| Tool            | Purpose                                                                            |
| --------------- | ---------------------------------------------------------------------------------- |
| force_recall    | Full brain dump at conversation start (12+ layer pipeline)                         |
| recall_memory   | Search memories by topic (FTS + vector + graph)                                    |
| store_memory    | Store a decision, correction, convention, or bug fix                               |
| quick_store     | One-liner memory storage with auto-classification                                  |
| auto_learn      | Extract memories from AI responses automatically                                   |
| scan_project    | Scan project structure, stack, git, exports, architecture                          |
| verify_code     | Check if imports/exports/env vars actually exist                                   |
| verify_files    | Check if file paths are real or hallucinated                                       |
| get_context     | Get compressed context for current file                                            |
| review_code     | Review code against stored conventions and past bug patterns                       |
| pre_check       | Pre-flight check before editing — get all conventions and past bugs for a file     |
| check_impact    | Impact analysis — check which files depend on the file you plan to modify          |
| resume_work     | Resume after a break — get last session summary, recent corrections, pending tasks |
| get_stats       | Memory database statistics                                                         |
| list_memories   | List all active memories                                                           |
| update_memory   | Update an existing memory                                                          |
| delete_memory   | Delete a memory                                                                    |
| export_memories | Export all memories to JSON                                                        |
| import_memories | Import memories from JSON                                                          |
| health_check    | Server health check                                                                |
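Like any MCP server, Cortex is driven over stdio with JSON-RPC 2.0 `tools/call` requests. A minimal sketch of what a client sends to invoke store_memory (the argument names here are illustrative, not Cortex's documented schema):

```typescript
// One newline-delimited JSON-RPC 2.0 message, as an MCP client would send
// over the server's stdin. "tools/call" is the standard MCP method; the
// "arguments" shape below is a guess for illustration.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "store_memory",
    arguments: {
      type: "CONVENTION",
      content: "All API routes start with /api/v1",
    },
  },
};

// Serialize for the wire: one JSON object per line over stdio.
const wire = JSON.stringify(request) + "\n";
```

In practice your IDE's MCP client builds these messages for you; this only shows what crosses the stdio boundary.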

12+ Layer Brain Pipeline

Every conversation starts with force_recall, which runs 12+ layers (layers 6-12 run in parallel for speed):

| Layer | Feature                                                        | Parallel? |
| :---: | -------------------------------------------------------------- | :-------: |
|   0   | Session management — track what you're working on              |     —     |
|   1   | Maintenance — decay, boost corrections, consolidate            |     —     |
|   2   | Attention focus — detect debugging vs coding vs reviewing      |     —     |
|   3   | Session continuity — resume where you left off                 |     —     |
|   4   | Hot corrections — frequently-corrected topics                  |     —     |
|   5   | Core context — corrections, decisions, conventions, bug fixes  |     —     |
|   6   | Anticipation — proactive recall based on current file          |     ✅     |
|   7   | Temporal — what changed today, yesterday, this week            |     ✅     |
|   8   | Workspace git — branch, recent commits, diffs                  |     ✅     |
|  8.5  | Git memory — auto-capture commits + file changes               |     ✅     |
|   9   | Topic search — FTS + decay + causal chain traversal            |     ✅     |
|  10   | Knowledge gaps — flag files with zero memories                 |     ✅     |
|  11   | Export map — all functions/classes (anti-hallucination)        |     ✅     |
|  12   | Architecture graph — layers, circular deps, API endpoints      |     ✅     |
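The sequential/parallel split above can be sketched as ordinary promise orchestration: the order-dependent layers run one at a time, then the independent layers fan out together. The layer stubs below are illustrative, not Cortex's internals:

```typescript
// Sketch of the force_recall pipeline shape: layers 0-5 sequential,
// layers 6-12 (including 8.5) concurrent via Promise.all.
type LayerResult = { layer: number; context: string };

const sequentialLayers = [0, 1, 2, 3, 4, 5];
const parallelLayers = [6, 7, 8, 8.5, 9, 10, 11, 12];

async function runLayer(layer: number): Promise<LayerResult> {
  // Stub: a real layer would query SQLite, git, or the export map.
  return { layer, context: `layer ${layer} output` };
}

async function forceRecall(): Promise<LayerResult[]> {
  const results: LayerResult[] = [];
  for (const l of sequentialLayers) {
    // Each of these may depend on the previous layer's side effects,
    // so they must run in order.
    results.push(await runLayer(l));
  }
  // The remaining layers are independent reads, so run them concurrently.
  const parallel = await Promise.all(parallelLayers.map(runLayer));
  return results.concat(parallel);
}
```

The win is latency: the slow, I/O-bound layers overlap instead of summing.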

Cognitive Features

  • Confidence decay — old unused memories fade, frequently accessed ones strengthen
  • Attention ranking — debugging context boosts bug-fix memories, coding boosts conventions
  • Contradiction detection — new memories automatically supersede conflicting old ones
  • Memory consolidation — similar memories merge into higher-level insights
  • Cross-session threading — related sessions link by topic overlap
  • Learning rate — topics corrected 3+ times get CRITICAL priority
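One plausible shape for the confidence-decay behavior in the first bullet is exponential decay with an access-count boost. The half-life, boost factor, and function signature below are illustrative assumptions, not Cortex's actual tuning:

```typescript
// Sketch of confidence decay: unused memories fade with a 30-day
// half-life (assumed constant), while repeated access strengthens them.
const HALF_LIFE_DAYS = 30;

function decayedConfidence(
  baseConfidence: number, // 0..1 at storage time
  daysSinceAccess: number,
  accessCount: number,
): number {
  const decay = Math.pow(0.5, daysSinceAccess / HALF_LIFE_DAYS);
  const boost = 1 + Math.log1p(accessCount) * 0.1; // frequent access counteracts decay
  return Math.min(1, baseConfidence * decay * boost);
}
```

So a memory untouched for a month scores about half its original confidence, while one recalled often stays near full strength.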

Performance & Reliability

  • force_recall latency: ~3-5s first call, instant on cache hit (v1.2 — was 15-19s)
  • recall_memory latency: <1ms average (local SQLite + WAL)
  • Throughput: 1800+ ops/sec (verified via Soak Test)
  • Stability: Zero memory leaks over sustained operation
  • Protocol: Full JSON-RPC 2.0 compliance (passed E2E suite)
  • CI: Tested on Node 18, 20, 22 with lint + build + E2E on every push

Anti-Hallucination

  • Deep export map scans every source file for all exported functions, classes, types
  • When AI references a function that doesn't exist, verify_code suggests the closest real match
  • Architecture graph shows actual project layers and dependencies

Optional: LLM Enhancement

Cortex works fully without any API key. Optionally, add an LLM key for smarter memory extraction:

# (Commands shown with Windows `set`; use `export` on macOS/Linux.)

# Option 1: OpenAI
set OPENAI_API_KEY=sk-your-key

# Option 2: Anthropic
set ANTHROPIC_API_KEY=sk-ant-your-key

# Option 3: Local (Ollama, free)
set CORTEX_LLM_KEY=ollama
set CORTEX_LLM_BASE_URL=http://localhost:11434

Memory Types

| Type              | Use For                                             |
| ----------------- | --------------------------------------------------- |
| CORRECTION        | "Don't use var, use const"                          |
| DECISION          | "We chose PostgreSQL over MongoDB"                  |
| CONVENTION        | "All API routes start with /api/v1"                 |
| BUG_FIX           | "Fixed race condition in auth middleware"           |
| INSIGHT           | "The codebase uses a layered architecture"          |
| FAILED_SUGGESTION | "Tried Redis caching, too complex for this project" |
| PROVEN_PATTERN    | "useDebounce hook works well for search inputs"     |
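In TypeScript terms, the seven types form a natural string union. The record shape below (timestamp and confidence fields) is an illustrative guess at what a stored memory might carry, not Cortex's actual schema:

```typescript
// The seven memory types as a union, plus a hypothetical stored-record shape.
type MemoryType =
  | "CORRECTION"
  | "DECISION"
  | "CONVENTION"
  | "BUG_FIX"
  | "INSIGHT"
  | "FAILED_SUGGESTION"
  | "PROVEN_PATTERN";

interface Memory {
  type: MemoryType;
  content: string;
  createdAt: string;  // ISO timestamp (assumed field)
  confidence: number; // 0..1, subject to decay (assumed field)
}

const example: Memory = {
  type: "DECISION",
  content: "We chose PostgreSQL over MongoDB",
  createdAt: new Date().toISOString(),
  confidence: 1,
};
```

A closed union like this is what lets importance scoring rank by type (CORRECTION above INSIGHT) without string-matching surprises.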

Architecture

src/
├── server/          # MCP handler (12+ layer brain pipeline) + dashboard
├── memory/          # 17 cognitive modules (decay, attention, anticipation, etc.)
├── scanners/        # Project scanner, code verifier, export map, architecture graph
├── db/              # SQLite storage, event log
├── security/        # Rate limiter, encryption
├── hooks/           # Git capture
├── config/          # Configuration
└── cli/             # Setup wizard

Pricing & Features

Cortex is open core. The basic version is free forever. To unlock deep cognitive features, upgrade to PRO.

| Feature                         | FREE (npm install) | PRO ($9/mo) |
| ------------------------------- | :----------------: | :---------: |
| Memory Capacity                 |    ∞ Unlimited     | ∞ Unlimited |
| Brain Layers                    |     12+ (Full)     | 12+ (Full)  |
| All 20 Tools                    |        Yes         |     Yes     |
| Auto-Learn                      |        Yes         |     Yes     |
| Export Map (Anti-Hallucination) |        Yes         |     Yes     |
| Architecture Graph              |        Yes         |     Yes     |
| Git Memory                      |        Yes         |     Yes     |
| Confidence Decay                |        Yes         |     Yes     |
| Contradiction Detection         |        Yes         |     Yes     |
| Priority Support                |         —          |     Yes     |

Launch Edition: All features are currently free. Get started now and lock in lifetime access.

Activate PRO

  1. Get a license key from your Cortex AI Dashboard

  2. Set it in your environment:

    # Option A: Environment variable
    export CORTEX_LICENSE_KEY=CORTEX-XXXX-XXXX-XXXX-XXXX
    
    # Option B: License file
    echo CORTEX-XXXX-XXXX-XXXX-XXXX > ~/.cortex/license
  3. Restart your IDE / MCP server.

Architecture Diagram

graph TB
    AI["AI Client (Cursor, Claude, etc.)"]
    MCP["MCP Server (stdio)"]
    ML["Memory Layer"]
    DB["SQLite Database"]
    SEC["Security Layer"]
    API["License API"]

    AI -->|"JSON-RPC via stdio"| MCP
    MCP --> ML
    ML --> DB
    MCP --> SEC
    SEC -->|"verify license"| API

    subgraph "Memory Layer"
        ML --> EMB["Embedding Manager"]
        ML --> AL["Auto-Learner"]
        ML --> QG["Quality Gates"]
        ML --> TD["Temporal Decay"]
    end

    subgraph "Retrieval"
        ML --> HR["Hybrid Retriever"]
        HR --> VS["Vector Search"]
        HR --> FTS["Full-Text Search"]
        HR --> RR["Recency Ranker"]
    end

FAQ / Troubleshooting

Server won't start

  1. Make sure Node.js >=18 is installed: node --version
  2. Try running manually: npx cortex-mcp — check stderr for errors
  3. Check if another Cortex instance is running on the same workspace
  4. Delete the cache and restart: rm -rf .ai/brain-data

AI isn't using memories

  1. Confirm your AI client's system prompt includes the Cortex rules (run cortex-setup to install them)
  2. Check the dashboard at http://localhost:3456 to see stored memories
  3. Verify memories exist: the AI should call force_recall at conversation start

Install fails on better-sqlite3

better-sqlite3 requires a C++ compiler:

  • Windows: Install Visual Studio Build Tools
  • macOS: Run xcode-select --install
  • Linux: Run sudo apt-get install build-essential python3

License key isn't working

  1. Verify the key format: CORTEX-XXXX-XXXX-XXXX-XXXX
  2. Check your internet connection (license is verified online on first use)
  3. Clear the cache: rm ~/.cortex/license-cache.json
  4. Try setting it via environment variable: export CORTEX_LICENSE_KEY=CORTEX-...

Dashboard isn't loading

  1. Default port is 3456. Check if something else is using it: lsof -i :3456
  2. Set a custom port: export CORTEX_PORT=4000
  3. The dashboard URL is shown in your AI client's stderr on startup

Contributing

See CONTRIBUTING.md for development setup, coding conventions, and the PR process.

License

MIT