
code-brain-mcp

v2.0.1

Published

Code Brain MCP — project memory, cache, task classification, and coordinated reasoning tools for Cursor and other MCP clients


Code Brain MCP

Code Brain MCP is an MCP (Model Context Protocol) server that gives AI assistants a single place for:

  • Project memory – decisions, patterns, and risks remembered across sessions.
  • Task classification – “how deep do we need to think?” for each request.
  • Structured reasoning – decomposed, explainable thinking with branches.
  • Planning & code intent – deterministic plans and change-intent for code edits.
  • Docs lookup – optional Context7-powered documentation fetch.

Instead of wiring multiple separate MCP servers (memory, reasoning, docs, etc.), you run one server and get a coherent “brain” for your project.

If you want something added or improved in this project, you can email me at [email protected].


Core ideas

  • Project-scoped memory
    Stores and retrieves facts about your project (architecture decisions, invariants, bugs, risks, patterns) under a .brain/ directory in your repo.

  • Mode-aware thinking
    Classifies each request as:

    • fast – small fix or quick answer.
    • chunk – multi-step but bounded work (e.g. a feature in one area).
    • deep – architectural or tradeoff-heavy work.

    The mode decides how many phases to run (intake, docs, planning, reasoning, validation, store).

  • Deterministic pipeline
    A LangGraph-based pipeline (run_deep_pipeline) wires:

    1. Intake – clean intent, attach files, recall memory.
    2. Planner – build a stepwise plan.
    3. Reasoning – code reasoning + change intent.
    4. Validate – uncertainty guard to decide continue / re-explore / abort.
    5. Store – write a summarized decision back to memory.
  • Skills
    Preset “modes” like debug-crash, add-feature, refactor, write-tests, etc. These bias intake, routing, and memory queries for that kind of task.
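
To make the mode split concrete, here is a minimal TypeScript sketch of how a keyword heuristic could map a request to a mode and its phase sequence. The names (classifyMode, phasesFor, the hint lists) and the keywords are illustrative assumptions, not the server's actual internals:

```typescript
// Hypothetical sketch of mode classification; the real heuristics
// inside code-brain-mcp are not documented here.
type ThinkingMode = "fast" | "chunk" | "deep";

const DEEP_HINTS = ["architecture", "tradeoff", "redesign", "migrate"];
const CHUNK_HINTS = ["feature", "refactor", "integrate", "multi-step"];

function classifyMode(request: string): ThinkingMode {
  const text = request.toLowerCase();
  if (DEEP_HINTS.some((h) => text.includes(h))) return "deep";
  if (CHUNK_HINTS.some((h) => text.includes(h))) return "chunk";
  return "fast";
}

// The mode then selects which phases run (per the list above).
function phasesFor(mode: ThinkingMode): string[] {
  switch (mode) {
    case "fast":
      return ["intake", "store"];
    case "chunk":
      return ["intake", "planning", "reasoning", "store"];
    case "deep":
      return ["intake", "docs", "planning", "reasoning", "validation", "store"];
  }
}
```

Under this sketch, "fix a typo" would run only intake and store, while an architectural request runs the full six-phase pipeline.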


What this MCP actually does

1. Project detection & storage layout

When started in a directory, Code Brain MCP:

  • Treats the current working directory (or CODE_BRAIN_PROJECT_ROOT if set) as the project root.

  • Creates a .brain/ directory there, which can contain:

    • memory.db – SQLite database (preferred).
    • memory.md – markdown fallback store.
    • audit.log – append-only log of key tool calls (e.g. memory mutations, pipeline runs).

Memory is always scoped per project, so multiple repos get independent brains.
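
The root and path resolution described above can be sketched as follows (resolveBrainPaths is a hypothetical helper name, not the server's API):

```typescript
import * as path from "node:path";

// Illustrative sketch: resolve the project root and .brain layout the
// way the README describes (CODE_BRAIN_PROJECT_ROOT wins, else the cwd).
function resolveBrainPaths(env: Record<string, string | undefined>, cwd: string) {
  const projectRoot = env.CODE_BRAIN_PROJECT_ROOT ?? cwd;
  const brainDir = path.join(projectRoot, ".brain");
  return {
    projectRoot,
    brainDir,
    memoryDb: path.join(brainDir, "memory.db"),   // SQLite (preferred)
    memoryMd: path.join(brainDir, "memory.md"),   // markdown fallback
    auditLog: path.join(brainDir, "audit.log"),   // append-only audit log
  };
}
```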

2. Memory model (v2)

Memory tools:

  • memory_retrieve
  • memory_store
  • memory_update
  • memory_delete
  • memory_list
  • memory_history_list

Key behavior:

  • Remembers structured values (JSON-like objects) keyed by a namespace and string key.

  • Every stored item comes back with:

    • score – relevance to the query (with synonym-aware matching).
    • stored_at – ISO timestamp.
    • isStale – whether it looks old.
    • taskType – optional tag like feature, bug_fix, refactor, etc.
    • files – related file paths.
  • Updates are additive:

    • memory_update applies a shallow patch to the stored value and pushes the previous value into history.
    • memory_delete defaults to soft delete (can optionally hard delete).
  • History:

    • Both DB and file backends keep up to 5 history entries per key.
    • memory_history_list returns the most recent versions with timestamps.
  • Safety:

    • Obvious secrets and large code blobs are sanitized/summarized before storage (e.g. private keys → [redacted], huge code → “Large snippet omitted” with preview).
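
The additive update and history behavior can be sketched like this; MemoryEntry and applyUpdate are hypothetical names used only to illustrate the shallow-patch and five-entry-history semantics described above:

```typescript
// Sketch, not the server's internal code: a shallow patch is merged
// into the stored value and the previous value is pushed into history,
// capped at 5 entries per key.
type StoredValue = Record<string, unknown>;

interface MemoryEntry {
  value: StoredValue;
  history: StoredValue[]; // most recent first, max 5
}

const HISTORY_LIMIT = 5;

function applyUpdate(entry: MemoryEntry, patch: StoredValue): MemoryEntry {
  // Previous value goes to the front of history, oldest entries fall off.
  const history = [entry.value, ...entry.history].slice(0, HISTORY_LIMIT);
  // Shallow merge: top-level keys in the patch replace existing ones.
  return { value: { ...entry.value, ...patch }, history };
}
```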

3. Intake: neural_sync

neural_sync is the “front door” for real work. It:

  1. Sanitizes the user message (removes common injection markers & zero-width chars).

  2. Detects task type (feature, bug fix, refactor, explain, test, API integration).

  3. Classifies thinking mode (fast / chunk / deep) and computes phase sequence.

  4. Retrieves memory multiple times:

    • Base query.
    • “recent changes” variant.
    • “bug history” variant.
  5. Scans project files:

    • Walks text files under the project root.
    • Excludes obvious build and cache directories (e.g. node_modules, dist, .git, .brain, .cursor, coverage, .next, build, out, __pycache__, etc.).
    • Scores files by keyword overlap, imports, recency, and whether they are tests.
    • Attaches a few high-scoring snippets and their companion tests.
  6. Detects libraries:

    • From imports in attached files.
    • From package.json dependencies mentioned in the request.
  7. Fetches docs (optional, via Context7 – see below).

  8. Computes ambiguity and clarifying questions (e.g. “Which file should this target first?”).

  9. Builds a routing plan – recommended next tools & phases.
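
Step 1 (sanitization) can be pictured with a sketch like this; the zero-width character set and injection-marker patterns are illustrative guesses, not the server's actual list:

```typescript
// Hedged sketch of intake sanitization: strip zero-width characters
// and a few common prompt-injection markers.
const ZERO_WIDTH = /[\u200B\u200C\u200D\uFEFF]/g;
const INJECTION_MARKERS = [
  /ignore (all )?previous instructions/gi,
  /<\|im_start\|>/g,
];

function sanitizeMessage(message: string): string {
  let clean = message.replace(ZERO_WIDTH, "");
  for (const marker of INJECTION_MARKERS) {
    clean = clean.replace(marker, "");
  }
  return clean.trim();
}
```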

It returns a rich object (NeuralSyncOutput) with:

  • cleanIntent, taskType, mode, phases
  • attachedFiles
  • regressionContext (existing implementation summary, recent changes, regression risks)
  • libraryDocs (sanitized doc text snippets)
  • memoryContext
  • routingPlan, needsClarification, clarifyingQuestions
  • detectedLibraries, docsSkipped
  • projectRoot, namespace, sessionId
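
Assuming the field list above, a client-side type for the result might look like the following (reconstructed for illustration; the published types may differ):

```typescript
// Assumed shape of NeuralSyncOutput, derived from the field list in
// this README, not from the actual source.
interface NeuralSyncOutput {
  cleanIntent: string;
  taskType: string;
  mode: "fast" | "chunk" | "deep";
  phases: string[];
  attachedFiles: { path: string; snippet: string }[];
  regressionContext?: unknown;
  libraryDocs?: string[];
  memoryContext?: unknown;
  routingPlan: string[];
  needsClarification: boolean;
  clarifyingQuestions: string[];
  detectedLibraries: string[];
  docsSkipped: boolean;
  projectRoot: string;
  namespace: string;
  sessionId: string;
}

// Example value a client might receive:
const example: NeuralSyncOutput = {
  cleanIntent: "Fix the crash in the parser",
  taskType: "bug_fix",
  mode: "chunk",
  phases: ["intake", "planning", "reasoning", "store"],
  attachedFiles: [{ path: "src/parser.ts", snippet: "..." }],
  routingPlan: ["agent_plan", "agent_code"],
  needsClarification: false,
  clarifyingQuestions: [],
  detectedLibraries: [],
  docsSkipped: true,
  projectRoot: "/repo",
  namespace: "repo",
  sessionId: "s-1",
};
```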

4. Planner & code intent

Two main tools:

  • agent_plan – builds a deterministic plan from NeuralSyncOutput.
  • agent_code – turns the plan + attached files + docs into structured “intent to change code”.

They produce:

  • A sequence of plan steps (locate, understand, design, implement, verify, document).

  • An ordered list of file-level changes:

    • Which file.
    • Where to target (approximate line).
    • Before/after description (not raw diff).
    • Explanation of why.
  • Suggested tests to run and verification steps.

  • A rollback plan tied to memory history and git reverts.

These tools don’t edit files; they output a machine-readable specification that a higher-level agent (or a human) can apply.
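
A change specification in that spirit might look like the following; every field name here is an assumption based on the bullet list above, not the published schema:

```typescript
// Hypothetical shape for the machine-readable change specification
// that agent_code emits.
interface FileChange {
  file: string;
  approximateLine: number;
  before: string; // description, not a raw diff
  after: string;
  why: string;
}

interface CodeIntent {
  planSteps: string[]; // locate, understand, design, implement, verify, document
  changes: FileChange[];
  testsToRun: string[];
  rollback: string; // e.g. a memory history key or a git revert hint
}

const intent: CodeIntent = {
  planSteps: ["locate", "understand", "design", "implement", "verify", "document"],
  changes: [
    {
      file: "src/parser.ts",
      approximateLine: 42,
      before: "throws on empty input",
      after: "returns an empty AST for empty input",
      why: "avoid crashing callers that pass empty strings",
    },
  ],
  testsToRun: ["npm test"],
  rollback: "git revert HEAD",
};
```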

5. Structured thinking & branching

For reasoning, Code Brain MCP exposes:

  • thinking_decompose – five-stage decomposition:

    • problem_definition
    • constraints
    • model
    • proof
    • implementation
  • capture_thought / get_thinking_summary / clear_thinking_history – general-purpose thought capture with stages, scores, and summaries.

  • steps_append / steps_summary / steps_clear – linear step-by-step thinking, with revisions and branches.

  • Branch tools:

    • branch_create, branch_switch, branch_think, branch_merge, branch_close, branch_tag, branch_export.

Everything is keyed by project root and optional sessionId, so you can keep parallel branches of reasoning for different tasks in the same repo.

6. Uncertainty guard

uncertainty_guard is a small but important piece:

  • Input: conclusion text, confidence, current uncertainty, exploration loops, exploration summary, last step.

  • Output:

    • verdict and action: continue, re_explore, or abort.
    • Updated currentUncertainty, confidence, explorationLoops, thresholds, and optional summaries.

Typical usage:

  • After reasoning, call uncertainty_guard.
  • If it says re_explore, loop the planner/reasoning stages again (up to a maximum number of loops).
  • If it says abort, don’t store the result in memory.
  • If it says continue, proceed to memory_store.
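
That loop can be sketched in TypeScript; the confidence threshold (0.7) and the three-loop cap are illustrative assumptions, and uncertaintyGuard here is a local stub standing in for the real tool call:

```typescript
type GuardAction = "continue" | "re_explore" | "abort";

interface GuardState {
  confidence: number;
  explorationLoops: number;
}

// Stub decision rule: confident enough -> continue; out of loop
// budget -> abort; otherwise explore again.
function uncertaintyGuard(state: GuardState, maxLoops = 3): GuardAction {
  if (state.confidence >= 0.7) return "continue";
  if (state.explorationLoops >= maxLoops) return "abort";
  return "re_explore";
}

function runWithGuard(
  reason: (loops: number) => number, // returns a new confidence
  maxLoops = 3,
): { action: GuardAction; state: GuardState } {
  const state: GuardState = { confidence: 0, explorationLoops: 0 };
  for (;;) {
    state.confidence = reason(state.explorationLoops);
    const action = uncertaintyGuard(state, maxLoops);
    if (action !== "re_explore") return { action, state };
    state.explorationLoops += 1; // loop planner/reasoning again
  }
}
```

Only a "continue" verdict should be followed by memory_store; an "abort" result is dropped.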

Tools overview (high level)

Some of the most important tools:

  • Project & health

    • get_project_id – find project root, project name, .brain path.
    • get_health – check that project root and memory store are ready; optionally check docs reachability.
  • Intake & orchestration

    • neural_sync – v2 intake pipeline (see above).
    • start_task – session management, optionally runs intake and returns syncContext, mode, phases, routing hints.
    • run_deep_pipeline – in-process deep pipeline over LangGraph (intake → planner → reasoning → validate → store), with timeout and partial state.
  • Memory

    • memory_retrieve, memory_store, memory_update, memory_delete, memory_list, memory_history_list.
  • Reasoning & planning

    • thinking_decompose, steps_*, capture_thought, get_thinking_summary, branch_*, code_steps, uncertainty_guard, agent_plan, agent_code.
  • Skills & docs

    • skill_list, skill_load – discover and load skills like debug-crash, add-feature, refactor, write-tests, etc.
    • docs_resolve_id, docs_query – Context7-backed library docs lookup.

Installation & running (generic MCP host)

You can either install globally or run via npx.

1. Install

Using npx (no global install):

npx -y code-brain-mcp

Or install globally:

npm install -g code-brain-mcp
code-brain-mcp

The server will start over stdio and wait for MCP client requests.

2. MCP server config (generic)

In your MCP host’s config (the exact file depends on the host), register the server with an entry like:

{
  "mcpServers": {
    "code-brain": {
      "command": "npx",
      "args": ["-y", "code-brain-mcp"]
    }
  }
}

The host is responsible for:

  • Starting the process with the right working directory (your project root).
  • Speaking MCP over stdio.

Context7 integration (docs lookup)

If you want Code Brain MCP to fetch real library docs, you need a Context7 API key.

  1. Get an API key from Context7 (e.g. from their dashboard).

  2. Set CONTEXT7_API_KEY in the environment where Code Brain MCP runs, for example:

    export CONTEXT7_API_KEY="your-context7-api-key"
    npx -y code-brain-mcp
  3. When this is set:

    • docs_resolve_id and docs_query will call the Context7 API.
    • neural_sync will auto-detect libraries and prefetch docs into libraryDocs.
    • All fetched text is passed through the same sanitizer used for user input before being returned.

If CONTEXT7_API_KEY is not set:

  • Docs tools return an error with a setup hint.
  • get_health reports docs as “not configured”, but the project and memory checks can still pass.
  • neural_sync will skip docs fetch and continue using local context only.
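
The gating described above amounts to a check like this (docsStatus and its return shape are illustrative names, not the server's actual output):

```typescript
// Sketch of the Context7 gating behavior: with no API key, docs lookup
// is skipped rather than failing the whole intake.
function docsStatus(env: Record<string, string | undefined>): {
  docsSkipped: boolean;
  health: "ok" | "not configured";
} {
  const key = env.CONTEXT7_API_KEY;
  if (!key) return { docsSkipped: true, health: "not configured" };
  return { docsSkipped: false, health: "ok" };
}
```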

Development

From the project root:

# Build TypeScript to dist/
npm run build

# Run tests (builds MCP server for tests, then runs integration + unit tests)
npm test

Contact

If you want something added, changed, or debugged in this MCP, you can email:

[email protected]