code-brain-mcp
v2.0.1
Code Brain MCP — project memory, cache, task classification, and coordinated reasoning tools for Cursor and other MCP clients
Code Brain MCP
Code Brain MCP is an MCP (Model Context Protocol) server that gives AI assistants a single place for:
- Project memory – decisions, patterns, and risks remembered across sessions.
- Task classification – “how deep do we need to think?” for each request.
- Structured reasoning – decomposed, explainable thinking with branches.
- Planning & code intent – deterministic plans and change-intent for code edits.
- Docs lookup – optional Context7-powered documentation fetch.
Instead of wiring multiple separate MCP servers (memory, reasoning, docs, etc.), you run one server and get a coherent “brain” for your project.
If you want something added or improved in this project, you can email me at [email protected].
Core ideas
Project-scoped memory
Stores and retrieves facts about your project (architecture decisions, invariants, bugs, risks, patterns) under a `.brain/` directory in your repo.
Mode-aware thinking
Classifies each request as:
- `fast` – small fix or quick answer.
- `chunk` – multi-step but bounded work (e.g. a feature in one area).
- `deep` – architectural or tradeoff-heavy work.
The mode decides how many phases to run (intake, docs, planning, reasoning, validation, store).
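As a rough sketch, the mode-to-phases mapping might look like the following. The phase names come from this README; the exact mapping per mode is an assumption for illustration, not the package's actual routing table:

```typescript
// Hypothetical sketch: how a thinking mode might select a phase sequence.
type Mode = "fast" | "chunk" | "deep";
type Phase = "intake" | "docs" | "planning" | "reasoning" | "validation" | "store";

function phasesForMode(mode: Mode): Phase[] {
  switch (mode) {
    case "fast":
      // Quick answers: minimal ceremony.
      return ["intake", "reasoning", "store"];
    case "chunk":
      // Bounded multi-step work: add planning and validation.
      return ["intake", "planning", "reasoning", "validation", "store"];
    case "deep":
      // Architectural work: run everything, including docs lookup.
      return ["intake", "docs", "planning", "reasoning", "validation", "store"];
  }
}
```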
Deterministic pipeline
A LangGraph-based pipeline (`run_deep_pipeline`) wires together:
- Intake – clean intent, attach files, recall memory.
- Planner – build a stepwise plan.
- Reasoning – code reasoning + change intent.
- Validate – uncertainty guard to decide continue / re-explore / abort.
- Store – write a summarized decision back to memory.
Skills
Preset “modes” like `debug-crash`, `add-feature`, `refactor`, `write-tests`, etc. These bias intake, routing, and memory queries for that kind of task.
What this MCP actually does
1. Project detection & storage layout
When started in a directory, Code Brain MCP:
- Treats the current working directory (or `CODE_BRAIN_PROJECT_ROOT`, if set) as the project root.
- Creates a `.brain/` directory there, which can contain:
  - `memory.db` – SQLite database (preferred).
  - `memory.md` – markdown fallback store.
  - `audit.log` – append-only log of key tool calls (e.g. memory mutations, pipeline runs).
Memory is always scoped per project so multiple repos get independent brains.
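The root-resolution rule and the storage layout described above can be sketched as follows; the resolution logic and file names are from this README, while the helper names are hypothetical:

```typescript
import * as path from "node:path";

// Hypothetical sketch: CODE_BRAIN_PROJECT_ROOT wins if set,
// otherwise the current working directory is the project root.
function resolveProjectRoot(env: Record<string, string | undefined>, cwd: string): string {
  return env.CODE_BRAIN_PROJECT_ROOT ?? cwd;
}

// Per-project storage paths under .brain/ (file names from the README).
function brainPaths(projectRoot: string) {
  const brainDir = path.join(projectRoot, ".brain");
  return {
    brainDir,
    db: path.join(brainDir, "memory.db"),       // SQLite store (preferred)
    fallback: path.join(brainDir, "memory.md"), // markdown fallback
    audit: path.join(brainDir, "audit.log"),    // append-only audit log
  };
}
```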
2. Memory model (v2)
Memory tools:
`memory_retrieve`, `memory_store`, `memory_update`, `memory_delete`, `memory_list`, `memory_history_list`
Key behavior:
Remembers structured values (JSON-like objects) keyed by a namespace and string key.
Every stored item comes back with:
- `score` – relevance to the query (with synonym-aware matching).
- `stored_at` – ISO timestamp.
- `isStale` – whether it looks old.
- `taskType` – optional tag like `feature`, `bug_fix`, `refactor`, etc.
- `files` – related file paths.
Updates are additive:
- `memory_update` applies a shallow patch to the stored value and pushes the previous value into history.
- `memory_delete` defaults to soft delete (can optionally hard delete).
History:
- Both DB and file backends keep up to 5 history entries per key.
- `memory_history_list` returns the most recent versions with timestamps.
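The update-plus-history semantics above can be sketched like this. The shallow patch and the 5-entry history cap are from this README; the data shapes and function name are assumptions:

```typescript
// Hypothetical sketch of memory_update semantics: shallow-merge a patch over
// the stored value and push the previous value into a capped history.
type Stored = { value: Record<string, unknown>; history: Record<string, unknown>[] };

const HISTORY_LIMIT = 5; // both backends keep up to 5 entries per key

function shallowUpdate(entry: Stored, patch: Record<string, unknown>): Stored {
  // Previous value becomes the newest history entry; oldest entries fall off.
  const history = [entry.value, ...entry.history].slice(0, HISTORY_LIMIT);
  return { value: { ...entry.value, ...patch }, history };
}
```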
Safety:
- Obvious secrets and large code blobs are sanitized/summarized before storage (e.g. private keys → `[redacted]`, huge code → “Large snippet omitted” with a preview).
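A minimal sketch of that sanitization step, assuming a PEM-style key pattern and an arbitrary size threshold (the actual patterns and limits in the package may differ):

```typescript
// Hypothetical pre-storage sanitizer: redact private keys, summarize huge blobs.
const MAX_SNIPPET = 2000; // assumed threshold, not the package's real value

function sanitizeForStorage(text: string): string {
  // Redact PEM-style private key blocks.
  const redacted = text.replace(
    /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g,
    "[redacted]"
  );
  // Summarize very large code blobs, keeping only a short preview.
  if (redacted.length > MAX_SNIPPET) {
    return `Large snippet omitted (preview): ${redacted.slice(0, 200)}...`;
  }
  return redacted;
}
```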
3. Intake: neural_sync
`neural_sync` is the “front door” for real work. It:
- Sanitizes the user message (removes common injection markers and zero-width characters).
- Detects the task type (feature, bug fix, refactor, explain, test, API integration).
- Classifies the thinking mode (`fast` / `chunk` / `deep`) and computes the phase sequence.
- Retrieves memory multiple times:
  - Base query.
  - “recent changes” variant.
  - “bug history” variant.
- Scans project files:
  - Walks text files under the project root.
  - Excludes obvious build and cache directories (e.g. `node_modules`, `dist`, `.git`, `.brain`, `.cursor`, `coverage`, `.next`, `build`, `out`, `__pycache__`, etc.).
  - Scores files by keyword overlap, imports, recency, and whether they are tests.
  - Attaches a few high-scoring snippets and their companion tests.
- Detects libraries:
  - From imports in attached files.
  - From `package.json` dependencies mentioned in the request.
- Fetches docs (optional, via Context7 – see below).
- Computes ambiguity and clarifying questions (e.g. “Which file should this target first?”).
- Builds a routing plan – recommended next tools and phases.
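The file-scoring step above might be sketched like this. The scoring ingredients (keyword overlap, recency, test status) are from this README; the weights and the `ScannedFile` shape are assumptions:

```typescript
// Hypothetical scoring sketch for the project-file scan: keyword overlap,
// recency, and test status each contribute to a file's attachment score.
interface ScannedFile {
  path: string;
  keywords: string[];      // identifiers/terms found in the file
  daysSinceChange: number; // derived from mtime or git history
  isTest: boolean;
}

function scoreFile(file: ScannedFile, requestKeywords: string[]): number {
  const overlap = file.keywords.filter((k) => requestKeywords.includes(k)).length;
  const recency = file.daysSinceChange < 7 ? 1 : 0; // recently touched files rank higher
  const testBoost = file.isTest ? 0.5 : 0;          // companion tests are worth attaching
  return overlap * 2 + recency + testBoost;         // weights are illustrative only
}
```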
It returns a rich object (`NeuralSyncOutput`) with:
- `cleanIntent`, `taskType`, `mode`, `phases`
- `attachedFiles`
- `regressionContext` (existing implementation summary, recent changes, regression risks)
- `libraryDocs` (sanitized doc text snippets)
- `memoryContext`
- `routingPlan`, `needsClarification`, `clarifyingQuestions`
- `detectedLibraries`, `docsSkipped`
- `projectRoot`, `namespace`, `sessionId`
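For orientation, here is one plausible TypeScript shape for that result. The field names are taken from the list above; the field types are assumptions and may not match the package's actual declarations:

```typescript
// Hypothetical shape of the intake result (field names from the README,
// field types assumed for illustration).
interface NeuralSyncOutput {
  cleanIntent: string;
  taskType: string;
  mode: "fast" | "chunk" | "deep";
  phases: string[];
  attachedFiles: { path: string; snippet: string }[];
  regressionContext: { summary: string; recentChanges: string[]; risks: string[] };
  libraryDocs: string[];
  memoryContext: unknown[];
  routingPlan: string[];
  needsClarification: boolean;
  clarifyingQuestions: string[];
  detectedLibraries: string[];
  docsSkipped: boolean;
  projectRoot: string;
  namespace: string;
  sessionId: string;
}
```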
4. Planner & code intent
Two main tools:
- `agent_plan` – builds a deterministic plan from `NeuralSyncOutput`.
- `agent_code` – turns the plan + attached files + docs into a structured “intent to change code”.
They produce:
- A sequence of plan steps (locate, understand, design, implement, verify, document).
- An ordered list of file-level changes:
  - Which file.
  - Where to target (approximate line).
  - A before/after description (not a raw diff).
  - An explanation of why.
- Suggested tests to run and verification steps.
- A rollback plan tied to memory history and git reverts.
These tools don’t edit files; they output a machine-readable specification that a higher-level agent (or a human) can apply.
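One file-level entry in that specification might look like the following sketch. The four ingredients (file, target, before/after description, why) are from the list above; the field names are hypothetical:

```typescript
// Hypothetical shape for one file-level change in the emitted specification.
interface FileChangeIntent {
  file: string;
  approximateLine: number;
  before: string; // description of current behavior, not a raw diff
  after: string;  // description of intended behavior
  why: string;
}

// A higher-level agent (or a human) would apply intents like this in order;
// here we only render one for display.
function describeChange(intent: FileChangeIntent): string {
  return `${intent.file}:${intent.approximateLine} – ${intent.before} -> ${intent.after} (${intent.why})`;
}
```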
5. Structured thinking & branching
For reasoning, Code Brain MCP exposes:
- `thinking_decompose` – five-stage decomposition: `problem_definition`, `constraints`, `model`, `proof`, `implementation`.
- `capture_thought` / `get_thinking_summary` / `clear_thinking_history` – general-purpose thought capture with stages, scores, and summaries.
- `steps_append` / `steps_summary` / `steps_clear` – linear step-by-step thinking, with revisions and branches.
- Branch tools: `branch_create`, `branch_switch`, `branch_think`, `branch_merge`, `branch_close`, `branch_tag`, `branch_export`.
Everything is keyed by the project root and an optional `sessionId`, so you can keep parallel branches of reasoning for different tasks in the same repo.
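That keying scheme can be sketched in one line; the separator and default values here are assumptions, only the (project root, sessionId, branch) keying idea is from the README:

```typescript
// Hypothetical sketch: reasoning state keyed by project root plus an optional
// sessionId and branch, so parallel sessions and branches never collide.
function reasoningKey(projectRoot: string, sessionId?: string, branch = "main"): string {
  return [projectRoot, sessionId ?? "default", branch].join("::");
}
```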
6. Uncertainty guard
`uncertainty_guard` is a small but important piece:
Input: conclusion text, confidence, current uncertainty, exploration loops, exploration summary, last step.
Output:
- `verdict` and `action`: `continue`, `re_explore`, or `abort`.
- Updated `currentUncertainty`, `confidence`, `explorationLoops`, thresholds, and optional summaries.
Typical usage:
- After reasoning, call `uncertainty_guard`.
- If it says `re_explore`, loop planner/reasoning again (up to a maximum number of loops).
- If it says `abort`, don’t store the result in memory.
- If it says `continue`, proceed to `memory_store`.
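That usage pattern is just a bounded loop. The sketch below stubs the guard with a plain function to illustrate the continue / re_explore / abort control flow; the real tool is called over MCP, and the stub's shape is an assumption:

```typescript
// Sketch of the guard-driven control loop, with a stubbed guard.
type GuardAction = "continue" | "re_explore" | "abort";

interface GuardResult { action: GuardAction; confidence: number }

function runWithGuard(
  guard: (loops: number) => GuardResult,
  maxLoops: number
): "stored" | "aborted" {
  for (let loops = 0; loops <= maxLoops; loops++) {
    const { action } = guard(loops);
    if (action === "continue") return "stored";  // proceed to memory_store
    if (action === "abort") return "aborted";    // don't store the result
    // re_explore: run planner/reasoning again and ask the guard once more
  }
  return "aborted"; // loop budget exhausted without a confident conclusion
}
```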
Tools overview (high level)
Some of the most important tools:
Project & health
- `get_project_id` – find the project root, project name, and `.brain` path.
- `get_health` – check that the project root and memory store are ready; optionally check docs reachability.
Intake & orchestration
- `neural_sync` – v2 intake pipeline (see above).
- `start_task` – session management; optionally runs intake and returns `syncContext`, mode, phases, and routing hints.
- `run_deep_pipeline` – in-process deep pipeline over LangGraph (intake → planner → reasoning → validate → store), with timeout and partial state.
Memory
`memory_retrieve`, `memory_store`, `memory_update`, `memory_delete`, `memory_list`, `memory_history_list`.
Reasoning & planning
`thinking_decompose`, `steps_*`, `capture_thought`, `get_thinking_summary`, `branch_*`, `code_steps`, `uncertainty_guard`, `agent_plan`, `agent_code`.
Skills & docs
- `skill_list`, `skill_load` – discover and load skills like `debug-crash`, `add-feature`, `refactor`, `write-tests`, etc.
- `docs_resolve_id`, `docs_query` – Context7-backed library docs lookup.
Installation & running (generic MCP host)
You can either install globally or run via npx.
1. Install
Using npx (no global install):

```shell
npx -y code-brain-mcp
```

Or install globally:

```shell
npm install -g code-brain-mcp
code-brain-mcp
```

The server will start over stdio and wait for MCP client requests.
2. MCP server config (generic)
In your MCP host’s config (the exact file depends on the host), register the server with something like:
```json
{
  "mcpServers": {
    "code-brain": {
      "command": "npx",
      "args": ["-y", "code-brain-mcp"]
    }
  }
}
```

The host is responsible for:
- Starting the process with the right working directory (your project root).
- Speaking MCP over stdio.
Context7 integration (docs lookup)
If you want Code Brain MCP to fetch real library docs, you need a Context7 API key.
1. Get an API key from Context7 (e.g. from their dashboard).
2. Set `CONTEXT7_API_KEY` in the environment where Code Brain MCP runs, for example:

```shell
export CONTEXT7_API_KEY="your-context7-api-key"
npx -y code-brain-mcp
```

When this is set:
- `docs_resolve_id` and `docs_query` will call the Context7 API.
- `neural_sync` will auto-detect libraries and prefetch docs into `libraryDocs`.
- All fetched text is passed through the same sanitizer used for user input before being returned.
If `CONTEXT7_API_KEY` is not set:
- Docs tools return an error with a setup hint.
- `get_health` reports docs as “not configured”, but project + memory can still be OK.
- `neural_sync` will skip the docs fetch and continue using local context only.
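The gating above amounts to a simple environment check. This sketch names two hypothetical helpers to make the two behaviors (health report vs. skipping the fetch) concrete:

```typescript
// Hypothetical sketch of docs-availability gating: the Context7-backed tools
// only work when CONTEXT7_API_KEY is present in the environment.
function docsStatus(env: Record<string, string | undefined>): "ok" | "not configured" {
  return env.CONTEXT7_API_KEY ? "ok" : "not configured";
}

// Intake continues either way; without a key it just skips the docs fetch.
function shouldFetchDocs(env: Record<string, string | undefined>): boolean {
  return docsStatus(env) === "ok";
}
```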
Development
From the project root:
```shell
# Build TypeScript to dist/
npm run build

# Run tests (builds the MCP server for tests, then runs integration + unit tests)
npm test
```

Contact
If you want something added, changed, or debugged in this MCP, you can email [email protected].