codex-mcp-memory-server
v0.4.1
Codex MCP Memory Server
Symbol-aware MCP memory server for Codex and coding agents.
It indexes TypeScript, TSX, JavaScript, JSX, and Python projects with tree-sitter, stores symbol metadata in SQLite, and exposes compact MCP tools for low-token project discovery.
- TS/JS (first-class caller precision): imports, barrel re-exports, selective TypeScript compiler API symbol resolution, simple instance method calls, and TSX/JSX component usage.
- Python: symbol discovery, same-file calls, relative/module import calls, package __init__.py re-exports, self.method() calls, and simple constructor-assigned instance method calls.
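As an illustration of the caller cases above, here is a minimal, self-contained sketch (not taken from the repo) of one TS pattern the indexer resolves: a method called on a constructor-assigned instance.

```typescript
// Illustrative example of a "simple instance method call".
class Indexer {
  // The symbol a find_callers query would target.
  reindex(path: string): string {
    return `indexed ${path}`;
  }
}

// `indexer` is assigned directly from `new Indexer()`, so the call below
// can be attributed to Indexer.reindex without full type inference.
const indexer = new Indexer();
const result = indexer.reindex("src/");
```

Patterns that need whole-program type inference (e.g. instances passed through several functions) fall outside this simple case.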
Why
Agents often spend a lot of tokens finding the right file or function before reading the code that matters. This server makes the first pass cheaper:
- Search compact symbol metadata.
- Pick the relevant symbol by ref, file, and line range.
- Read the full symbol body only when needed.
- Save durable messages and decisions for future agents.
Measured Token Savings

Current benchmark task: find the callTool symbol in this repository.
classic_tokens=4236
mcp_tokens=45
savings=98.9%
smaller_output=94.1x

Token counts are practical estimates based on characters / 4; the important point is the relative size difference during the discovery phase.
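The characters / 4 heuristic and the derived figures can be reproduced in a few lines (these helpers are illustrative, not part of the package):

```typescript
// chars / 4 token heuristic behind the benchmark figures above.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Percentage saved: classic_tokens=4236, mcp_tokens=45 -> "98.9".
function savingsPct(classic: number, mcp: number): string {
  return ((1 - mcp / classic) * 100).toFixed(1);
}

// Output size ratio: 4236 / 45 -> "94.1".
function smallerOutput(classic: number, mcp: number): string {
  return (classic / mcp).toFixed(1);
}
```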
See docs/benchmarks.md for benchmark scope and output files.
The benchmark suite also includes real task-shaped checks such as bug-fix root symbol selection, refactor impact analysis, regression narrowing, PR risk summaries, noisy bug investigation narrowing, AST caller precision, TSX component usage, incremental Git reindexing, language-depth coverage, synthetic 10k-symbol scale smoke, and synthetic monorepo workspace scale smoke.
Quick Start
Recommended setup helper:
npx -y -p codex-mcp-memory-server setup-codex-mcp-memory `
  --project-path "C:\path\to\your\repo" `
  --project-id "your-project-id" `
  --verify

Remove --verify to register the server after the checks pass.
codex mcp add codex-mcp-memory-server `
  --env PROJECT_PATH="C:\path\to\your\repo" `
  --env PROJECT_ID="your-project-id" `
  --env MCP_MEMORY_DB_PATH="C:\Users\you\.mcp-memory-server\memory.db" `
  -- npx -y codex-mcp-memory-server

Minimal form:

codex mcp add codex-mcp-memory-server -- npx -y codex-mcp-memory-server

See docs/quickstart.md for NPX usage, environment variables, and verification.
Tools
Discovery tools return compact results by default.
Core tools:
- index_status
- search_symbols
- lookup_symbol
- get_symbol_body
- find_callers
- reindex_changed_files
- reconcile_index
- changed_symbols_risk
- save_message
- search_history
- save_decision
- get_decisions
See docs/tools.md for the full tool list.
Recommended Agent Flow
- Start with index_status, search_symbols, lookup_symbol, search_history, or get_decisions.
- Use compact output to identify a symbol, file, and line range.
- Call get_symbol_body only for selected symbols.
- Use shell search/read commands for docs, config, CSS, JSON, fixtures, and broad non-symbol searches.
- Save important project decisions with save_decision.
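The flow above can be sketched as a short TypeScript snippet. The tool names come from this README, but the argument and result shapes are assumptions, and callTool here is a local stub standing in for a real MCP client:

```typescript
// Assumed compact result shape for search_symbols (not the server's schema).
type SymbolHit = { ref: string; file: string; startLine: number; endLine: number };

// Stub transport so the sketch is self-contained; a real agent would route
// this through its MCP client instead.
async function callTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  if (name === "search_symbols") {
    return [{ ref: "sym:callTool", file: "src/client.ts", startLine: 10, endLine: 42 }];
  }
  if (name === "get_symbol_body") {
    return "function callTool() { /* ... */ }";
  }
  throw new Error(`unknown tool: ${name}`);
}

// 1) cheap discovery, 2) pick a symbol by ref/file/lines, 3) read its body.
async function discoverSymbol(query: string): Promise<string> {
  const hits = (await callTool("search_symbols", { query })) as SymbolHit[];
  const top = hits[0]; // compact metadata only: no code bodies yet
  return (await callTool("get_symbol_body", { ref: top.ref })) as string;
}
```

Only the final get_symbol_body call pays for the full source; everything before it stays in compact metadata.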
See docs/agent-flows.md and AGENTS.md for task-specific flows.
Documentation
- Quickstart
- Tools
- Benchmarks
- Agent Flows
- Architecture
- Troubleshooting
- Demo Transcript
- Plugin Polish
- Roadmap
- Dogfooding Report
- Release Checklist
Local Development
npm install
npm test
npm run smoke:npx
npm run build

Run from source:

$env:PROJECT_PATH="C:\path\to\your\repo"
$env:PROJECT_ID="your-project-id"
npm start

Run benchmarks:

npm run benchmark

Publishing

npm test
npm pack --dry-run
npm publish --access public

prepack builds dist/src, and prepublishOnly runs the full test suite.
Notes
- This is a symbol memory/indexing server, not a replacement for source inspection.
- Compact outputs intentionally omit full code bodies to reduce token use during discovery.
- Full source remains available through get_symbol_body.
- find_callers returns AST definite callers and fuzzy probable callers.
- v0.3 development is focused on real-repository validation: stronger edge-case tests, task-shaped benchmarks, and dogfooding before adding broad new feature surfaces.
