# 1memory

v0.1.0

A local memory layer for coding agents.
1memory runs as an MCP server on your machine. Agents can write down what matters, recall it later, and carry useful context across sessions without a hosted service. Semantic recall uses vector search with a local ONNX embedding model (paraphrase-MiniLM-L3-v2) published on npm with the package, so it is always available offline.
Synthetic recall benchmark: ~83% recall@1
It is built for local-first agent work:
- No login
- No Docker
- No external database
- No hosted embedding API
- No background daemon
```json
{
  "mcpServers": {
    "1memory": {
      "command": "npx",
      "args": ["-y", "1memory", "mcp"]
    }
  }
}
```

## The Idea
Agents are good inside one context window. They are weaker across time.
They forget what was confirmed. They forget what was ruled out. They forget the handoff from the last session. They may ask you to repeat facts the previous agent already learned.
1memory gives agents a small local memory system:
- write memories explicitly
- recall memories with citations
- start sessions with compact context
- end sessions with a handoff
- inspect stored memories when something looks wrong
The goal is not to replace files, docs, or git history. The goal is to preserve the working memory that normally disappears when the chat ends.
## Memory Types
1memory stores four kinds of memory:
- **Facts:** stable information that should be available later. Example: "The retry bug only reproduces when provider retries overlap with the replay job."
- **Events:** things that happened at a point in time. Example: "On Monday, the investigation ruled out timezone parsing."
- **Instructions:** guidance the agent should follow in future sessions. Example: "When touching the installer, run the MCP integration tests before reporting completion."
- **Tasks:** open work and handoff items. Example: "Verify whether async ingest resumes after MCP restart."
This keeps memory structured enough for agents to use, but simple enough to inspect.
## Why Local
Memory is only useful if you can trust where it lives.
1memory stores data locally by default:
```
~/.1memory/
```

Under the hood it uses LanceDB as an embedded local store. The MCP server is started by your editor or agent client as a stdio process. There is no service to run, no account to create, and no remote database to provision.
Retrieval works locally too. Lexical, metadata, and vector search run on your machine. Vector search uses the same local ONNX embedding model (paraphrase-MiniLM-L3-v2) shipped on npm with 1memory, so semantic recall does not require a separate download or an embedding API.
## Quick Start
Requires Node.js 20 or newer.
Add 1memory to an MCP client:
```json
{
  "mcpServers": {
    "1memory": {
      "command": "npx",
      "args": ["-y", "1memory", "mcp"]
    }
  }
}
```

For a terminal UI to browse and search local memories:
```
npx -y 1memory explore
```

## Client Install
1memory can generate workspace-scoped MCP config for supported clients:
```
npx -y 1memory mcp install cursor
npx -y 1memory mcp install claude-code
npx -y 1memory mcp install claude-desktop
```

Preview the generated files:

```
npx -y 1memory mcp install cursor --dry-run
```

Choose a scope:

```
npx -y 1memory mcp install cursor --scope=workspace
```

The 0.1 installer focuses on workspace config: user-scope install planning is recognized, but workspace artifacts are used today.
## MCP Surface
The 0.1 server exposes the core local memory loop:
```
memory_capabilities
memory_health
memory_explain_setup
memory_profiles_list
memory_profile_current
memory_profile_select
memory_session_start
memory_context
memory_recall
memory_remember
memory_get
memory_list
memory_session_end
memory_ingest_status
```

A typical session:
- The agent starts a session and asks 1memory for compact context.
- During work, the agent recalls facts, events, instructions, or tasks.
- When something worth keeping is learned, the agent writes a memory.
- At the end, the agent records a handoff.
- A later session can pick up from the stored memory instead of starting cold.
## What 0.1 Includes
1memory 0.1 is focused on making local memory useful.
Included:
- MCP stdio server
- Local profile resolution
- Local LanceDB persistence
- Explicit memory writes
- Memory get, list, and recall
- Session start and session end records
- Compact context blocks
- Lexical, metadata, and vector retrieval with the npm-shipped local embedding model
- Request IDs, warnings, and predictable response envelopes
- Startup migrations and schema tracking
- Local write mutex for safer concurrent access
- Audit events for selected reads and writes
- Cursor, Claude Code, Claude Desktop, and generic MCP config generation
Not in the 0.1 core:
- Hosted sync
- Team accounts
- Remote admin UI
- `doctor` and export commands
- Advanced correction tools such as supersede, forget, timeline, verify, and feedback
## Architecture
```
MCP client
-> 1memory mcp
-> profile resolver
-> memory, session, and recall services
-> local LanceDB
```

The memory record is the source of truth. Recall results, context blocks, and future exports are derived from stored records rather than becoming separate hidden state.
## Development
```
pnpm install
```

Build:

```
pnpm run build
```

Typecheck:

```
pnpm run typecheck
```

Run tests:

```
pnpm test
```

Run from source:

```
pnpm run dev:mcp
```

## Benchmark
The retrieval benchmark runs against a synthetic corpus (benchmark/scenarios.json) and requires the local embedding model, the same one the tests use. Metrics vary by machine and load.
```
pnpm run benchmark:retrieval
```

Example output:
```
1memory retrieval benchmark (synthetic corpus, local embeddings)
cases=6 queries=18 limit=8
recall@1 83.3%
recall@8 100.0%
MRR 0.903
mean latency 24.84 ms
- handoff-retry-overlap: @1 100% @8 100% mrr 1.00 (9 mem, 3 q)
- mcp-stdio-instruction: @1 67% @8 100% mrr 0.83 (7 mem, 3 q)
- ingest-resume-semantics: @1 67% @8 100% mrr 0.75 (8 mem, 3 q)
- prefs-among-noise: @1 100% @8 100% mrr 1.00 (8 mem, 3 q)
- timezone-ruled-out: @1 100% @8 100% mrr 1.00 (6 mem, 3 q)
- dense-shared-vocabulary: @1 67% @8 100% mrr 0.83 (8 mem, 3 q)
```

Use `pnpm run benchmark:retrieval -- --json` for machine-readable results.
## CI and releases
CI runs on pushes and pull requests to main. Release steps, npm OIDC trusted publishing, and tagging are documented in RELEASE.md.
## License
Apache-2.0. See LICENSE.
