# Mixgram
MCP memory server compatible with Engram: Markdown as source of truth, SQLite FTS5 for full-text search. Optional local embeddings when you need hybrid search.
- Human-readable memory — All durable memory is stored as Markdown files.
- Git-friendly — Edit files by hand and reindex; the index is derived and rebuildable.
- Engram-compatible — Same MCP tool names and semantics; drop-in replacement.
## Advantages over Engram
- Visible, editable docs — The agent’s durable memory is ordinary documentation files on disk. You can open, read, and edit them by hand; memory is not hidden in a black box.
- Versioned — Memory lives in Markdown under your repo (e.g. `docs/<type>/` by default for project scope) or in a shared home (e.g. `home/memory/` for cross-project). Commit, branch, and diff as with any other docs.
- No semantic deps required — The core experience is text-only (FTS5). No need to install embeddings or vector libs unless you opt in.
## Installation
Global install (recommended for MCP clients):

```sh
npm install -g mixgram
```

Then use the `mixgram` command in your client config (see below). No extra dependencies are needed for core (text-only) memory; embeddings are optional.
## CLI
| Command | Description |
|---------|-------------|
| `mixgram mcp [options]` | Run the MCP server (stdio). This is what your client runs. |
| `mixgram setup cursor` | Add Mixgram to Cursor’s MCP config. |
| `mixgram setup gemini-cli` | Add Mixgram to Gemini CLI settings. |
| `mixgram setup codex` | Add Mixgram to Codex config. |
| `mixgram help` | Show usage. |
| `mixgram help <tool>` | Show options for a specific MCP tool. |
| `mixgram <tool> [--key value ...]` | Run any MCP tool from the command line (see below). |
**MCP tools from the CLI** — Every MCP tool is automatically available as a subcommand. No extra wiring is needed: if a tool is added to the internal registry, it is exposed both to MCP clients and to the CLI. Use `mixgram help` to list tools and `mixgram help <tool>` (or `mixgram <tool> --help`) to see a tool’s options.
## Command-line examples

```sh
# Stats and health
mixgram mem_stats
mixgram mem_stats --embeddings   # enable embeddings for this run (or use env/config, see below)

# Save a note
mixgram mem_save --title "My note" --content "Search" --type decision --project my-app
mixgram mem_save --title "Other note" --content "Model Context Protocol" --type architecture

# Search (default: merged scope, limit 10)
mixgram mem_search --query "index"
mixgram mem_search --query "index" --limit 5
mixgram mem_search --query "indexing" --scope-mode project-only --project my-app

# OR-style query: use the FTS5 keyword OR (and optional quoted phrases)
mixgram mem_search --query 'mcp OR "model context protocol"' --limit 10
mixgram mem_search --query 'cursor OR "model context protocol"' --limit 5

# Reindex from disk (e.g. after editing Markdown by hand)
mixgram mem_reindex --full

# Context (recent or by query)
mixgram mem_context --query "indexing decisions" --project my-app --limit 5
mixgram mem_context --limit 5
```

**Example: index and find by embedding (not FTS)** — Save a note with one wording, then search with different words that mean the same thing. FTS finds nothing; semantic search returns the doc:
```sh
# 1. Index a document (content about storing memory)
mixgram mem_save --title "Where we store memory" --content "We keep agent memory in Markdown under docs/." --type decision --project demo --embeddings

# 2. Search by meaning: the query has no literal overlap with the text
mixgram mem_search --query "where is the agent memory stored" --project demo --embeddings --limit 5
```

The first run embeds the document (the worker runs in-process and processes the queue after save; the first time can take a few seconds while the embedding model loads). The second run uses the embedding worker to turn the query into a vector and returns the document by similarity. You can also use `npm run example:embedding` for a self-contained demo in a temp dir.
**Query syntax (FTS5):** Search uses SQLite FTS5. You can use the operators `OR`, `AND`, `NOT`, and quoted phrases. For an OR-style search (e.g. “mcp” or “model context protocol”), use the word `OR` in the query, not the pipe character: `--query 'mcp OR "model context protocol"'`. Unquoted terms are ANDed by default; use `OR` explicitly to match any of several terms.
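A few more query shapes, sketched in the same CLI form as the examples above (standard FTS5 also supports `NOT` and parentheses for grouping; the query terms themselves are illustrative):

```shell
# Implicit AND: both terms must appear
mixgram mem_search --query 'sqlite index'

# NOT: match "index" but exclude documents mentioning "legacy"
mixgram mem_search --query 'index NOT legacy'

# Quoted phrase: exact word sequence
mixgram mem_search --query '"derived index"'

# Grouping: phrase OR term, excluding another term
mixgram mem_search --query '("model context protocol" OR mcp) NOT deprecated'
```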
Arguments are passed as `--key value`. Booleans use `--flag` or `--no-flag`. Arrays use repeated flags: `--tags a --tags b`.
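Putting those conventions together in one save-and-search run (the titles, contents, and tag values are illustrative):

```shell
# String args: --key value
mixgram mem_save --title "Retry policy" --content "Use exponential backoff." --type decision

# Arrays: repeat the flag once per value
mixgram mem_save --title "Tagged note" --content "Infra runbook." --tags infra --tags backend

# Booleans: --flag to enable (here, embeddings for this run)
mixgram mem_search --query "backoff" --embeddings
```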
**Enabling embeddings from the CLI:** For the MCP server use `mixgram mcp --embeddings`. For one-off tool runs (e.g. `mixgram mem_stats`, `mixgram mem_search`) you can pass `--embeddings` after the tool name, set the env var `MIXGRAM_EMBEDDINGS_ENABLED=1` (or `true`), or enable it in `.mixgram/config.json` with `"embeddings": { "enabled": true }`. Example: `mixgram mem_search --query "something" --embeddings --limit 5`.
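The three options side by side (the `mkdir` just ensures the config directory exists):

```shell
# 1. Flag on the server process
mixgram mcp --embeddings

# 2. Env var for a one-off tool run
MIXGRAM_EMBEDDINGS_ENABLED=1 mixgram mem_stats

# 3. Persistent project config
mkdir -p .mixgram
echo '{ "embeddings": { "enabled": true } }' > .mixgram/config.json
mixgram mem_stats
```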
`mcp` options (and env / config file):

| Option | Env | Description |
|--------|-----|-------------|
| `--config <path>` | `MIXGRAM_CONFIG` | Config file (default: `./.mixgram/config.json` or `~/.mixgram/config.json`). |
| `--embeddings` | `MIXGRAM_EMBEDDINGS_ENABLED` | Enable optional semantic (hybrid) search. |
| `--watch` | `MIXGRAM_WATCH` | Watch files and reindex on change. |
| `--home <path>` | `MIXGRAM_HOME` | Home memory root (cross-project). |
| `--project-memory <path>` | `MIXGRAM_PROJECT_MEMORY` | Project memory root (default: `./docs`, relative to repo). |
| `--projects <path>` | `MIXGRAM_PROJECTS` | Projects root (legacy). |
| `--sqlite-path <path>` | `MIXGRAM_SQLITE_PATH` | SQLite index path. |
## Example — Cursor

Create `.cursor/mcp.json` in your repo (recommended, and required for Cursor Cloud Agents) or add the same config to `~/.cursor/mcp.json` (or run `mixgram setup cursor`):

```json
{
  "mcpServers": {
    "mixgram": {
      "command": "mixgram",
      "args": ["mcp", "--project-memory", "${workspaceFolder}/docs"]
    }
  }
}
```

With embeddings and custom paths:

```json
"mixgram": {
  "command": "mixgram",
  "args": ["mcp", "--project-memory", "${workspaceFolder}/docs", "--embeddings", "--home", "/data/memory"]
}
```

Restart Cursor after changing the config.
## Quick start

Run the MCP server (stdio):

```sh
mixgram mcp
```

From the repo without a global install: `npx mixgram mcp` or `npm start`. Configure your MCP client with `command: "mixgram"`, `args: ["mcp"]` when Mixgram is installed globally.
## Configuration

When you run `mixgram mcp`, config is merged from (lowest to highest priority):

- Defaults (paths relative to the current directory)
- Config file — `./.mixgram/config.json` (project) or `~/.mixgram/config.json` (user), or `--config /path`
- Environment — `MIXGRAM_*` (see table above)
- CLI args — `--embeddings`, `--home`, etc.
Example `.mixgram/config.json` (project or home):

```json
{
  "homeMemoryRoot": "~/.mixgram/docs",
  "projectMemoryRoot": "./docs",
  "sqlitePath": "~/.mixgram/index.db",
  "embeddings": {
    "enabled": true
  }
}
```

Paths in the config file are relative to the config file’s directory (project) or to the current directory when using `~/.mixgram/config.json`. Project memory is resolved relative to the current working directory (repo) when you use a global config, so docs stay in the repo. The project memory folder name is configurable: set `projectMemoryRoot` to e.g. `./specs` to get `specs/architecture/`, `specs/decisions/`, etc.

Defaults:
| Option | Default | Description |
|--------|---------|-------------|
| `homeMemoryRoot` | `~/.mixgram/docs` | Cross-project memory (Markdown files). |
| `projectMemoryRoot` | `./docs` | Project memory in the current repo (e.g. `docs/architecture/`, `docs/decisions/`). You can use another folder name, e.g. `./specs` for `specs/architecture/`. |
| `sqlitePath` | `~/.mixgram/index.db` | SQLite index (FTS5 + optional vectors). |
| `watch` | `true` | Watch files and reindex on change. |
| `indexing.reindexOnStartup` | `true` | Full reindex when the server starts. |
| `embeddings.enabled` | `false` | Enable async local embeddings and hybrid search. |
| `search.defaultScopeMode` | `merged` | One of `project-only`, `home-only`, `merged`. |
| `search.ftsWeight` / `search.semanticWeight` | 0.7 / 0.3 | Weights when embeddings are enabled. |
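For example, to keep project memory under `specs/` instead of `docs/` (a sketch of the `projectMemoryRoot` override described above; the note content is illustrative):

```shell
# Point project memory at ./specs instead of the default ./docs
mkdir -p .mixgram
cat > .mixgram/config.json <<'EOF'
{
  "projectMemoryRoot": "./specs"
}
EOF

# New project-scope saves now land under specs/<type>/, e.g. specs/decisions/
mixgram mem_save --title "Pick a folder name" --content "Project memory lives in specs/." --type decision
```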
Example minimal override:

```js
import { run } from 'mixgram/src/mcp/server.js';

await run({
  homeMemoryRoot: '/data/memory',
  projectMemoryRoot: '/path/to/repo/docs',
  sqlitePath: '/data/.mixgram/index.db',
  watch: true
});
```

## Text-only memory
Core flow: save Markdown-backed documents, search with FTS5, get context.
### Save (create or update by topic)
```js
// mem_save — create or update by topic_key
{
  "title": "Use SQLite as derived index",
  "type": "decision",
  "scope": "project",
  "project": "my-app",
  "topic_key": "architecture/sqlite-derived-index",
  "content": "SQLite will be the derived index over Markdown documents."
}
```

- scope `project` → file under `<repo>/docs/<type>/...` (e.g. `docs/architecture/`, `docs/decisions/`).
- scope `home` → file under `home/memory/...` (cross-project).
- Same `topic_key` + scope + project → update the existing doc; otherwise create.
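The same call from the shell, run twice with the same topic to show update-in-place (a sketch assuming the CLI maps `topic_key` to `--topic-key`, the same way `scope_mode` maps to `--scope-mode` in the search examples):

```shell
# First run creates the document
mixgram mem_save --title "Use SQLite as derived index" --type decision --project my-app \
  --topic-key architecture/sqlite-derived-index \
  --content "SQLite will be the derived index over Markdown documents."

# Same topic_key + scope + project: updates the existing doc instead of creating a new one
mixgram mem_save --title "Use SQLite as derived index" --type decision --project my-app \
  --topic-key architecture/sqlite-derived-index \
  --content "SQLite remains the derived index; Markdown stays the source of truth."
```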
### Search

```js
// mem_search
{
  "query": "derived SQLite index",
  "scope_mode": "merged",
  "project": "my-app",
  "limit": 10
}
```

`scope_mode`:

- `project-only` — only project memory.
- `home-only` — only home memory.
- `merged` — both; project results ranked first when `project` is set.

The response includes `documentId`, `chunkId`, `title`, `topicKey`, `snippet`, `score`, etc.
### Context (for prompts)

```js
// mem_context — recent or query-based
{
  "query": "indexing decisions",
  "project": "my-app",
  "limit": 5
}
```

Omit `query` or use `*` to get recent context only.
### Get / update / delete

- `mem_get_observation` `{ "id": "<document-id>" }` — full document content.
- `mem_update` `{ "id": "<id>", "title": "...", "content": "..." }` — update by id.
- `mem_delete` `{ "id": "<id>", "hardDelete": false }` — soft or hard delete.
### Helpers

- `mem_suggest_topic_key` — suggest a stable `topic_key` from title/content/type.
- `mem_stats` — counts (documents, sessions, prompts, `embeddings_enabled`).
### Example: one round-trip

1. `mem_save` → `{ success: true, id: "<10-char id>", path: "...", created: true }`
2. `mem_search({ query: "SQLite", project: "my-app" })` → `{ results: [ { title, snippet, ... } ] }`
3. `mem_get_observation({ id: "<10-char id>" })` → `{ title, content, type, scope, project, ... }`

## Reindex, watch, and sessions
### Reindex from disk

Useful after cloning a repo or editing Markdown by hand:

```js
// mem_reindex
{ "full": true }
```

- `full: true` — rebuild the entire index from `home/**/*.md` and `docs/**/*.md` (project memory in repo).
- `full: false` (default) — incremental (by mtime).
Manual edit example:

1. Edit a `.md` file under `home/memory` or `docs/<type>/` in your repo.
2. Call `mem_reindex({ full: true })` (or run with `watch: true` so changes are picked up).
3. `mem_search` and `mem_get_observation` reflect the new content.
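The same flow from the shell (the file path and appended text are illustrative):

```shell
# 1. Tweak a memory document by hand
echo "Amended: we also index chunk offsets." >> docs/decisions/sqlite-derived-index.md

# 2. Rebuild the index from disk
mixgram mem_reindex --full

# 3. The manual edit is now searchable
mixgram mem_search --query "chunk offsets" --scope-mode project-only
```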
### Watcher

Start the server with `watch: true` so that adds, changes, and deletes under the memory paths trigger a reindex automatically.
### Sessions and prompts

- `mem_session_start` `{ "project": "my-app" }` → `{ session_id }`
- `mem_session_end` `{ "session_id": "..." }`
- `mem_session_summary` `{ "session_id": "...", "content": "...", "title": "..." }` — persists a summary as a memory document.
- `mem_save_prompt` `{ "session_id": "...", "content": "..." }` — store a prompt for the session.
- `mem_timeline` `{ "observation_id": "..." }` — before/focus/after in the same session.
## Scope examples

```js
// Only project memory
mem_search({ "query": "reindex", "scope_mode": "project-only", "project": "my-app" })

// Only home (cross-project)
mem_search({ "query": "reindex", "scope_mode": "home-only" })

// Both; project first
mem_search({ "query": "reindex", "scope_mode": "merged", "project": "my-app" })
```

## Optional semantic layer
When `embeddings.enabled` is true and the optional embedding stack is available:

- Saves are durable and searchable by text immediately; embeddings are queued and processed in the background.
- Search becomes hybrid: FTS + vector similarity, blended with `search.ftsWeight` and `search.semanticWeight` (e.g. 0.7 and 0.3).
- A worker runs in a separate process and processes the embedding queue (e.g. every 2 seconds). Query embeddings for hybrid search are also computed in that process via IPC, so the main MCP server never loads `@huggingface/transformers` and is not affected by native/ONNX crashes in the embedding stack.
### Enable embeddings

- Ensure the optional deps are installed if your setup uses them (e.g. `@huggingface/transformers`, `sqlite-vec`).
- Configure:

```js
await run({
  embeddings: {
    enabled: true,
    similarityThreshold: 0.80,
    maxRetries: 3
  },
  search: {
    ftsWeight: 0.7,
    semanticWeight: 0.3
  }
});
```

### Behaviour
- `mem_save` / `mem_update` return as soon as the Markdown is written and the text index is updated; they do not wait for embeddings.
- `mem_search` uses FTS only until vectors exist for the matched chunks; then it uses the hybrid blend.
- If the embedding provider or `sqlite-vec` is unavailable, save and search still work (text-only).
### Example: hybrid search

```js
// Same tool; behaviour depends on config and whether vectors exist
mem_search({
  "query": "persist derived indexes",
  "scope_mode": "merged",
  "project": "my-app",
  "limit": 5
})
```

With embeddings enabled and vectors ready, results combine literal matches (FTS) and semantic similarity (e.g. paraphrases).
### Example: indexing and search (embedding effectiveness)

To see the difference between FTS-only and semantic search in one run, run the demo script (it uses a temp dir; the first run may download the embedding model):

```sh
node examples/embedding-demo.js
# or
npm run example:embedding
```

The script:
- Indexes three short documents (e.g. “memory in Markdown”, “derived index”, “session persistence”).
- Processes the embedding queue so each document gets a vector.
- Runs an FTS search for a query that does not appear literally in the text (e.g. “where is the agent memory stored”) — FTS returns nothing.
- Runs a semantic (vector) search for the same query — it returns the relevant document by meaning.
Example output:

```
--- 3. FTS search (text only) ---
Query: "where is the agent memory stored"
(FTS finds no literal match; these words are not in the documents.)

--- 4. Semantic search (embeddings) ---
Query: "where is the agent memory stored"
- [Session persistence] similarity=0.856
  Persist the session context across agent restarts. The important state...
```

So: FTS only matches when the query words appear in the document; embeddings can retrieve documents that share the same meaning without sharing the same words.
## Tests

```sh
npm test
```

Runs black-box scenario tests for text-only memory, reindex/sessions/scope, and semantic fallback and error handling. Use `npm run test:visual` to see a narrative per scenario (inputs, outputs, snippets, and checks) for human review.
## License
MIT
