@waynesutton/agent-memory
v0.0.8-alpha.1
@waynesutton/agent-memory (Alpha)
Alpha — This package is under active development. APIs may change between releases.
A Convex Component for persistent, cloud-synced agent memory. Markdown-first memory backend for AI coding agents across CLIs and IDEs — with intelligent ingest, feedback loops, memory relations, and relevance decay.
What It Does
AI coding agents (Claude Code, Cursor, OpenCode, Codex, Conductor, Zed, VS Code Copilot, Pi) all use local, file-based memory systems (CLAUDE.md, .cursor/rules, AGENTS.md, etc.). These are siloed to one machine with no shared backend, no cross-tool sync, and no queryable search.
@waynesutton/agent-memory creates a cloud-synced, markdown-first memory backend as a Convex Component. It:
- Stores structured memories in Convex with full-text search and vector/semantic search
- Intelligently ingests raw conversations into deduplicated memories via LLM pipeline
- Exports memories in each tool's native format (.cursor/rules/*.mdc, .claude/rules/*.md, AGENTS.md, etc.)
- Syncs bidirectionally between local files and the cloud
- Tracks memory history (full audit trail of all changes)
- Supports feedback loops — agents rate memories as helpful or unhelpful
- Builds memory relations (graph connections between related memories)
- Applies relevance decay — stale, low-access memories lose priority over time
- Scopes memories by agent, session, user, and project
- Provides a read-only HTTP API with bearer token auth and rate limiting
- Exposes an MCP server with 14 tools for direct agent integration
- Works standalone with any Convex app — no dependency on @convex-dev/agent
Table of Contents
- Installation
- Quick Start
- Convex Component Setup
- Client API
- Intelligent Ingest
- Memory History
- Feedback & Scoring
- Memory Relations
- Relevance Decay
- CLI Usage
- MCP Server
- Schema
- Memory Types
- Tool Format Support
- Vector Search
- Read-Only HTTP API
- Security
- Architecture
Installation
npm install @waynesutton/agent-memory

Peer dependency:

npm install convex

Uninstall

npm uninstall @waynesutton/agent-memory

Then remove the app.use(agentMemory) line from convex/convex.config.ts and delete your convex/memory.ts wrapper file (or whichever module you created with createApi).
Quick Start
1. Add the component to your Convex app
// convex/convex.config.ts
import { defineApp } from "convex/server";
import agentMemory from "@waynesutton/agent-memory/convex.config.js";
const app = defineApp();
app.use(agentMemory);
export default app;

2. Expose public functions via createApi
Convex component functions are internal — they can't be called by external clients (CLI, MCP, HTTP). The createApi factory generates function definitions that you export as public functions:
// convex/memory.ts
import { query, mutation, action } from "./_generated/server";
import { components } from "./_generated/api";
import { createApi } from "@waynesutton/agent-memory";
const api = createApi(components.agentMemory);
// Queries
export const list = query(api.queries.list);
export const get = query(api.queries.get);
export const search = query(api.queries.search);
export const getContextBundle = query(api.queries.getContextBundle);
export const history = query(api.queries.history);
export const getRelations = query(api.queries.getRelations);
export const exportForTool = query(api.queries.exportForTool);
// Mutations
export const create = mutation(api.mutations.create);
export const update = mutation(api.mutations.update);
export const archive = mutation(api.mutations.archive);
export const restore = mutation(api.mutations.restore);
export const batchArchive = mutation(api.mutations.batchArchive);
export const addFeedback = mutation(api.mutations.addFeedback);
export const addRelation = mutation(api.mutations.addRelation);
export const importFromLocal = mutation(api.mutations.importFromLocal);
export const upsertProject = mutation(api.mutations.upsertProject);
// API key management (needed for HTTP API)
export const createApiKey = mutation(api.mutations.createApiKey);
export const revokeApiKey = mutation(api.mutations.revokeApiKey);
export const listApiKeys = query(api.queries.listApiKeys);
// Actions
export const ingest = action(api.actions.ingest);
export const semanticSearch = action(api.actions.semanticSearch);

The CLI and MCP server call these functions by module name (default: "memory", matching the filename convex/memory.ts).
Note: You can also use the AgentMemory class directly inside your own Convex functions for custom logic. See Client API.
3. Deploy and initialize
npx convex dev
npx agent-memory init --project my-app --name "My App"

The init command registers your project in the Convex backend. It requires upsertProject to be exported (step 2 above).
Convex Component Setup
Standalone (no other components)
// convex/convex.config.ts
import { defineApp } from "convex/server";
import agentMemory from "@waynesutton/agent-memory/convex.config.js";
const app = defineApp();
app.use(agentMemory);
export default app;

Alongside @convex-dev/agent
Both components coexist independently with isolated tables:
// convex/convex.config.ts
import { defineApp } from "convex/server";
import agentMemory from "@waynesutton/agent-memory/convex.config.js";
import agent from "@convex-dev/agent/convex.config.js";
const app = defineApp();
app.use(agentMemory); // isolated tables for persistent memories
app.use(agent); // isolated tables for threads/messages
export default app;

You can load memories into an Agent's system prompt:
// convex/myAgent.ts
import { action } from "./_generated/server";
import { v } from "convex/values";
import { components } from "./_generated/api";
import { AgentMemory } from "@waynesutton/agent-memory";
import { Agent } from "@convex-dev/agent";
const memory = new AgentMemory(components.agentMemory, {
projectId: "my-app",
});
export const chat = action({
args: { message: v.string() },
handler: async (ctx, args) => {
const bundle = await memory.getContextBundle(ctx);
const memoryContext = bundle.pinned
.map((m) => `## ${m.title}\n${m.content}`)
.join("\n\n");
const myAgent = new Agent(components.agent, {
model: "claude-sonnet-4-6",
instructions: `You are a helpful assistant.\n\nRelevant memories:\n${memoryContext}`,
});
// ... use agent as normal
},
});

Client API
Constructor
import { AgentMemory } from "@waynesutton/agent-memory";
const memory = new AgentMemory(components.agentMemory, {
projectId: "my-project", // required: unique project identifier
defaultScope: "project", // optional: "project" | "user" | "org"
userId: "user-123", // optional: for user-scoped memories
agentId: "claude-code", // optional: agent identifier
sessionId: "session-abc", // optional: session/conversation ID
embeddingApiKey: "sk-...", // optional: enables vector search
embeddingModel: "text-embedding-3-small", // optional
llmApiKey: "sk-...", // optional: enables intelligent ingest
llmModel: "gpt-4.1-nano", // optional: LLM for fact extraction
llmBaseUrl: "https://api.openai.com/v1", // optional: custom LLM endpoint
});

Read Operations (query context)
// List with rich filters
const all = await memory.list(ctx);
const byAgent = await memory.list(ctx, { agentId: "claude-code" });
const bySession = await memory.list(ctx, { sessionId: "session-123" });
const bySource = await memory.list(ctx, { source: "mcp" });
const byTags = await memory.list(ctx, { tags: ["api", "auth"] });
const recent = await memory.list(ctx, { createdAfter: Date.now() - 86400000 });
// Get a single memory
const mem = await memory.get(ctx, "jh72k...");
// Full-text search
const results = await memory.search(ctx, "API authentication");
// Progressive context bundle (feedback-boosted priority)
const bundle = await memory.getContextBundle(ctx, {
activePaths: ["src/api/routes.ts"],
});
// Export as tool-native files
const files = await memory.exportForTool(ctx, "cursor");

Write Operations (mutation context)
// Create (auto-records history)
const id = await memory.remember(ctx, {
title: "api-conventions",
content: "# API Conventions\n\n- Use camelCase\n- Return JSON",
memoryType: "instruction",
tags: ["api", "style"],
paths: ["src/api/**"],
priority: 0.9,
});
// Update (auto-records history)
await memory.update(ctx, id, { content: "Updated content", priority: 1.0 });
// Archive & restore (both record history)
await memory.forget(ctx, id);
await memory.restore(ctx, id);
// Batch operations
await memory.batchArchive(ctx, ["id1", "id2", "id3"]);
await memory.batchUpdate(ctx, [
{ memoryId: "id1", priority: 0.9 },
{ memoryId: "id2", tags: ["updated"] },
]);

History & Audit Trail (query context)
// Get change history for a memory
const history = await memory.history(ctx, id);
// [{ event: "created", actor: "mcp", timestamp: ..., newContent: "..." }, ...]
// Get recent changes across the project
const recent = await memory.projectHistory(ctx, { limit: 20 });

Feedback & Scoring (mutation/query context)
// Rate memories
await memory.addFeedback(ctx, id, "positive", { comment: "Very helpful rule" });
await memory.addFeedback(ctx, id, "negative", { comment: "Outdated information" });
// View feedback
const feedback = await memory.getFeedback(ctx, id);

Memory Relations (mutation/query context)
// Create relationships
await memory.addRelation(ctx, memoryA, memoryB, "extends");
await memory.addRelation(ctx, memoryC, memoryA, "contradicts", { confidence: 0.9 });
// View relationships
const relations = await memory.getRelations(ctx, memoryA);
const contradictions = await memory.getRelations(ctx, memoryA, { relationship: "contradicts" });
// Remove a relationship
await memory.removeRelation(ctx, relationId);

Access Tracking (mutation context)
// Record that memories were read (for relevance decay)
await memory.recordAccess(ctx, ["id1", "id2"]);

Embedding Operations (action context)
// Single embedding
await memory.embed(ctx, id);
// Batch embed all un-embedded
const result = await memory.embedAll(ctx);
// Vector similarity search (falls back to full-text)
const results = await memory.semanticSearch(ctx, "how to handle auth errors");

Intelligent Ingest
The core "smart memory" feature. Instead of manually creating memories, feed raw text and let the LLM pipeline:
- Extract discrete facts/learnings from conversations or notes
- Search existing memories for overlap (semantic deduplication)
- Decide per-fact: ADD new, UPDATE existing, DELETE contradicted, or SKIP
- Return a structured changelog of what happened
const memory = new AgentMemory(components.agentMemory, {
projectId: "my-app",
llmApiKey: process.env.OPENAI_API_KEY,
});
// In an action context
const result = await memory.ingest(ctx,
`User prefers TypeScript strict mode. The API should use camelCase.
Actually, we switched from REST to GraphQL last week.
The old REST convention docs are outdated.`
);
// result.results:
// [
// { event: "added", content: "User prefers TypeScript strict mode", memoryId: "..." },
// { event: "updated", content: "API uses camelCase with GraphQL", memoryId: "...", previousContent: "..." },
// { event: "deleted", content: "REST API conventions", memoryId: "...", previousContent: "..." },
// ]

Custom Prompts
Override the extraction and decision prompts per-project:
await memory.ingest(ctx, rawText, {
customExtractionPrompt: "Extract only coding conventions and preferences...",
customUpdatePrompt: "When facts conflict, always prefer the newer one...",
});

Or set them in project settings via the CLI:

npx agent-memory init --project my-app --name "My App"

Memory History
Every create, update, archive, restore, and merge operation records a history entry. This provides a complete audit trail of how memories change over time.
const history = await memory.history(ctx, memoryId);
// Returns: MemoryHistoryEntry[]
// Each entry has: event, actor, timestamp, previousContent, newContent

History is automatically cleaned up by a weekly cron job (entries older than 90 days are removed).
Feedback & Scoring
Agents and users can rate memories. Feedback affects the effective priority used in context bundles:
- Each positive feedback adds up to +0.05 priority (max +0.2 boost)
- Each negative feedback subtracts up to -0.1 priority (max -0.5 penalty)
- very_negative counts as negative with a stronger signal
This means good memories naturally float to the top of context bundles, while bad ones sink — without manual priority management.
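The documented caps can be sketched as a small scoring function. This is a sketch of the documented behavior, not the component's internal code; the clamp to the 0-1 priority scale is an assumption.

```typescript
// Sketch: effective priority from feedback counts, using the documented caps.
function effectivePriority(
  basePriority: number,
  positiveCount: number,
  negativeCount: number,
): number {
  const boost = Math.min(positiveCount * 0.05, 0.2); // +0.05 each, capped at +0.2
  const penalty = Math.min(negativeCount * 0.1, 0.5); // -0.1 each, capped at -0.5
  // Clamp to the 0-1 priority scale (assumption).
  return Math.min(1, Math.max(0, basePriority + boost - penalty));
}
```

For example, a 0.5-priority memory with two positive ratings surfaces at 0.6, while heavy negative feedback can push it all the way to 0.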
Memory Relations
Build a knowledge graph between memories:
| Relationship | Meaning |
|-------------|---------|
| extends | Memory B adds detail to Memory A |
| contradicts | Memory B conflicts with Memory A |
| replaces | Memory B supersedes Memory A |
| related_to | General association |
Relations are directional (from -> to) and queryable by direction and type.
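Modeled as plain data, the directional edges might be filtered like this. The field names (from, to, relationship) are illustrative, not the component's actual schema.

```typescript
// Sketch of the directional relation model and a query helper.
type Relationship = "extends" | "contradicts" | "replaces" | "related_to";

interface Relation {
  from: string; // memory ID the edge starts at
  to: string; // memory ID the edge points to
  relationship: Relationship;
  confidence?: number;
}

// All edges touching a memory, optionally filtered by type and direction.
function relationsFor(
  edges: Relation[],
  memoryId: string,
  opts: { relationship?: Relationship; direction?: "from" | "to" } = {},
): Relation[] {
  return edges.filter((e) => {
    const touches =
      opts.direction === "from" ? e.from === memoryId
      : opts.direction === "to" ? e.to === memoryId
      : e.from === memoryId || e.to === memoryId;
    return touches && (!opts.relationship || e.relationship === opts.relationship);
  });
}
```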
Relevance Decay
Memories that aren't accessed lose priority over time, preventing stale memories from dominating context windows.
How it works:
- A daily cron job (3 AM UTC) checks all non-pinned memories
- Memories with a low access count and an old lastAccessedAt get reduced priority
- Decay follows an exponential half-life (configurable per-project, default 30 days)
- Pinned memories (priority >= 0.8) are never decayed
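Under these rules, the decay curve can be sketched as follows; the cron's exact formula may differ.

```typescript
// Sketch: exponential half-life decay based on time since last access.
function decayedPriority(
  priority: number,
  lastAccessedAt: number, // ms epoch
  now: number,
  halfLifeDays = 30,
): number {
  if (priority >= 0.8) return priority; // pinned memories never decay
  const idleDays = (now - lastAccessedAt) / 86_400_000;
  return priority * Math.pow(0.5, idleDays / halfLifeDays);
}
```

With the default half-life, a 0.5-priority memory untouched for 30 days drops to 0.25.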
Enable per-project (via the upsertProject mutation exposed by createApi):
// In a mutation context (the upsertProject function is exported from convex/memory.ts)
await ctx.runMutation(api.memory.upsertProject, {
projectId: "my-app",
name: "My App",
settings: {
autoSync: false,
syncFormats: [],
decayEnabled: true,
decayHalfLifeDays: 30,
},
});

CLI Usage
The CLI syncs memories between local tool files and Convex.
Environment Variables
export CONVEX_URL="https://your-deployment.convex.cloud"
export AGENT_MEMORY_MODULE="memory" # optional, defaults to "memory"

The AGENT_MEMORY_MODULE variable (or --module flag) tells the CLI which Convex module exports the createApi functions. It must match your filename (e.g. convex/memory.ts = "memory").
Commands
npx agent-memory init
Detect tools in the current directory and register the project.
npx agent-memory init --project my-app --name "My App"

Detects: Claude Code, Cursor, OpenCode, Codex, Conductor, Zed, VS Code Copilot, Pi.
npx agent-memory push
Push local memory files to Convex.
npx agent-memory push --project my-app
npx agent-memory push --project my-app --format claude-code # specific tool only

npx agent-memory pull
Pull memories from Convex to local files.
npx agent-memory pull --project my-app --format cursor
npx agent-memory pull --project my-app --format claude-code

npx agent-memory list
List all memories in the terminal.
npx agent-memory list --project my-app
npx agent-memory list --project my-app --type instruction

npx agent-memory search <query>
Search memories from the terminal.
npx agent-memory search "API conventions" --project my-app --limit 5npx agent-memory mcp
Start the MCP server (see MCP Server section).
npx agent-memory mcp --project my-app
npx agent-memory mcp --project my-app --llm-api-key $OPENAI_API_KEY # enable ingest

Hook Integration
Auto-sync on Claude Code session start/end:
// .claude/settings.json
{
"hooks": {
"SessionStart": [{
"hooks": [{ "type": "command", "command": "npx agent-memory pull --format claude-code" }]
}],
"SessionEnd": [{
"hooks": [{ "type": "command", "command": "npx agent-memory push --format claude-code" }]
}]
}
}

MCP Server
The MCP server runs as a local process, bridging AI tools to your Convex backend via stdio/JSON-RPC.
┌─────────────┐ stdio/JSON-RPC ┌──────────────────┐ ConvexHttpClient ┌─────────┐
│ Claude Code │ <────────────────> │ MCP Server │ <──────────────────> │ Convex │
│ Cursor │ │ (local process) │ │ Cloud │
│ VS Code │ │ npx agent-memory │ │ │
└─────────────┘                     └──────────────────┘                      └─────────┘

Starting the Server
npx agent-memory mcp --project my-app

Options:
| Flag | Description |
|------|-------------|
| --project <id> | Project ID (default: "default") |
| --module <name> | Convex module with createApi exports (default: "memory") |
| --read-only | Disable write operations |
| --disable-tools <tools> | Comma-separated tool names to disable |
| --embedding-api-key <key> | Enable vector search |
| --llm-api-key <key> | Enable intelligent ingest |
| --llm-model <model> | LLM model for ingest (default: "gpt-4.1-nano") |
MCP Tools (14 total)
| Tool | Description |
|------|-------------|
| memory_remember | Save a new memory (with agent/session scoping) |
| memory_recall | Search memories by keyword (full-text) |
| memory_semantic_recall | Search memories by meaning (vector) |
| memory_list | List memories with filters (agent, session, source, tags) |
| memory_context | Get context bundle (pinned + relevant) |
| memory_forget | Archive a memory |
| memory_restore | Restore an archived memory |
| memory_update | Update an existing memory |
| memory_history | View change audit trail |
| memory_feedback | Rate a memory as helpful/unhelpful |
| memory_relate | Create relationship between memories |
| memory_relations | View memory graph connections |
| memory_batch_archive | Archive multiple memories at once |
| memory_ingest | Intelligently extract memories from raw text |
MCP Resources
| URI | Description |
|-----|-------------|
| memory://project/{id}/pinned | High-priority memories auto-loaded at session start |
Configuration in Claude Code
// .claude/settings.json
{
"mcpServers": {
"agent-memory": {
"command": "npx",
"args": [
"agent-memory", "mcp",
"--project", "my-app",
"--llm-api-key", "${env:OPENAI_API_KEY}"
],
"env": {
"CONVEX_URL": "${env:CONVEX_URL}",
"OPENAI_API_KEY": "${env:OPENAI_API_KEY}"
}
}
}
}

Configuration in Cursor
// .cursor/mcp.json
{
"mcpServers": {
"agent-memory": {
"command": "npx",
"args": [
"agent-memory", "mcp",
"--project", "my-app",
"--llm-api-key", "${env:OPENAI_API_KEY}"
],
"env": {
"CONVEX_URL": "${env:CONVEX_URL}",
"OPENAI_API_KEY": "${env:OPENAI_API_KEY}"
}
}
}
}

Schema
The component creates 9 isolated tables in your Convex deployment:
memories
| Field | Type | Description |
|-------|------|-------------|
| projectId | string | Project identifier |
| scope | "project" \| "user" \| "org" | Visibility scope |
| userId | string? | Owner for user-scoped memories |
| agentId | string? | Agent that created/owns this memory |
| sessionId | string? | Session/conversation ID |
| title | string | Short title/slug |
| content | string | Markdown content |
| memoryType | MemoryType | Category (see below) |
| tags | string[] | Searchable tags |
| paths | string[]? | File glob patterns for relevance matching |
| priority | number? | 0-1 scale (>= 0.8 = pinned) |
| source | string? | Origin tool ("claude-code", "cursor", "mcp", "ingest", etc.) |
| checksum | string | FNV-1a hash for change detection |
| archived | boolean | Soft delete flag |
| embeddingId | Id? | Link to vector embedding |
| accessCount | number? | Times this memory was accessed |
| lastAccessedAt | number? | Timestamp of last access |
| positiveCount | number? | Positive feedback count |
| negativeCount | number? | Negative feedback count |
Indexes: by_project, by_project_scope, by_project_title, by_type_priority, by_agent, by_session, by_source, by_last_accessed
Search indexes: search_content (full-text on content), search_title (full-text on title)
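The checksum field uses 32-bit FNV-1a. A minimal sketch of the hash and how change detection could use it; the component's checksum.ts may normalize content before hashing.

```typescript
// Sketch: 32-bit FNV-1a over a string's UTF-16 code units.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193); // multiply by the FNV prime, mod 2^32
  }
  return hash >>> 0; // as unsigned 32-bit
}

// Change detection: re-hash the content and compare with the stored checksum.
const changed = (content: string, stored: number) => fnv1a(content) !== stored;
```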
embeddings
Vector embeddings for semantic search. Linked to memories via memoryId.
Vector index: by_embedding (1536 dimensions, OpenAI-compatible)
projects
Project registry with settings for sync, custom prompts, and decay configuration.
syncLog
Tracks push/pull sync events for conflict detection.
memoryHistory
Audit trail of all memory changes: created, updated, archived, restored, merged.
memoryFeedback
Quality signals from agents/users: positive, negative, very_negative with optional comments.
memoryRelations
Directional graph connections between memories with relationship types and metadata.
apiKeys
Bearer tokens for the read-only HTTP API. Stores hashed keys with per-key permissions, rate limit overrides, and expiry.
rateLimitTokens
Fixed-window token counters for HTTP API rate limiting. Cleaned up hourly by cron.
Memory Types
| Type | Description | Maps To |
|------|-------------|---------|
| instruction | Rules and conventions | .claude/rules/, .cursor/rules/, AGENTS.md |
| learning | Auto-discovered patterns | Claude Code auto-memory |
| reference | Architecture docs, API specs | Reference documentation |
| feedback | Corrections, preferences | User feedback on behavior |
| journal | Session logs | OpenCode journal entries |
Tool Format Support
The component reads from and writes to 8 tool formats:
| Tool | Parser Reads | Formatter Writes |
|------|-------------|-----------------|
| Claude Code | .claude/rules/*.md (YAML frontmatter) | .claude/rules/<title>.md |
| Cursor | .cursor/rules/*.mdc (YAML frontmatter) | .cursor/rules/<title>.mdc |
| OpenCode | AGENTS.md (## sections) | AGENTS.md or journal/<title>.md |
| Codex | AGENTS.md, AGENTS.override.md | AGENTS.md or <dir>/AGENTS.md |
| Conductor | .conductor/rules/*.md | .conductor/rules/<title>.md |
| Zed | .zed/rules/*.md | .zed/rules/<title>.md |
| VS Code Copilot | .github/copilot-instructions.md, .copilot/rules/*.md | .github/copilot-instructions.md |
| Pi | .pi/rules/*.md | .pi/rules/<title>.md |
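As an illustration, a Cursor rule file could be assembled like this. The description and globs frontmatter fields follow Cursor's .mdc convention; the component's formatter may emit additional fields, and the MemoryDoc shape here is a simplified stand-in.

```typescript
// Sketch: render a memory as a .cursor/rules/*.mdc file with YAML frontmatter.
interface MemoryDoc {
  title: string;
  content: string;
  tags: string[];
  paths?: string[];
}

function toCursorRule(mem: MemoryDoc): { path: string; text: string } {
  const frontmatter = [
    "---",
    `description: ${mem.title}`,
    ...(mem.paths?.length ? [`globs: ${mem.paths.join(", ")}`] : []),
    "---",
  ].join("\n");
  return {
    path: `.cursor/rules/${mem.title}.mdc`,
    text: `${frontmatter}\n\n${mem.content}\n`,
  };
}
```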
Cross-Tool Sync Example
Push from Claude Code, pull as Cursor rules:
# In your project directory with .claude/rules/ files
npx agent-memory push --project my-app --format claude-code
# On another machine or for another tool
npx agent-memory pull --project my-app --format cursor
# Creates .cursor/rules/*.mdc files with proper frontmatter

Vector Search
Vector search is opt-in. Everything works with full-text search by default.
Enabling Vector Search
Pass an OpenAI API key when configuring:
const memory = new AgentMemory(components.agentMemory, {
projectId: "my-app",
embeddingApiKey: process.env.OPENAI_API_KEY,
embeddingModel: "text-embedding-3-small", // default
});

Or via CLI/MCP:
npx agent-memory mcp --project my-app --embedding-api-key $OPENAI_API_KEY

Backfilling Embeddings
// In an action context
const result = await memory.embedAll(ctx);
console.log(`Embedded ${result.embedded} memories, skipped ${result.skipped}`);

Fallback Behavior
If no embedding API key is provided, semanticSearch automatically falls back to full-text search. No errors, no configuration changes needed.
Read-Only HTTP API
Expose memories as REST endpoints for dashboards, CI/CD, and external integrations. The component provides a MemoryHttpApi class that generates httpAction handlers — your app mounts them on its own httpRouter.
Setup
// convex/http.ts
import { httpRouter } from "convex/server";
import { MemoryHttpApi } from "@waynesutton/agent-memory/http";
import { components } from "./_generated/api";
const http = httpRouter();
const memoryApi = new MemoryHttpApi(components.agentMemory, {
corsOrigins: ["https://myapp.com"], // optional, defaults to ["*"]
});
memoryApi.mount(http, "/api/memory");
export default http;

Creating API Keys
If you exported createApiKey via createApi (see Quick Start), you can call it directly:
npx convex run memory:createApiKey '{"projectId":"my-app","name":"Dashboard key","permissions":["list","search","context"]}'

For auth-gated key creation, wrap it in your own mutation:
// convex/memory.ts — behind your app's own auth
import { mutation } from "./_generated/server";
import { components } from "./_generated/api";
import { AgentMemory } from "@waynesutton/agent-memory";
const memory = new AgentMemory(components.agentMemory, { projectId: "my-app" });
export const createReadKey = mutation({
args: {},
handler: async (ctx) => {
const identity = await ctx.auth.getUserIdentity();
if (!identity) throw new Error("Not authenticated");
return await memory.createApiKey(ctx, {
name: "Dashboard key",
permissions: ["list", "search", "context"],
});
},
});

Using the API
# List memories
curl -H "Authorization: Bearer am_<key>" \
https://your-deployment.convex.cloud/api/memory/list
# Search
curl -H "Authorization: Bearer am_<key>" \
"https://your-deployment.convex.cloud/api/memory/search?q=API+conventions"
# Get context bundle
curl -H "Authorization: Bearer am_<key>" \
https://your-deployment.convex.cloud/api/memory/context

Endpoints
| Path | Permission | Description |
|------|------------|-------------|
| /list | list | List memories with filters |
| /get?id=<id> | get | Get single memory |
| /search?q=<query> | search | Full-text search |
| /context | context | Progressive context bundle |
| /export?format=<format> | export | Export in tool format |
| /history?id=<id> | history | Memory audit trail |
| /relations?id=<id> | relations | Memory graph |
Rate Limiting
Self-contained fixed-window token bucket (no external dependency):
- Default: 100 requests per 60 seconds
- Configurable: per-key override > per-project setting > global default
- Returns 429 with retryAfterMs when exceeded
- Old window records are cleaned up hourly by cron
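The fixed-window scheme described above can be sketched in memory; the component persists its counters in the rateLimitTokens table instead.

```typescript
// Sketch: fixed-window rate limiter keyed by API key.
class FixedWindowLimiter {
  private windows = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit = 100, private windowMs = 60_000) {}

  // Returns null when the request is allowed, or the ms to wait when denied.
  consume(key: string, now: number): { retryAfterMs: number } | null {
    const w = this.windows.get(key);
    if (!w || now - w.windowStart >= this.windowMs) {
      // Start a fresh window for this key.
      this.windows.set(key, { windowStart: now, count: 1 });
      return null;
    }
    if (w.count >= this.limit) {
      return { retryAfterMs: w.windowStart + this.windowMs - now };
    }
    w.count++;
    return null;
  }
}
```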
Available Permissions
list, get, search, context, export, history, relations
Security
6 layers of protection:
| Layer | What | How |
|-------|------|-----|
| Deployment URL | Gate to your Convex backend | CONVEX_URL env var required. Each app has its own isolated deployment. |
| Auth Token | Authenticates the caller | Optional CONVEX_AUTH_TOKEN for production/team use. |
| API Keys | HTTP API access control | Bearer tokens with per-key permissions, expiry, and rate limits. Keys stored as hashes. |
| Project Scope | Isolates by project | --project flag. MCP server and API keys only access that project's memories. |
| Tool Disabling | Restrict operations | --read-only or --disable-tools flags for fine-grained control. |
| Convex Isolation | Runtime sandboxing | Component tables are isolated. Queries can't write. Mutations are transactional. |
Examples
# Full access (default)
npx agent-memory mcp --project my-app
# Read-only (no write/delete/ingest tools)
npx agent-memory mcp --project my-app --read-only
# Disable specific tools
npx agent-memory mcp --project my-app --disable-tools memory_forget,memory_ingest

MCP Config with Secrets
{
"mcpServers": {
"agent-memory": {
"command": "npx",
"args": ["agent-memory", "mcp", "--project", "my-app"],
"env": {
"CONVEX_URL": "${env:CONVEX_URL}",
"CONVEX_AUTH_TOKEN": "${env:CONVEX_AUTH_TOKEN}"
}
}
}
}

Architecture
@waynesutton/agent-memory
├── src/
│ ├── component/ # Convex backend (defineComponent)
│ │ ├── schema.ts # 9 tables: memories, embeddings, projects, syncLog,
│ │ │ # memoryHistory, memoryFeedback, memoryRelations,
│ │ │ # apiKeys, rateLimitTokens
│ │ ├── mutations.ts # CRUD + batch + feedback + relations + history tracking
│ │ ├── queries.ts # list, search, context bundle, history, feedback, relations
│ │ ├── actions.ts # embeddings, semantic search, intelligent ingest pipeline
│ │ ├── apiKeyMutations.ts # API key create/revoke, rate limit consume
│ │ ├── apiKeyQueries.ts # API key validation, listing
│ │ ├── crons.ts # Daily relevance decay + weekly history cleanup + hourly rate limit cleanup
│ │ ├── cronActions.ts # Internal actions called by cron jobs
│ │ ├── cronQueries.ts # Internal queries for cron job data
│ │ ├── format.ts # Memory -> tool-native file conversion
│ │ └── checksum.ts # FNV-1a content hashing
│ ├── client/
│ │ ├── index.ts # AgentMemory class (public API)
│ │ ├── api.ts # createApi factory (generates consumer function definitions)
│ │ └── http.ts # MemoryHttpApi class (read-only HTTP API)
│ ├── cli/
│ │ ├── index.ts # CLI: init, push, pull, list, search, mcp
│ │ ├── sync.ts # Push/pull sync logic
│ │ └── parsers/ # 8 tool parsers (local files -> memories)
│ ├── mcp/
│ │ └── server.ts # MCP server (14 tools + resources)
│ ├── shared.ts # Shared types and validators
│ └── test.ts # Test helper for convex-test
└── example/
    └── convex/           # Example Convex app

Key Design Principles
- Works without any API key — full-text search, CRUD, sync, export, history, feedback, and relations all work with zero external dependencies
- Vector search is opt-in — pass embeddingApiKey to enable; falls back to full-text automatically
- Intelligent ingest is opt-in — pass llmApiKey to enable LLM-powered fact extraction and deduplication
- Standalone — no dependency on @convex-dev/agent or any other component
- Markdown-first — memories are markdown documents with optional YAML frontmatter
- Checksum-based sync — only changed content is pushed/pulled (FNV-1a hashing)
- Progressive disclosure — context bundles tier memories as pinned/relevant/available
- Feedback-boosted scoring — positive feedback raises priority; negative feedback lowers it
- Self-maintaining — cron jobs handle relevance decay and history cleanup automatically
- Multi-dimensional scoping — project + user + agent + session isolation
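The progressive-disclosure tiering can be sketched as follows. The pinned threshold (priority >= 0.8) comes from the schema docs; the prefix matching here is a simplified stand-in for the component's actual glob matching.

```typescript
// Sketch: tier memories into pinned / relevant / available for a context bundle.
interface Mem {
  title: string;
  priority: number;
  paths?: string[];
}

function tier(memories: Mem[], activePaths: string[] = []) {
  const pinned = memories.filter((m) => m.priority >= 0.8);
  const rest = memories.filter((m) => m.priority < 0.8);
  // Naive prefix match standing in for glob matching on the paths field.
  const matches = (m: Mem) =>
    (m.paths ?? []).some((p) =>
      activePaths.some((a) => a.startsWith(p.replace(/\*.*$/, ""))),
    );
  return {
    pinned,
    relevant: rest.filter(matches),
    available: rest.filter((m) => !matches(m)),
  };
}
```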
Why Convex
Built as a Convex Component, agent-memory inherits powerful guarantees that file-based memory systems (CLAUDE.md, .cursor/rules) cannot provide:
- Real-time reactive queries — memories update live across all connected clients. When one agent saves a memory, every other agent sees it instantly without polling or pulling.
- ACID transactional writes — every create, update, and archive is fully transactional. No partial saves, no corrupted state, no merge conflicts.
- Multi-agent concurrency — multiple agents and humans can read and write simultaneously across machines with full consistency guarantees. No locking, no race conditions.
- Zero infrastructure — no database to provision, no servers to manage. Convex handles storage, indexing, full-text search, and vector search out of the box.
- Isolated component tables — the 9 memory tables live in their own namespace (agentMemory:), completely isolated from your app's tables. No schema conflicts, no migrations to coordinate.
Testing
Using in Your Tests
import { convexTest } from "convex-test";
import agentMemoryTest from "@waynesutton/agent-memory/test";
const t = convexTest();
agentMemoryTest.register(t);

Running Component Tests
npm test # run all tests
npm run test:watch # watch mode

License
Apache-2.0
