@presidio-dev/agent-memory
real long-term memory for ai agents. not rag. not a vector db. self-hosted.
⚠️ About This Package
This is a Presidio fork of OpenMemory (originally by nullure), based on version 1.3.2. We maintain this fork to add custom features and enhancements specific to our use cases while staying up-to-date with upstream improvements.
Original Project: OpenMemory by nullure
License: Apache 2.0
This Fork: https://github.com/kishan0725/OpenMemory
This package provides the same powerful cognitive memory engine as OpenMemory, with additional features and improvements by Presidio.
@presidio-dev/agent-memory is a cognitive memory engine for llms and agents.
- 🧠 real long-term memory (not just embeddings in a table)
- 💾 self-hosted, local-first (sqlite / postgres)
- 🧩 integrations: mcp, claude desktop, cursor, windsurf
- 📥 sources: github, notion, google drive, onedrive, web crawler
- 🔍 explainable traces (see why something was recalled)
your model stays stateless. your app stops being amnesiac.
quick start
```bash
npm install @presidio-dev/agent-memory
```

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory()
await mem.add("user likes spicy food", { user_id: "u1" })
const results = await mem.search("food?", { user_id: "u1" })
```

drop this into:
- node backends
- clis
- local tools
- anything that needs durable memory without running a separate service
that's it. you're now running a fully local cognitive memory engine 🎉
📥 sources (connectors)
ingest data from external sources directly into memory:
```ts
const github = await mem.source("github")
await github.connect({ token: "ghp_..." })
await github.ingest_all({ repo: "owner/repo" })
```

available sources: github, notion, google_drive, google_sheets, google_slides, onedrive, web_crawler
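once ingested, that content is recalled through the normal search api like anything stored via `add`. a minimal sketch combining the github connector above with a follow-up query (the query text and repo name are placeholders):

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory()

// connect and ingest a repository (same calls as the snippet above)
const github = await mem.source("github")
await github.connect({ token: "ghp_..." })
await github.ingest_all({ repo: "owner/repo" })

// ingested content is searched the same way as manually added memories
const hits = await mem.search("how is the build pipeline configured?", { limit: 5 })
for (const hit of hits) {
  console.log(hit.score, hit.content)
}
```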
features
✅ local-first - runs entirely on your machine, zero external dependencies
✅ multi-sector memory - episodic, semantic, procedural, emotional, reflective
✅ temporal knowledge graph - time-aware facts with validity periods
✅ memory decay - adaptive forgetting with sector-specific rates
✅ waypoint graph - associative recall paths for better retrieval
✅ explainable traces - see exactly why memories were recalled
✅ zero config - works out of the box with sensible defaults
cognitive sectors
openmemory automatically classifies content into 5 cognitive sectors:
| sector | description | examples | decay rate |
|--------|-------------|----------|------------|
| episodic | time-bound events & experiences | "yesterday i attended a conference" | medium |
| semantic | timeless facts & knowledge | "paris is the capital of france" | very low |
| procedural | skills, procedures, how-tos | "to deploy: build, test, push" | low |
| emotional | feelings, sentiment, mood | "i'm excited about this project!" | high |
| reflective | meta-cognition, insights | "i learn best through practice" | very low |
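classification happens automatically on `add`; a minimal sketch of checking where a memory landed, assuming the `primary_sector` and `sectors` fields returned by `add` (see the api reference below) carry the sector names from the table above:

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory("u1")

// a timeless fact should route to the semantic sector, an event to episodic;
// the exact labels depend on the classifier, so treat these as illustrative
const fact = await mem.add("paris is the capital of france")
const event = await mem.add("yesterday i attended a conference")

console.log(fact.primary_sector)   // e.g. "semantic"
console.log(event.primary_sector)  // e.g. "episodic"
```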
configuration
environment variables
```bash
# database
OM_DB_PATH=./data/om.db          # sqlite file path (default: ./data/openmemory.sqlite)
OM_DB_URL=sqlite://:memory:      # or use an in-memory db

# embeddings
OM_EMBEDDINGS=ollama             # synthetic | openai | gemini | ollama
OM_OLLAMA_URL=http://localhost:11434
OM_OLLAMA_MODEL=embeddinggemma   # or nomic-embed-text, mxbai-embed-large

# openai
OPENAI_API_KEY=sk-...
OM_OPENAI_MODEL=text-embedding-3-small

# gemini
GEMINI_API_KEY=AIza...

# performance tier
OM_TIER=deep                     # fast | smart | deep | hybrid
OM_VEC_DIM=768                   # vector dimension (must match model)

# metadata backend (optional)
OM_METADATA_BACKEND=postgres     # sqlite (default) | postgres
OM_PG_HOST=localhost
OM_PG_PORT=5432
OM_PG_DB=openmemory
OM_PG_USER=postgres
OM_PG_PASSWORD=...

# vector backend (optional)
OM_VECTOR_BACKEND=valkey         # default uses metadata backend
OM_VALKEY_URL=redis://localhost:6379
```

programmatic usage
```ts
import { Memory } from '@presidio-dev/agent-memory';

const mem = new Memory('user-123'); // optional user_id

// add memories
await mem.add(
  "user prefers dark mode",
  {
    tags: ["preference", "ui"],
    created_at: Date.now()
  }
);

// search
const results = await mem.search("user settings", {
  user_id: "user-123",
  limit: 10,
  sectors: ["semantic", "procedural"]
});

// get by id
const memory = await mem.get("uuid-here");

// wipe all data (useful for testing)
await mem.wipe();
```

performance tiers
- fast - synthetic embeddings (no api calls), instant
- smart - hybrid semantic + synthetic for balanced speed/accuracy
- deep - pure semantic embeddings for maximum accuracy
- hybrid - adaptive based on query complexity
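the tier is chosen through configuration rather than an api call; a minimal sketch, assuming `OM_TIER` is read from the environment at the time the `Memory` instance is constructed:

```ts
import { Memory } from "@presidio-dev/agent-memory"

// use the synthetic "fast" tier in tests to avoid embedding api calls;
// production environments can keep OM_TIER=deep or hybrid instead
process.env.OM_TIER = "fast"

const mem = new Memory("test-user")
await mem.add("fixture memory for a unit test")
```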
mcp server
@presidio-dev/agent-memory includes an mcp server for integration with claude desktop, cursor, windsurf, and other mcp clients:
```bash
npx @presidio-dev/agent-memory serve --port 3000
```

claude desktop / cursor / windsurf
```json
{
  "mcpServers": {
    "agent-memory": {
      "command": "npx",
      "args": ["@presidio-dev/agent-memory", "serve"]
    }
  }
}
```

available mcp tools:
- openmemory_query - search memories
- openmemory_store - add new memories
- openmemory_list - list all memories
- openmemory_get - get memory by id
- openmemory_reinforce - reinforce a memory
examples
```ts
// multi-user support
const mem = new Memory();
await mem.add("alice likes python", { user_id: "alice" });
await mem.add("bob likes rust", { user_id: "bob" });
const alicePrefs = await mem.search("what does alice like?", { user_id: "alice" });
// returns python results only

// temporal filtering
const recent = await mem.search("user activity", {
  startTime: Date.now() - 86400000, // last 24 hours
  endTime: Date.now()
});

// sector-specific queries
const facts = await mem.search("company info", { sectors: ["semantic"] });
const howtos = await mem.search("deployment", { sectors: ["procedural"] });
```

api reference
new Memory(user_id?: string)
create a new memory instance with optional default user_id.
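a minimal sketch of the two ways to construct an instance; the assumption here is that a user_id passed to the constructor acts as the default for calls that don't specify one, while a per-call user_id (as in the multi-user example above) overrides it:

```ts
import { Memory } from "@presidio-dev/agent-memory"

// scoped instance: calls default to user "alice"
const aliceMem = new Memory("alice")
await aliceMem.add("alice prefers dark mode")

// unscoped instance: pass user_id per call instead
const shared = new Memory()
await shared.add("bob prefers light mode", { user_id: "bob" })
```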
async add(content: string, metadata?: object): Promise<hsg_mem>
store a new memory.
parameters:
- content - text content to store
- metadata - optional metadata object:
  - user_id - user identifier
  - tags - array of tag strings
  - created_at - timestamp
  - any other custom fields
returns: memory object with id, primary_sector, sectors
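a short sketch of `add` with metadata; the tag values and the `project` field are placeholders (custom fields are allowed per the list above), and the fields read from the result are the ones listed as the return value:

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory("u1")
const created = await mem.add("to deploy: build, test, push", {
  tags: ["deployment", "howto"],
  created_at: Date.now(),
  project: "web-app"   // arbitrary custom field, stored alongside the memory
})

console.log(created.id)              // memory id
console.log(created.primary_sector)  // e.g. "procedural"
console.log(created.sectors)         // all sectors the memory was filed under
```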
async search(query: string, options?: object): Promise<hsg_q_result[]>
search for relevant memories.
parameters:
- query - search text
- options:
  - user_id - filter by user
  - limit - max results (default: 10)
  - sectors - array of sectors to search
  - startTime - filter memories after this timestamp
  - endTime - filter memories before this timestamp
returns: array of memory results with id, content, score, sectors, salience, tags, meta
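the result fields listed above can be consumed directly; a small sketch that filters hits by score before using them (the 0.5 threshold is an arbitrary illustration, not a documented default):

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory("u1")
const hits = await mem.search("deployment steps", { sectors: ["procedural"], limit: 5 })

for (const hit of hits) {
  if (hit.score < 0.5) continue   // arbitrary cut-off for illustration
  console.log(hit.id, hit.content, hit.sectors, hit.salience, hit.tags)
}
```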
async get(id: string): Promise<memory | null>
retrieve a memory by id.
async wipe(): Promise<void>
⚠️ danger: delete all memories, vectors, and waypoints. useful for testing.
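since `wipe` deletes everything, its typical place is test teardown; a minimal sketch (the id string is a placeholder):

```ts
import { Memory } from "@presidio-dev/agent-memory"

const mem = new Memory("test-user")
await mem.add("temporary fixture memory")

// ... run assertions against mem.search(...) here ...

// remove all memories, vectors, and waypoints once the test is done
await mem.wipe()

// after a wipe, lookups for old ids resolve to null
const gone = await mem.get("previously-stored-id")
console.log(gone) // null
```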
license
apache 2.0
