@agents-eco/agentic-memory
v0.1.0
Graph-based agent memory — short-term and long-term memory with local or Voyage AI backends. Built for agents.eco.
npm install @agents-eco/agentic-memory
Why This Exists
Most agent memory is either a flat conversation buffer or an opaque vector database. Neither captures how memory actually works.
Agentic Memory is a graph-based memory system that separates short-term and long-term memory, with typed nodes, weighted relationships, decay, reinforcement, and hybrid search.
- Short-Term Memory (STM) — Bounded working context. Recent items with automatic expiry. Fast keyword search.
- Long-Term Memory (LTM) — Persistent graph with typed nodes (episodic, semantic, entity, goal, etc.) and weighted edges (causal, temporal, hierarchical). Supports decay and reinforcement.
- Two backends — Local (zero dependencies, offline) or Voyage AI (high-quality neural embeddings).
- Graph traversal — Find related memories by walking the graph, not just by vector similarity.
- Human-readable persistence — Stored as JSON files you can inspect and version control.
Quick Start
Local Backend (no API key needed)
import { AgenticMemory } from "@agents-eco/agentic-memory";
const memory = new AgenticMemory({ backend: "local" });
// Add memories
await memory.add("User's name is Alice", "semantic", 0.8);
await memory.add("Alice prefers dark mode", "semantic", 0.6);
await memory.addEpisode("User asked about the weather in NYC");
await memory.addGoal("Help Alice plan her trip to Tokyo");
// Search
const results = await memory.search("What is the user's name?");
console.log(results[0].node.content); // "User's name is Alice"
// Build context for LLM prompt injection
const context = await memory.buildContext("Tell me about Alice");
console.log(context);
Voyage AI Backend (high-quality embeddings)
import { AgenticMemory } from "@agents-eco/agentic-memory";
const memory = new AgenticMemory({
backend: "voyage",
voyageApiKey: process.env.VOYAGE_API_KEY!,
voyageModel: "voyage-3-lite", // 512 dims, fast and cheap
});
await memory.add("The project deadline is March 15th", "semantic", 0.9);
const results = await memory.search("When is the deadline?");
Architecture
┌─────────────────────────────────────────────────────────┐
│ AgenticMemory │
│ │
│ ┌─────────────────────┐ ┌──────────────────────────┐ │
│ │ Short-Term Memory │ │ Long-Term Memory │ │
│ │ │ │ │ │
│ │ Bounded buffer │ │ ┌─────────────────────┐ │ │
│ │ TTL-based expiry │ │ │ Memory Graph │ │ │
│ │ Keyword search │ │ │ │ │ │
│ │ Importance ranking │ │ │ Nodes (typed): │ │ │
│ │ │ │ │ - episodic │ │ │
│ │ ┌───────────────┐ │ │ │ - semantic │ │ │
│ │ │ Consolidation │──┼──┼─▶│ - entity │ │ │
│ │ │ (STM → LTM) │ │ │ │ - goal │ │ │
│ │ └───────────────┘ │ │ │ - observation │ │ │
│ │ │ │ │ - procedural │ │ │
│ └─────────────────────┘ │ │ - emotional │ │ │
│ │ │ │ │ │
│ │ │ Edges (weighted): │ │ │
│ │ │ - related_to │ │ │
│ │ │ - caused_by │ │ │
│ │ │ - leads_to │ │ │
│ │ │ - part_of │ │ │
│ │ │ - similar_to │ │ │
│ │ │ - mentioned_in │ │ │
│ │ └─────────────────────┘ │ │
│ │ │ │
│ │ Decay + Reinforcement │ │
│ │ Hybrid Search │ │
│ │ Graph Traversal │ │
│ └──────────────────────────┘ │
│ │
│ ┌──────────────────┐ ┌──────────────────────────────┐ │
│ │ Embedding Backend │ │ Storage Backend │ │
│ │ │ │ │ │
│ │ Local (hash) │ │ Local (JSON files) │ │
│ │ Voyage AI │ │ Custom (implement iface) │ │
│ └──────────────────┘ └──────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Memory Types
Node Types
| Type | Description | Example |
|------|-------------|---------|
| episodic | Specific events and conversations | "User asked about weather in NYC" |
| semantic | Facts, knowledge, extracted info | "User's name is Alice" |
| entity | People, places, things | "Alice", "Tokyo", "Project X" |
| goal | Objectives, tasks, intentions | "Help user plan trip to Tokyo" |
| observation | Agent observations about the world | "User seems frustrated today" |
| procedural | How-to, skills, patterns | "To check weather, use the weather API" |
| emotional | Sentiment, preferences, reactions | "User prefers concise responses" |
Edge Types (Relations)
| Relation | Description |
|----------|-------------|
| related_to | General association |
| caused_by | Causal relationship |
| leads_to | Sequential / temporal |
| part_of | Hierarchical |
| contradicts | Conflicting information |
| reinforces | Supporting information |
| derived_from | Extracted / inferred from |
| similar_to | Semantic similarity |
| mentioned_in | Entity mentioned in episode |
| precedes / follows | Temporal ordering |
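These typed edges are what `includeRelated` traversal walks: starting from a search hit, neighbors are collected hop by hop. A minimal, library-independent sketch of that walk (the types and function names here are illustrative, not the package's internals):

```typescript
// Illustrative types; the real package's internals may differ.
type Relation = "related_to" | "caused_by" | "leads_to" | "part_of" | "mentioned_in";

interface Edge { source: string; target: string; relation: Relation; weight: number }

// Breadth-first walk: collect node ids reachable from `start`
// within `depth` hops, following edges in either direction.
function neighbors(edges: Edge[], start: string, depth: number): Set<string> {
  const seen = new Set<string>([start]);
  let frontier = [start];
  for (let d = 0; d < depth; d++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const e of edges) {
        const other = e.source === id ? e.target : e.target === id ? e.source : null;
        if (other && !seen.has(other)) {
          seen.add(other);
          next.push(other);
        }
      }
    }
    frontier = next;
  }
  seen.delete(start);
  return seen;
}
```

With `traversalDepth: 1` only direct neighbors of a hit come back; `traversalDepth: 2` also pulls in their neighbors, which is how a fact about "Alice" can surface the "Alice" entity node and everything linked to it.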
STM (Short-Term Memory)
The working context buffer. Bounded, fast, and ephemeral.
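Conceptually, an STM like this is just a capped, time-limited buffer. A minimal sketch of the mechanics, assuming lowest-importance-first eviction when full (an illustrative policy, not the package's actual implementation):

```typescript
interface StmEntry { content: string; importance: number; createdAt: number }

// A bounded buffer: entries expire after ttlMs, and when the buffer
// is full, the least important entry is evicted to make room.
class TinyStm {
  private entries: StmEntry[] = [];
  constructor(private capacity: number, private ttlMs: number) {}

  add(content: string, importance: number, now = Date.now()): void {
    this.prune(now);
    if (this.entries.length >= this.capacity) {
      this.entries.sort((a, b) => a.importance - b.importance);
      this.entries.shift(); // drop the lowest-importance entry
    }
    this.entries.push({ content, importance, createdAt: now });
  }

  getRecent(n: number, now = Date.now()): StmEntry[] {
    this.prune(now);
    return [...this.entries].sort((a, b) => b.createdAt - a.createdAt).slice(0, n);
  }

  private prune(now: number): void {
    this.entries = this.entries.filter((e) => now - e.createdAt < this.ttlMs);
  }
}
```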
import { ShortTermMemory } from "@agents-eco/agentic-memory";
const stm = new ShortTermMemory({
capacity: 20, // max items
ttlMs: 30 * 60 * 1000, // 30 min expiry
});
stm.add("User just asked about pricing", "episodic", 0.7);
stm.add("Current topic is billing", "observation", 0.5);
// Get recent context
const recent = stm.getRecent(5);
// Search
const results = stm.search("pricing");
// Build context string for prompt injection
const context = stm.buildContext();
LTM (Long-Term Memory)
Persistent graph with decay, reinforcement, and hybrid search.
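The decay/reinforcement idea can be sketched independently of the library: each decay pass multiplies importance down, each access nudges it back up, and nodes that fall below `minImportance` are pruned. The exponential formula and the boost value below are assumptions for illustration, not the package's exact math:

```typescript
interface MemNode { content: string; importance: number }

// One decay pass: every node loses a fraction of its importance,
// and nodes below the floor are pruned.
function decayPass(nodes: MemNode[], decayRate: number, minImportance: number): MemNode[] {
  return nodes
    .map((n) => ({ ...n, importance: n.importance * (1 - decayRate) }))
    .filter((n) => n.importance >= minImportance);
}

// Reinforcement: accessing a memory pushes importance back toward 1.
function reinforce(node: MemNode, boost = 0.1): MemNode {
  return { ...node, importance: Math.min(1, node.importance + boost) };
}
```

Under this model, with `decayRate: 0.01` a never-reinforced node at importance 0.5 survives roughly 160 decay passes before crossing `minImportance: 0.1`, while regularly accessed memories stay alive indefinitely.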
import { LongTermMemory } from "@agents-eco/agentic-memory";
import { LocalEmbedding } from "@agents-eco/agentic-memory";
const ltm = new LongTermMemory(
{ decayRate: 0.01, minImportance: 0.1, maxNodes: 10000 },
new LocalEmbedding()
);
// Add memories
const fact = await ltm.add("Alice lives in New York", "semantic", 0.7);
const entity = await ltm.add("Alice", "entity", 0.6);
ltm.link(fact.id, entity.id, "mentioned_in");
// Add facts with auto entity linking
const { node, entityNodes } = await ltm.addFact(
"Alice is a software engineer at Acme Corp",
["Alice", "Acme Corp"]
);
// Search with graph traversal
const results = await ltm.search("Where does Alice work?", {
limit: 5,
includeRelated: true,
traversalDepth: 2,
});
// Decay old memories
ltm.decay();
Consolidation (STM to LTM)
Important short-term memories are promoted to long-term storage.
const memory = new AgenticMemory({ backend: "local" });
// Add several memories to STM
await memory.add("User mentioned they like sushi", "semantic", 0.7);
await memory.add("User asked about Tokyo restaurants", "episodic", 0.5);
await memory.add("Random small talk", "episodic", 0.2);
// Consolidate important items to LTM
const count = await memory.consolidate(0.4); // min importance threshold
console.log(`Consolidated ${count} memories to LTM`);
// "Random small talk" stays in STM (too low importance)
// The other two are now in the LTM graph with temporal links
Hybrid Search
Combines vector similarity, keyword matching, recency, and importance scoring.
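How those four signals combine can be sketched as a weighted sum; the weights below are illustrative, not the package's actual tuning:

```typescript
// Combine the four signals into one relevance score in [0, 1].
// Each input is assumed to already be normalized to [0, 1].
function hybridScore(
  vectorSim: number,   // cosine similarity to the query embedding
  keywordSim: number,  // keyword-overlap score
  recency: number,     // 1 = just created, decaying toward 0
  importance: number,  // the node's stored importance
): number {
  const w = { vector: 0.5, keyword: 0.2, recency: 0.15, importance: 0.15 };
  return (
    w.vector * vectorSim +
    w.keyword * keywordSim +
    w.recency * recency +
    w.importance * importance
  );
}
```

Results scoring below `minScore` are dropped, and the rest are returned in descending order.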
const results = await memory.search("What does Alice like?", {
limit: 5,
types: ["semantic", "episodic"], // filter by type
minScore: 0.3, // minimum relevance
includeRelated: true, // include graph neighbors
traversalDepth: 2, // how far to walk the graph
method: "hybrid", // vector + keyword + recency
});
for (const r of results) {
console.log(`[${r.method}] (${r.score.toFixed(2)}) ${r.node.content}`);
if (r.related) {
for (const rel of r.related) {
console.log(` └─ ${rel.content}`);
}
}
}
Persistence
Memory is saved as JSON files you can inspect and version control.
const memory = new AgenticMemory({
backend: "local",
storageDir: "./.agent/memory",
namespace: "my-agent", // creates graph-my-agent.json
});
// Auto-loads on first operation
await memory.add("Something important", "semantic", 0.8);
// Explicit save
await memory.save();
// Stats
console.log(memory.stats());
// { stm: 1, ltm: { nodes: 1, edges: 0, byType: { semantic: 1 } } }
Custom Backends
Custom Embedding Backend
import { EmbeddingBackend } from "@agents-eco/agentic-memory";
// Assumes an OpenAI API key in the environment.
const apiKey = process.env.OPENAI_API_KEY!;

class OpenAIEmbedding implements EmbeddingBackend {
  name = "openai";
  dimension = 1536;
  async embed(text: string): Promise<number[]> {
    const [embedding] = await this.embedBatch([text]);
    return embedding;
  }
  async embedBatch(texts: string[]): Promise<number[][]> {
    // The OpenAI embeddings endpoint accepts an array of inputs.
    const res = await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
      body: JSON.stringify({ model: "text-embedding-3-small", input: texts }),
    });
    const data = await res.json();
    return data.data.map((d: { embedding: number[] }) => d.embedding);
  }
}
const memory = new AgenticMemory({
backend: "local",
embedding: new OpenAIEmbedding(),
});
Custom Storage Backend
import { StorageBackend, SerializedGraph } from "@agents-eco/agentic-memory";
// Assumes a connected `redis` client (e.g. from the "redis" npm package).
class RedisStorage implements StorageBackend {
name = "redis";
async save(graph: SerializedGraph): Promise<void> {
await redis.set("memory:graph", JSON.stringify(graph));
}
async load(): Promise<SerializedGraph | null> {
const raw = await redis.get("memory:graph");
return raw ? JSON.parse(raw) : null;
}
async exists(): Promise<boolean> {
return (await redis.exists("memory:graph")) === 1;
}
}
const memory = new AgenticMemory({
backend: "local",
storage: new RedisStorage(),
});
Integration with Open Agentic Framework
Use as the memory backend for @agents-eco/open-agentic-framework:
import { Agent } from "@agents-eco/open-agentic-framework";
import { AgenticMemory } from "@agents-eco/agentic-memory";
const memory = new AgenticMemory({ backend: "local" });
// Implement the MemoryStore interface
const memoryStore = {
async add(entry) {
const { stmEntry } = await memory.add(entry.content, entry.type as any, 0.5);
return { id: stmEntry.id, content: entry.content, type: entry.type, timestamp: stmEntry.createdAt };
},
async search(query, limit) {
const results = await memory.search(query, { limit });
return results.map((r) => ({
id: r.node.id,
content: r.node.content,
type: r.node.type,
timestamp: r.node.createdAt,
}));
},
async list(limit) {
const entries = memory.stm.getRecent(limit);
return entries.map((e) => ({
id: e.id,
content: e.content,
type: e.type,
timestamp: e.createdAt,
}));
},
async clear() {
await memory.clear();
},
};
const agent = new Agent({
name: "memory-agent",
systemPrompt: "You remember everything.",
provider: { name: "venice", apiKey: "...", baseUrl: "https://api.venice.ai/api/v1", defaultModel: "qwen3-4b" },
memory: memoryStore,
});
API Reference
AgenticMemory
| Method | Description |
|--------|-------------|
| add(content, type?, importance?, metadata?) | Add to STM (and LTM if important) |
| addEpisode(content, importance?) | Add episodic memory with temporal linking |
| addFact(content, entities?, importance?) | Add semantic memory with entity extraction |
| addObservation(content, importance?) | Add an observation |
| addGoal(content, importance?) | Add a goal |
| link(sourceId, targetId, relation, weight?) | Create a relationship in LTM |
| search(query, options?) | Hybrid search across STM + LTM |
| buildContext(query?) | Build context string for prompt injection |
| consolidate(minImportance?) | Promote important STM entries to LTM |
| decay() | Apply decay to LTM nodes |
| save() | Persist to storage |
| load() | Load from storage |
| clear() | Clear all memory |
| stats() | Get memory statistics |
SearchOptions
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| limit | number | 5 | Max results |
| types | MemoryNodeType[] | all | Filter by node type |
| minScore | number | 0.0 | Minimum relevance score |
| includeRelated | boolean | false | Include graph neighbors |
| traversalDepth | number | 1 | Graph walk depth |
| method | "vector" \| "keyword" \| "hybrid" | "hybrid" | Search method |
Contributing
We welcome contributions. This project is early and there is room to shape its direction.
- Add a storage backend — SQLite, Redis, PostgreSQL, S3
- Add an embedding backend — OpenAI, Cohere, local transformers
- Improve search — better scoring, re-ranking, query expansion
- Visualization — graph visualization tools for debugging memory
- Report issues — bug reports and feature requests help us prioritize
License
MIT — agents.eco
