llm-session-memory
v1.0.0
Published
Lightweight session memory management for LLM agents—store, recall, and forget facts with automatic vector encoding support
Downloads
107
Maintainers
Readme
LLM Session Memory
Lightweight session memory management for AI agents — store, recall, and forget facts with automatic persistence and optional vector encoding support.
Why This Tool?
LLM agents need memory:
- Store facts during conversation and retrieve them later
- Persist memories across sessions (file-based storage)
- Recall with relevance scoring (text similarity or vectors)
- Organize memories by category and tags
- Expire old memories automatically
- Zero dependencies — works everywhere Node.js runs
Features
✅ Session-based storage — Memories isolated by session ID
✅ Text similarity search — Find memories by relevance
✅ Vector support — Plugin custom vector encoders (OpenAI, local)
✅ TTL & expiration — Auto-delete old facts
✅ Category & tags — Organize memories hierarchically
✅ Importance scoring — Prioritize critical facts
✅ Access tracking — Know which facts are used most
✅ Compact & prune — Clean up low-value memories
✅ CLI + library — Terminal tool or Node.js module
✅ Lightweight — ~30KB unpacked, zero external deps
Installation
npm install -g llm-session-memoryOr use in your Node.js project:
npm install llm-session-memoryQuick Start
Store a Memory
session-memory store "OpenClaw runs on Proxmox with 64GB RAM" \
--category fact \
--importance 0.9 \
--tags homelab,infrastructureRecall Memories
session-memory recall "Proxmox memory" --limit 5Output:
Found 3 memories:
1. [fact] OpenClaw runs on Proxmox with 64GB RAM
ID: mem_1740467730000_abc123 | Importance: 0.9 | Accessed: 2 timesGet Memory by ID
session-memory get mem_1740467730000_abc123List All Memories
session-memory list --category fact --limit 10Show Stats
session-memory statsOutput:
Session: default
Total memories: 47
Total accesses: 156
Avg importance: 0.75
Storage: 8.34 KB
Last saved: 2026-02-25T10:15:30.000Z
By category:
fact: 32
decision: 8
procedure: 7Update a Memory
session-memory update mem_1740467730000_abc123 \
--text "Updated fact content" \
--importance 0.95Forget a Memory
session-memory forget mem_1740467730000_abc123Forget by Query
session-memory forget-query "old procedure"Usage as Library
Basic Setup
const SessionMemory = require('llm-session-memory');
const memory = new SessionMemory({
sessionId: 'my-agent',
maxMemories: 1000,
});Store a Fact
const fact = memory.store(
'User prefers Python over JavaScript',
{
category: 'preference',
importance: 0.7,
tags: ['user', 'language'],
}
);
console.log(fact.id); // mem_1740467730000_abc123Recall with Relevance
const results = memory.recall('programming language preference', {
limit: 5,
category: 'preference',
minImportance: 0.5,
});
results.forEach(mem => {
console.log(`${mem.text} (importance: ${mem.importance})`);
});Get Specific Memory
const mem = memory.get('mem_1740467730000_abc123');
console.log(mem.text);
console.log(mem.accessCount);Update Memory
const updated = memory.update('mem_1740467730000_abc123', {
text: 'User now prefers Rust',
importance: 0.9,
});List with Filters
const facts = memory.list({
category: 'preference',
tags: ['user'],
limit: 20,
});Statistics
const stats = memory.stats();
console.log(`Total memories: ${stats.totalMemories}`);
console.log(`Average importance: ${stats.avgImportance}`);
console.log(`Total accesses: ${stats.totalAccess}`);
console.log(`Storage size: ${stats.storageSize}`);Clean Up
// Remove expired and low-priority memories
const removed = memory.compact();
console.log(`Removed ${removed} memories`);
// Clear entire session
memory.clear();API Reference
Constructor Options
new SessionMemory({
sessionId: 'default', // Session identifier
dataDir: '~/.llm-memory', // Storage directory
maxMemories: 1000, // Max memories before pruning
indexMode: 'simple', // 'simple' or 'vector'
vectorEncoder: null, // Custom encoder function
})Methods
store(text, meta?)
Store a new memory.
Parameters:
text(string) — Memory contentmeta.category(string, default: 'other') — Memory categorymeta.importance(number, 0-1, default: 0.5) — Priority scoremeta.tags(string[], default: []) — Tags for organizingmeta.ttl(number) — Time-to-live in milliseconds
Returns: Memory object with ID
const mem = memory.store('Important fact', {
category: 'critical',
importance: 0.95,
tags: ['alert', 'security'],
ttl: 7 * 24 * 60 * 60 * 1000, // 7 days
});recall(query, options?)
Search memories by relevance.
Parameters:
query(string) — Search textoptions.limit(number, default: 10) — Max resultsoptions.category(string) — Filter by categoryoptions.minImportance(number, default: 0) — Minimum importanceoptions.tags(string[]) — Filter by tags
Returns: Array of matching memories
get(id)
Retrieve memory by ID (increments access count).
Returns: Memory object or null
list(options?)
List all memories with filters.
Options:
category(string) — Filter by categorytags(string[]) — Filter by tagslimit(number, default: 100) — Max returned
Returns: Array of memories sorted by importance then date
update(id, updates)
Update memory fields.
Parameters:
id(string) — Memory IDupdates(object) — Fields to update
Returns: Updated memory or null if not found
forget(id)
Delete memory by ID.
Returns: Count of forgotten memories (0 or 1)
forget(query, { isQuery: true })
Delete all memories matching query text.
Returns: Count of forgotten memories
stats()
Get session statistics.
Returns: Object with:
totalMemories— CountbyCategory— Object with counts per categorytotalAccess— Sum of all accessCountavgImportance— Average importance scorestorageSize— Human-readable file size
compact()
Remove expired memories and prune if over maxMemories limit.
Returns: Count of removed memories
clear()
Delete all memories in session.
Returns: Boolean success
Environment Variables
SESSION_ID— Session identifier (CLI only, default: 'default')
Example:
SESSION_ID=my-agent session-memory listUse Cases
1. AI Agent with Persistent Memory
const SessionMemory = require('llm-session-memory');
const { Anthropic } = require('@anthropic-ai/sdk');
const memory = new SessionMemory({ sessionId: 'agent-001' });
const anthropic = new Anthropic();
async function chat(userMessage) {
// Recall relevant memories
const memories = memory.recall(userMessage, { limit: 5 });
const context = memories.map(m => `- ${m.text}`).join('\n');
// Build prompt with memory
const messages = [{
role: 'user',
content: `Context:\n${context}\n\nUser: ${userMessage}`,
}];
// Get response
const response = await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages,
});
// Store response in memory
memory.store(
`User: ${userMessage} | Agent: ${response.content[0].text}`,
{ category: 'conversation', importance: 0.6 }
);
return response.content[0].text;
}2. CLI Tool with Memory
#!/bin/bash
# Store fact about system config
session-memory store "Server has 16GB RAM and 8 CPU cores" \
--category system-config \
--importance 0.9 \
--tags server,hardware
# Later: recall when needed
session-memory recall "RAM CPU" --category system-config3. Multi-Session Architecture
// Different sessions for different contexts
const userMemory = new SessionMemory({ sessionId: 'user-42' });
const systemMemory = new SessionMemory({ sessionId: 'system' });
// Store user preferences separately
userMemory.store('User timezone: UTC+2', { category: 'preference' });
// Store system facts separately
systemMemory.store('Database URL: localhost:5432', { category: 'config' });4. Automatic Expiration
// Store temporary facts that expire after 1 hour
memory.store(
'Session token: abc123',
{
category: 'session',
ttl: 60 * 60 * 1000, // 1 hour
importance: 0.95,
}
);
// Later, compact removes expired
memory.compact();Performance
- Storage: ~1KB per 100 memories
- Recall time: <10ms for 1000 memories (simple text search)
- Save time: <5ms per write (file-based)
- Tested: 10,000+ memories, persistent sessions
Vector Encoder Integration
Using OpenAI Embeddings
const { Configuration, OpenAIApi } = require('openai');
const openai = new OpenAIApi(new Configuration({
apiKey: process.env.OPENAI_API_KEY,
}));
const vectorEncoder = async (text) => {
const response = await openai.createEmbedding({
model: 'text-embedding-3-small',
input: text,
});
return response.data.data[0].embedding;
};
// Add similarity function
vectorEncoder.similarity = (v1, v2) => {
// Cosine similarity
const dotProduct = v1.reduce((sum, a, i) => sum + a * v2[i], 0);
const mag1 = Math.sqrt(v1.reduce((sum, a) => sum + a * a, 0));
const mag2 = Math.sqrt(v2.reduce((sum, a) => sum + a * a, 0));
return dotProduct / (mag1 * mag2);
};
const memory = new SessionMemory({
sessionId: 'vector-agent',
vectorEncoder,
});Troubleshooting
"Cannot find module" Error
npm install -g llm-session-memoryMemories Not Persisting
Check that ~/.llm-memory directory is writable:
ls -la ~/.llm-memory/Slow Recall with Large Sessions
Compact to reduce memory count:
memory.compact();Storage Growing Too Large
Reduce maxMemories or increase importance threshold for storage.
License
MIT
Author
botfit ([email protected])
Repository
https://github.com/openclaw/llm-session-memory
Related Tools
- gateway-health-dashboard — Monitor OpenClaw gateway
- cron-monitor-js — Track cron job health
- llm-token-budget — Track token costs
- openclaw-config-validator — Validate configs
Built for AI agents, LLM applications, and OpenClaw users worldwide 🧠
