
llm-session-memory v1.0.0

Lightweight session memory management for LLM agents—store, recall, and forget facts with automatic vector encoding support

Downloads: 107

LLM Session Memory

Lightweight session memory management for AI agents — store, recall, and forget facts with automatic persistence and optional vector encoding support.

Why This Tool?

LLM agents need memory:

  • Store facts during conversation and retrieve them later
  • Persist memories across sessions (file-based storage)
  • Recall with relevance scoring (text similarity or vectors)
  • Organize memories by category and tags
  • Expire old memories automatically
  • Zero dependencies — works everywhere Node.js runs
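The "relevance scoring (text similarity)" mentioned above can be pictured as simple token overlap between the query and each stored fact. This is a hypothetical sketch for intuition only; the library's actual scoring function is not documented here and may differ.

```javascript
// Hypothetical sketch of token-overlap relevance scoring.
// The actual algorithm inside llm-session-memory may differ.
function tokenize(text) {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Score a memory against a query: fraction of query tokens found in the memory.
function relevance(query, memoryText) {
  const q = tokenize(query);
  const m = tokenize(memoryText);
  let overlap = 0;
  for (const token of q) if (m.has(token)) overlap++;
  return overlap / Math.max(q.size, 1);
}

const score = relevance('Proxmox memory', 'OpenClaw runs on Proxmox with 64GB RAM');
console.log(score); // 0.5: one of two query tokens matches
```

A scheme like this needs no index or external service, which is consistent with the zero-dependency claim.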

Features

Session-based storage — Memories isolated by session ID
Text similarity search — Find memories by relevance
Vector support — Plug in custom vector encoders (OpenAI, local)
TTL & expiration — Auto-delete old facts
Category & tags — Organize memories hierarchically
Importance scoring — Prioritize critical facts
Access tracking — Know which facts are used most
Compact & prune — Clean up low-value memories
CLI + library — Terminal tool or Node.js module
Lightweight — ~30KB unpacked, zero external deps

Installation

npm install -g llm-session-memory

Or use in your Node.js project:

npm install llm-session-memory

Quick Start

Store a Memory

session-memory store "OpenClaw runs on Proxmox with 64GB RAM" \
  --category fact \
  --importance 0.9 \
  --tags homelab,infrastructure

Recall Memories

session-memory recall "Proxmox memory" --limit 5

Output:

Found 3 memories:

1. [fact] OpenClaw runs on Proxmox with 64GB RAM
   ID: mem_1740467730000_abc123 | Importance: 0.9 | Accessed: 2 times

Get Memory by ID

session-memory get mem_1740467730000_abc123

List All Memories

session-memory list --category fact --limit 10

Show Stats

session-memory stats

Output:

Session: default
Total memories: 47
Total accesses: 156
Avg importance: 0.75
Storage: 8.34 KB
Last saved: 2026-02-25T10:15:30.000Z

By category:
  fact: 32
  decision: 8
  procedure: 7

Update a Memory

session-memory update mem_1740467730000_abc123 \
  --text "Updated fact content" \
  --importance 0.95

Forget a Memory

session-memory forget mem_1740467730000_abc123

Forget by Query

session-memory forget-query "old procedure"

Usage as Library

Basic Setup

const SessionMemory = require('llm-session-memory');

const memory = new SessionMemory({
  sessionId: 'my-agent',
  maxMemories: 1000,
});

Store a Fact

const fact = memory.store(
  'User prefers Python over JavaScript',
  {
    category: 'preference',
    importance: 0.7,
    tags: ['user', 'language'],
  }
);

console.log(fact.id);  // mem_1740467730000_abc123

Recall with Relevance

const results = memory.recall('programming language preference', {
  limit: 5,
  category: 'preference',
  minImportance: 0.5,
});

results.forEach(mem => {
  console.log(`${mem.text} (importance: ${mem.importance})`);
});

Get Specific Memory

const mem = memory.get('mem_1740467730000_abc123');
console.log(mem.text);
console.log(mem.accessCount);

Update Memory

const updated = memory.update('mem_1740467730000_abc123', {
  text: 'User now prefers Rust',
  importance: 0.9,
});

List with Filters

const facts = memory.list({
  category: 'preference',
  tags: ['user'],
  limit: 20,
});

Statistics

const stats = memory.stats();
console.log(`Total memories: ${stats.totalMemories}`);
console.log(`Average importance: ${stats.avgImportance}`);
console.log(`Total accesses: ${stats.totalAccess}`);
console.log(`Storage size: ${stats.storageSize}`);

Clean Up

// Remove expired and low-priority memories
const removed = memory.compact();
console.log(`Removed ${removed} memories`);

// Clear entire session
memory.clear();

API Reference

Constructor Options

new SessionMemory({
  sessionId: 'default',           // Session identifier
  dataDir: '~/.llm-memory',       // Storage directory
  maxMemories: 1000,              // Max memories before pruning
  indexMode: 'simple',            // 'simple' or 'vector'
  vectorEncoder: null,            // Custom encoder function
})

Methods

store(text, meta?)

Store a new memory.

Parameters:

  • text (string) — Memory content
  • meta.category (string, default: 'other') — Memory category
  • meta.importance (number, 0-1, default: 0.5) — Priority score
  • meta.tags (string[], default: []) — Tags for organizing
  • meta.ttl (number) — Time-to-live in milliseconds

Returns: Memory object with ID

const mem = memory.store('Important fact', {
  category: 'critical',
  importance: 0.95,
  tags: ['alert', 'security'],
  ttl: 7 * 24 * 60 * 60 * 1000,  // 7 days
});
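The IDs shown throughout this README follow a `mem_<timestamp>_<suffix>` pattern. A plausible generator for that shape looks like the sketch below; this is an assumption about the format, not the library's actual code.

```javascript
// Hypothetical ID generator matching the mem_<timestamp>_<suffix> pattern
// seen in the examples; not necessarily how the library builds IDs.
function generateId() {
  const suffix = Math.random().toString(36).slice(2, 8); // short random base36 string
  return `mem_${Date.now()}_${suffix}`;
}

console.log(generateId()); // e.g. mem_1740467730000_abc123
```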

recall(query, options?)

Search memories by relevance.

Parameters:

  • query (string) — Search text
  • options.limit (number, default: 10) — Max results
  • options.category (string) — Filter by category
  • options.minImportance (number, default: 0) — Minimum importance
  • options.tags (string[]) — Filter by tags

Returns: Array of matching memories
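The filter options above can be thought of as narrowing the candidate set before scoring. The helper below is an illustrative sketch of that filtering logic (the function name and internals are assumptions, not the library's API):

```javascript
// Hypothetical sketch of recall() option filtering; the library's
// internals may differ.
function applyRecallFilters(memories, { category, minImportance = 0, tags } = {}) {
  return memories.filter(m =>
    (!category || m.category === category) &&
    m.importance >= minImportance &&
    (!tags || tags.every(t => m.tags.includes(t)))
  );
}

const candidates = [
  { text: 'A', category: 'preference', importance: 0.7, tags: ['user'] },
  { text: 'B', category: 'fact', importance: 0.9, tags: [] },
  { text: 'C', category: 'preference', importance: 0.3, tags: ['user'] },
];
const kept = applyRecallFilters(candidates, { category: 'preference', minImportance: 0.5 });
console.log(kept.map(m => m.text)); // [ 'A' ]
```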

get(id)

Retrieve memory by ID (increments access count).

Returns: Memory object or null

list(options?)

List all memories with filters.

Options:

  • category (string) — Filter by category
  • tags (string[]) — Filter by tags
  • limit (number, default: 100) — Max returned

Returns: Array of memories sorted by importance then date
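"Sorted by importance then date" presumably means a two-level comparator: higher importance first, newest first as the tie-break. A sketch of that ordering (the tie-break direction is an assumption):

```javascript
// Hypothetical comparator for "importance then date": higher importance
// first, then newest first. The library's exact tie-break is assumed.
function byImportanceThenDate(a, b) {
  if (b.importance !== a.importance) return b.importance - a.importance;
  return b.createdAt - a.createdAt;
}

const memories = [
  { text: 'old critical', importance: 0.9, createdAt: 100 },
  { text: 'new minor', importance: 0.4, createdAt: 300 },
  { text: 'new critical', importance: 0.9, createdAt: 200 },
];
console.log(memories.sort(byImportanceThenDate).map(m => m.text));
// [ 'new critical', 'old critical', 'new minor' ]
```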

update(id, updates)

Update memory fields.

Parameters:

  • id (string) — Memory ID
  • updates (object) — Fields to update

Returns: Updated memory or null if not found

forget(id)

Delete memory by ID.

Returns: Count of forgotten memories (0 or 1)

forget(query, { isQuery: true })

Delete all memories matching query text.

Returns: Count of forgotten memories

stats()

Get session statistics.

Returns: Object with:

  • totalMemories — Count
  • byCategory — Object with counts per category
  • totalAccess — Sum of all accessCount
  • avgImportance — Average importance score
  • storageSize — Human-readable file size
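Most of these fields are straightforward aggregates over the stored memories. As a sketch (field names follow the list above; the computation itself is an assumption):

```javascript
// Hypothetical derivation of stats() fields from the stored memories.
function computeStats(memories) {
  const byCategory = {};
  let totalAccess = 0;
  let importanceSum = 0;
  for (const m of memories) {
    byCategory[m.category] = (byCategory[m.category] || 0) + 1;
    totalAccess += m.accessCount;
    importanceSum += m.importance;
  }
  return {
    totalMemories: memories.length,
    byCategory,
    totalAccess,
    avgImportance: memories.length ? importanceSum / memories.length : 0,
  };
}

const stats = computeStats([
  { category: 'fact', accessCount: 2, importance: 1.0 },
  { category: 'fact', accessCount: 1, importance: 0.5 },
  { category: 'decision', accessCount: 0, importance: 0.75 },
]);
console.log(stats.totalMemories, stats.totalAccess, stats.avgImportance); // 3 3 0.75
```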

compact()

Remove expired memories and prune if over maxMemories limit.

Returns: Count of removed memories
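The two-phase behavior described above (drop expired, then prune over the cap) can be sketched as follows; keeping only the highest-importance memories when pruning is an assumption about the policy, not confirmed by the library:

```javascript
// Hypothetical sketch of compact(): drop expired memories, then prune the
// lowest-importance ones if the session exceeds maxMemories.
function compactMemories(memories, maxMemories, now = Date.now()) {
  // Phase 1: drop memories whose TTL has elapsed.
  let kept = memories.filter(m => !m.ttl || m.createdAt + m.ttl > now);
  // Phase 2: if still over the cap, keep only the most important ones.
  if (kept.length > maxMemories) {
    kept = kept.sort((a, b) => b.importance - a.importance).slice(0, maxMemories);
  }
  return kept;
}

const kept = compactMemories(
  [
    { text: 'expired', importance: 0.9, createdAt: 0, ttl: 10 },
    { text: 'low', importance: 0.2, createdAt: 50 },
    { text: 'high', importance: 0.8, createdAt: 60 },
  ],
  1,    // maxMemories
  100   // "now"
);
console.log(kept.map(m => m.text)); // [ 'high' ]
```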

clear()

Delete all memories in session.

Returns: Boolean success

Environment Variables

  • SESSION_ID — Session identifier (CLI only, default: 'default')

Example:

SESSION_ID=my-agent session-memory list

Use Cases

1. AI Agent with Persistent Memory

const SessionMemory = require('llm-session-memory');
const { Anthropic } = require('@anthropic-ai/sdk');

const memory = new SessionMemory({ sessionId: 'agent-001' });
const anthropic = new Anthropic();

async function chat(userMessage) {
  // Recall relevant memories
  const memories = memory.recall(userMessage, { limit: 5 });
  const context = memories.map(m => `- ${m.text}`).join('\n');
  
  // Build prompt with memory
  const messages = [{
    role: 'user',
    content: `Context:\n${context}\n\nUser: ${userMessage}`,
  }];
  
  // Get response
  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages,
  });
  
  // Store response in memory
  memory.store(
    `User: ${userMessage} | Agent: ${response.content[0].text}`,
    { category: 'conversation', importance: 0.6 }
  );
  
  return response.content[0].text;
}

2. CLI Tool with Memory

#!/bin/bash

# Store fact about system config
session-memory store "Server has 16GB RAM and 8 CPU cores" \
  --category system-config \
  --importance 0.9 \
  --tags server,hardware

# Later: recall when needed
session-memory recall "RAM CPU" --category system-config

3. Multi-Session Architecture

// Different sessions for different contexts
const userMemory = new SessionMemory({ sessionId: 'user-42' });
const systemMemory = new SessionMemory({ sessionId: 'system' });

// Store user preferences separately
userMemory.store('User timezone: UTC+2', { category: 'preference' });

// Store system facts separately
systemMemory.store('Database URL: localhost:5432', { category: 'config' });

4. Automatic Expiration

// Store temporary facts that expire after 1 hour
memory.store(
  'Session token: abc123',
  {
    category: 'session',
    ttl: 60 * 60 * 1000,  // 1 hour
    importance: 0.95,
  }
);

// Later, compact removes expired
memory.compact();

Performance

  • Storage: ~1KB per 100 memories
  • Recall time: <10ms for 1000 memories (simple text search)
  • Save time: <5ms per write (file-based)
  • Tested: 10,000+ memories, persistent sessions

Vector Encoder Integration

Using OpenAI Embeddings

const OpenAI = require('openai');

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const vectorEncoder = async (text) => {
  const response = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: text,
  });
  return response.data[0].embedding;
};

// Add similarity function
vectorEncoder.similarity = (v1, v2) => {
  // Cosine similarity
  const dotProduct = v1.reduce((sum, a, i) => sum + a * v2[i], 0);
  const mag1 = Math.sqrt(v1.reduce((sum, a) => sum + a * a, 0));
  const mag2 = Math.sqrt(v2.reduce((sum, a) => sum + a * a, 0));
  return dotProduct / (mag1 * mag2);
};

const memory = new SessionMemory({
  sessionId: 'vector-agent',
  vectorEncoder,
});

Troubleshooting

"Cannot find module" Error

npm install -g llm-session-memory

Memories Not Persisting

Check that ~/.llm-memory directory is writable:

ls -la ~/.llm-memory/

Slow Recall with Large Sessions

Compact to reduce memory count:

memory.compact();

Storage Growing Too Large

Reduce the maxMemories constructor option, or raise the importance threshold you apply before storing facts.

License

MIT

Author

botfit ([email protected])

Repository

https://github.com/openclaw/llm-session-memory

Related Tools

  • gateway-health-dashboard — Monitor OpenClaw gateway
  • cron-monitor-js — Track cron job health
  • llm-token-budget — Track token costs
  • openclaw-config-validator — Validate configs

Built for AI agents, LLM applications, and OpenClaw users worldwide 🧠