
simple-engram

v0.4.0 · 263 downloads

Plug-and-play memory engine for AI agents — one import, any LLM, any storage, and your agent never forgets

Simple Engram

Zero-dependency memory engine for AI agents — one import, any LLM, any storage, and your agent never forgets.

npm install simple-engram


Agentic System Integration Guide — Complete workflow for integrating Engram into AI agents


What is Engram?

Engram gives your AI agent a persistent, intelligent memory that automatically:

  • ✅ Extracts important facts from conversations
  • ✅ Detects and filters duplicates (no redundancy)
  • ✅ Forgets old/unimportant information naturally
  • ✅ Recalls relevant memories when needed

Zero configuration required. Bring your own LLM (OpenAI, Anthropic, Ollama, etc.) and start using memory in 3 lines of code.


Quick Start

import { Engram } from 'simple-engram';

// 1. Bring your LLM (any provider works!)
const llm = async (prompt: string) => {
  // Your LLM call here
  return response;
};

// 2. Create memory engine
const mem = new Engram({ llm });

// 3. Extract & store memories from conversations
await mem.remember([
  { role: 'user', content: 'I prefer TypeScript over JavaScript' },
  { role: 'assistant', content: "Got it! I'll use TypeScript." }
]);

// 4. Recall relevant memories
const memories = await mem.recall('coding preferences');
console.log(memories[0].content);
// → "User prefers TypeScript over JavaScript"

// 5. Format as context for your agent
const context = await mem.context('preferences', { format: 'bullets' });
// → "- User prefers TypeScript over JavaScript [preference]"

Why Use Engram?

🎯 Built-in Intelligence

  • Surprise Detection — Only stores novel information (no duplicates)
  • Memory Decay — Ebbinghaus forgetting curve with access frequency boost
  • Smart Retrieval — Semantic search with configurable ranking weights
  • Auto-merging — Combines similar memories automatically
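The decay bullet above refers to the classic Ebbinghaus forgetting curve. As an illustrative sketch (not Engram's internal code — the exact formula and the access-frequency boost aren't shown in this README), exponential decay driven by the `decayHalfLifeDays` setting might look like:

```typescript
// Ebbinghaus-style exponential decay: a memory's effective importance
// halves every `halfLifeDays` days. Hypothetical sketch for illustration.
function decayedImportance(
  importance: number, // base importance in [0, 1]
  ageDays: number,    // days since the memory was stored
  halfLifeDays = 30   // cf. the decayHalfLifeDays config default
): number {
  return importance * Math.pow(0.5, ageDays / halfLifeDays);
}

decayedImportance(1.0, 0);  // → 1.0  (fresh memory, full weight)
decayedImportance(1.0, 30); // → 0.5  (one half-life elapsed)
decayedImportance(1.0, 60); // → 0.25 (two half-lives elapsed)
```

Memories whose decayed importance falls low enough become candidates for `forget()`, subject to the `maxRetentionDays` cap.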

🔌 Bring Your Own Everything

  • BYOLLM — Works with any LLM (OpenAI, Anthropic, Ollama, Groq, etc.)
  • BYOE — Optional embeddings for 10x better search (OpenAI, Voyage, local models)
  • BYOS — In-memory or SQLite storage (custom adapters supported)
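The README says custom storage adapters are supported but doesn't document the adapter interface, so the shape below is only a guess for illustration — the method names (`save`, `load`, `delete`) are hypothetical, not Engram's actual API:

```typescript
// Hypothetical custom store adapter backed by a plain array.
// The real adapter contract may differ; check the package's types.
interface MemoryRecord {
  id: string;
  content: string;
}

class ArrayStore {
  private records: MemoryRecord[] = [];

  async save(record: MemoryRecord): Promise<void> {
    this.records.push(record);
  }

  async load(): Promise<MemoryRecord[]> {
    return [...this.records]; // return a copy to avoid external mutation
  }

  async delete(id: string): Promise<void> {
    this.records = this.records.filter((r) => r.id !== id);
  }
}
```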

🪶 Zero Dependencies

  • No external packages in production
  • No network calls (you control the LLM/embeddings)
  • Works offline with local models

🧩 Simple API

await mem.remember(messages);  // Store memories
await mem.recall(query);       // Retrieve memories
await mem.forget();            // Prune old memories
await mem.context(query);      // Format for LLM prompts

When NOT to Use Engram

Don't use if you need:

  • Full conversation history (use a database instead)
  • Exact verbatim recall (Engram extracts facts, not transcripts)
  • Real-time streaming (memory extraction is async)
  • Complex queries (Engram is simple keyword/semantic search, not SQL)

Use Engram when you want:

  • Long-term memory across sessions
  • Automatic deduplication and summarization
  • Natural forgetting of outdated info
  • Personalization without managing a database

Cross-Session Memory

Engram automatically loads and compares against existing memories across process restarts.

import { Engram, SqliteStore } from 'simple-engram';

// Session 1
const memory = new Engram({
  llm,
  store: new SqliteStore({ path: './memory.db' })
});
await memory.init(); // Load existing

await memory.remember([{ role: 'user', content: 'I prefer TypeScript' }]);
await memory.close();

// ===== Process restarts =====

// Session 2 (hours/days later)
const memory2 = new Engram({
  llm,
  store: new SqliteStore({ path: './memory.db' }) // Same path!
});
await memory2.init(); // Loads "I prefer TypeScript" from Session 1

await memory2.remember([{ role: 'user', content: 'I also like Python' }]);
// ✅ Compares against TypeScript preference from Session 1
// ✅ Cross-session memory works automatically!

Key Points:

  • init() loads all existing memories from storage
  • New memories are compared against ALL existing memories
  • Works with SqliteStore and JsonFileStore (not MemoryStore)

Code Examples

OpenAI (Cloud)

import OpenAI from 'openai';
import { Engram } from 'simple-engram';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const mem = new Engram({
  llm: async (prompt) => {
    const response = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
    });
    return response.choices[0].message.content;
  }
});

await mem.remember([
  { role: 'user', content: 'My name is Alice and I love hiking' }
]);

const memories = await mem.recall('Alice');
// → [{ content: "User's name is Alice", category: "fact", ... }]

Anthropic Claude

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const mem = new Engram({
  llm: async (prompt) => {
    const response = await anthropic.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    });
    return response.content[0].text;
  }
});

Any LLM Provider

// Works with Ollama, LM Studio, or any local/cloud LLM
const mem = new Engram({
  llm: async (prompt) => {
    // Use your preferred LLM library here
    const response = await yourLLM.generate({ prompt });
    return response.text;
  }
});

With Embeddings (10x Better Search)

import OpenAI from 'openai';

const openai = new OpenAI();

const mem = new Engram({
  llm: async (prompt) => {
    const response = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
    });
    return response.choices[0].message.content;
  },
  embed: async (text) => {
    const response = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: text,
    });
    return response.data[0].embedding;
  }
});

await mem.remember([
  { role: 'user', content: 'I use Docker for containerization' }
]);

// Semantic search (finds "Docker" even if you search "containers")
const memories = await mem.recall('containerization tools');
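Embedding-based recall like the above typically works by comparing the query's vector against each stored memory's vector. As a sketch of the idea (not Engram's internal code), cosine similarity is the standard measure:

```typescript
// Cosine similarity between two equal-length vectors: 1 means identical
// direction, 0 means orthogonal (unrelated). Illustrative sketch only.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosine([1, 0], [1, 0]); // → 1 (same direction)
cosine([1, 0], [0, 1]); // → 0 (orthogonal)
```

This is why "containers" can match a memory about "Docker": their embedding vectors point in similar directions even though the keywords differ.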

Configuration Options

const mem = new Engram({
  llm,                           // Required: Your LLM function
  embed,                         // Optional: Embedding function (recommended!)

  // Surprise detection
  surpriseThreshold: 0.15,       // Novelty threshold (0-1, lower = more selective)

  // Memory decay
  decayHalfLifeDays: 30,         // Half-life for importance decay
  maxRetentionDays: 90,          // Max age before auto-deletion

  // Retrieval tuning
  defaultK: 5,                   // Number of memories to recall
  retrievalWeights: {            // Customize ranking
    relevance: 0.5,              // How well query matches content
    importance: 0.3,             // Base importance with decay
    recency: 0.2,                // How recent the memory is
    accessFrequency: 0.0,        // How often accessed
  },

  // Storage
  store: new MemoryStore(),      // In-memory (default) or SqliteStore
  maxMemories: 10000,            // Hard limit on memory count
});
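A plausible reading of `retrievalWeights` above is a weighted sum of per-memory scores, each normalized to [0, 1]. The exact combination Engram uses isn't spelled out in this README; this sketch just illustrates the assumed formula:

```typescript
// Assumed ranking: weighted sum of normalized component scores.
interface Scores {
  relevance: number;       // how well the query matches the content
  importance: number;      // base importance with decay applied
  recency: number;         // how recent the memory is
  accessFrequency: number; // how often the memory has been recalled
}

const weights: Scores = {
  relevance: 0.5,
  importance: 0.3,
  recency: 0.2,
  accessFrequency: 0.0,
};

function rank(scores: Scores, w: Scores): number {
  return (
    scores.relevance * w.relevance +
    scores.importance * w.importance +
    scores.recency * w.recency +
    scores.accessFrequency * w.accessFrequency
  );
}

rank({ relevance: 0.8, importance: 0.6, recency: 0.4, accessFrequency: 0 }, weights);
// 0.8*0.5 + 0.6*0.3 + 0.4*0.2 ≈ 0.66
```

Raising `recency` relative to `relevance` would bias recall toward newer memories; setting `accessFrequency` above 0 would favor memories the agent keeps coming back to.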

Documentation

📚 Detailed Guides:


License

MIT © Vaisakh


Contributing

Issues and PRs welcome! See CONTRIBUTING.md for guidelines.