forgetai v0.1.0
forgetai

Memory with forgetting for AI applications.

"AI has photographic memory but no ability to forget. And forgetting is essential." -- Andrej Karpathy

AI agents remember everything. Every user preference, every stray question, every one-off request -- all stored with equal weight forever. This creates real problems:

  • Stereotyping from one data point. User asks about crypto once, now every response mentions blockchain.
  • Context pollution. Stale memories crowd out relevant ones, degrading retrieval quality.
  • Unbounded growth. Memory stores balloon until they're slow, expensive, or both.

Human memory doesn't work this way. We forget things. Unimportant details fade. Old information expires. And that's a feature, not a bug.

forgetai gives your AI the ability to forget.

Install

npm install forgetai

Quick Start

import { Memory } from 'forgetai';

const mem = new Memory({ ttl: '7d', maxSize: 1000 });

await mem.remember('user prefers dark mode', { importance: 0.9 });
const result = await mem.recall('what theme does the user like?');
// { text: 'user prefers dark mode', similarity: 0.92, importance: 0.9, age: '2h' }

Three lines. Your AI now has human-like memory.

How It Works

Every memory has three properties that determine its lifespan:

  1. TTL (time-to-live) -- memories expire after a set duration, just like short-term memory fading.
  2. Importance (0-1) -- how critical this memory is. High-importance memories survive longer.
  3. Decay -- importance decreases over time. Memories you never access fade away.

Retrieval uses semantic similarity -- you don't need the exact words, just the gist. Under the hood, forgetai uses character n-gram vectors for zero-dependency similarity matching. For production, swap in OpenAI embeddings or any vector provider.
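The n-gram approach is easy to picture. As a rough, self-contained sketch of the general technique (character trigrams plus cosine similarity -- not forgetai's actual source):

```typescript
// Build a character-trigram frequency vector for a string.
function ngrams(text: string, n = 3): Map<string, number> {
  const vec = new Map<string, number>();
  const s = text.toLowerCase();
  for (let i = 0; i <= s.length - n; i++) {
    const g = s.slice(i, i + n);
    vec.set(g, (vec.get(g) ?? 0) + 1);
  }
  return vec;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [g, v] of a) { dot += v * (b.get(g) ?? 0); na += v * v; }
  for (const v of b.values()) nb += v * v;
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

const q = ngrams('what theme does the user like?');
const related = cosine(q, ngrams('user prefers dark mode'));
const unrelated = cosine(q, ngrams('completely unrelated text'));
// `related` > `unrelated`: the query and memory share fragments like "use"/"ser"
```

Because matching happens on overlapping character fragments rather than whole words, the query and the stored memory only need to share word stems, not exact phrasing.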

API

new Memory(options?)

const mem = new Memory({
  ttl: '7d',           // default TTL for new memories
  maxSize: 1000,       // max memories before eviction
  decayRate: 0.95,     // importance *= 0.95 each prune cycle
  threshold: 0.85,     // minimum similarity for recall
  embed: customFn,     // optional: custom embedding function
});

mem.remember(text, options?)

Store a memory.

await mem.remember('user prefers dark mode', {
  ttl: '30d',          // override default TTL
  importance: 0.9,     // 0-1, default 0.5
  meta: { source: 'settings' },  // optional metadata
});

mem.recall(query, options?)

Find the most relevant memory by semantic similarity.

const result = await mem.recall('what theme does the user like?');
// {
//   text: 'user prefers dark mode',
//   similarity: 0.92,
//   importance: 0.9,
//   age: '2h',
//   accessCount: 3,
//   meta: { source: 'settings' }
// }

Returns null if nothing meets the similarity threshold.

mem.recallMany(query, options?)

Retrieve multiple relevant memories.

const results = await mem.recallMany('user preferences', {
  threshold: 0.5,
  limit: 5,
});

mem.forget(query)

Fuzzy-remove memories matching the query. Forgetting is aggressive -- it uses a lower similarity threshold to catch related memories.

const removed = await mem.forget('crypto');
// Removes anything related to crypto

mem.prune()

Apply decay and remove expired memories. Call this periodically (e.g., every N interactions, or on a timer).

const { expired, decayed } = mem.prune();
// expired: removed because TTL elapsed
// decayed: removed because importance fell below minimum
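The prune semantics can be modeled in a few lines. This is an illustrative sketch of the behavior described above (TTL expiry, per-cycle decay, eviction below a floor), with the `minImportance` cutoff as an assumed parameter -- not forgetai's actual internals:

```typescript
interface Mem { text: string; importance: number; createdAt: number; ttlMs: number; }

// Hypothetical prune: drop expired memories, decay the rest,
// then evict anything whose importance fell below the floor.
function prune(store: Mem[], decayRate = 0.95, minImportance = 0.1, now = Date.now()) {
  const expired = store.filter(m => now - m.createdAt >= m.ttlMs).length;
  const alive = store.filter(m => now - m.createdAt < m.ttlMs);
  for (const m of alive) m.importance *= decayRate;       // one decay cycle
  const kept = alive.filter(m => m.importance >= minImportance);
  return { kept, expired, decayed: alive.length - kept.length };
}
```

Under this model, a memory with importance 0.5 and the default `decayRate` of 0.95 drops below a 0.1 floor after roughly 32 prune cycles -- unless access bumps its importance back up first.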

mem.expiring(within)

See what's about to expire.

const expiring = mem.expiring('24h');
// [{ text, importance, ttlRemaining, age, status, accessCount }]

mem.inspect()

Full health check of all memories.

const all = mem.inspect();
// [{ text, importance, ttlRemaining, age, status: 'active'|'decaying'|'expiring', accessCount }]

mem.export() / Memory.from(snapshot)

Persist and restore memory state.

// Save
const snapshot = mem.export();
fs.writeFileSync('memory.json', JSON.stringify(snapshot));

// Restore
const data = JSON.parse(fs.readFileSync('memory.json', 'utf8'));
const restored = Memory.from(data);

Advanced: Custom Embeddings

The built-in n-gram vectorizer works surprisingly well for short text and requires zero dependencies. For production workloads with longer text or multilingual content, plug in real embeddings:

import { Memory } from 'forgetai';
import OpenAI from 'openai';

const openai = new OpenAI();

const mem = new Memory({
  threshold: 0.85,
  embed: async (text) => {
    const res = await openai.embeddings.create({
      model: 'text-embedding-3-small',
      input: text,
    });
    // Convert dense array to Map for forgetai's vector format
    const vec = new Map<string, number>();
    res.data[0].embedding.forEach((v, i) => vec.set(String(i), v));
    return vec;
  },
});

Why Not Just Use an Array?

| Feature | Array / Map | forgetai |
|---------|-------------|----------|
| TTL expiry | Manual | Built-in |
| Semantic search | No | Yes |
| Importance decay | No | Automatic |
| Size limits | Manual | Automatic eviction |
| Fuzzy forget | No | Yes |
| Persistence | Manual | export() / from() |

Zero Dependencies

forgetai has no runtime dependencies. The built-in similarity engine uses character n-gram vectors and cosine similarity -- the same approach used in production semantic caches. TypeScript types included.

License

MIT

Built by

Freedom Engineers -- forgetai on GitHub