
@zensation/algorithms

v0.2.1


Neuroscience-inspired memory algorithms: FSRS spaced repetition, Hebbian learning, Ebbinghaus decay, emotional tagging, Bayesian confidence propagation. Zero dependencies.


@zensation/algorithms

Neuroscience-inspired memory algorithms for AI agents. Pure TypeScript. Zero dependencies.


What's Inside

Seven battle-tested algorithms extracted from a production AI system (170K+ LOC, 9,200+ tests):

| Algorithm | Inspired By | What It Does |
|-----------|-------------|--------------|
| FSRS | Free Spaced Repetition Scheduler | Optimal review scheduling — your AI never forgets what matters |
| Ebbinghaus | Ebbinghaus (1885) | Exponential forgetting curves with personalized decay profiles |
| Emotional | Amygdala modulation (Cahill & McGaugh, 1998) | Tag memories with arousal/valence/significance — emotional memories decay 3x slower |
| Hebbian | Hebb's Rule (1949) | "Neurons that fire together wire together" — co-activation strengthening with homeostatic normalization |
| Bayesian | Bayesian belief propagation | Confidence propagation through knowledge graphs |
| Context Retrieval | Encoding Specificity (Tulving, 1973) | Context-dependent retrieval boost — memories recalled better in similar contexts |
| Similarity | NLP heuristics | Negation detection (EN/DE), Jaccard similarity, text analysis |

Quick Start

```shell
npm install @zensation/algorithms
```

```typescript
import {
  // FSRS Spaced Repetition
  initFromDecayClass,
  getRetrievability,
  updateAfterRecall,
  scheduleNextReview,

  // Emotional Memory
  tagEmotion,
  computeEmotionalWeight,

  // Hebbian Learning
  computeHebbianStrengthening,
  computeHebbianDecay,

  // Bayesian Confidence
  propagateForRelation,
} from '@zensation/algorithms';

// 1. Create a memory with FSRS scheduling
const memory = initFromDecayClass('normal_decay');
console.log(memory);
// { difficulty: 5, stability: 7, nextReview: Date }

// 2. Check if user can recall it
const retention = getRetrievability(memory);
console.log(`Recall probability: ${(retention * 100).toFixed(1)}%`);

// 3. User recalled it with grade 4 (good)
const updated = updateAfterRecall(memory, 4, retention);
console.log(`Next review: ${updated.nextReview}`);
// Stability increased — next review is further out

// 4. Tag emotional significance
const emotion = tagEmotion('I just got promoted! This is amazing!');
console.log(emotion);
// { sentiment: 0.8+, arousal: 0.7+, valence: 0.9+, significance: 0.85 }

const weight = computeEmotionalWeight(emotion);
console.log(`Decay multiplier: ${weight.decayMultiplier}x`);
// ~2.7x — this memory will decay nearly 3x slower

// 5. Strengthen knowledge graph edges via Hebbian learning
const newWeight = computeHebbianStrengthening(1.0);
// 1.09 — asymptotic growth toward MAX_WEIGHT (10.0)

// 6. Propagate confidence through relations
const newConfidence = propagateForRelation(
  0.5,   // base confidence
  0.8,   // source confidence
  1.0,   // edge weight
  'supports'
);
// 0.9 — supporting evidence increases confidence
```

Tree-Shakeable Imports

Import only what you need:

```typescript
// Just FSRS
import { updateAfterRecall, getRetrievability } from '@zensation/algorithms/fsrs';

// Just emotional tagging
import { tagEmotion } from '@zensation/algorithms/emotional';

// Just Hebbian dynamics
import { computeHebbianStrengthening } from '@zensation/algorithms/hebbian';
```

Why These Algorithms?

FSRS vs SM-2

SM-2 (SuperMemo 2, 1990) uses fixed multipliers. FSRS uses the desirable difficulty principle: reviewing when retention is low gives a bigger stability boost. The result? 30% fewer reviews for the same retention.
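The difference can be sketched in a few lines (the coefficients here are illustrative assumptions, not the package's internals): SM-2 multiplies stability by a fixed factor on every successful review, while an FSRS-style update scales the gain by how close the memory was to being forgotten.

```typescript
// SM-2 style: stability grows by a fixed multiplier after each success.
function sm2Update(stability: number, multiplier = 2.5): number {
  return stability * multiplier;
}

// FSRS style: the stability gain grows as retrievability at review time
// drops (desirable difficulty) — recalling a nearly-forgotten memory
// strengthens it more than recalling a fresh one.
function fsrsUpdate(stability: number, retrievability: number): number {
  const gain = 1 + 2.0 * (1 - retrievability); // illustrative coefficient
  return stability * gain;
}

const hardRecall = fsrsUpdate(7, 0.6);  // recalled when nearly forgotten
const easyRecall = fsrsUpdate(7, 0.95); // recalled when still fresh
console.log(hardRecall > easyRecall); // harder recalls strengthen more
```

Because low-retention reviews yield outsized gains, an FSRS scheduler can wait longer between reviews without losing retention, which is where the fewer-reviews saving comes from.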

Emotional Memory

Human brains consolidate emotional memories more strongly (flashbulb memory effect). This module gives your AI the same capability: memories tagged with high arousal + significance get up to 3x longer decay half-life.
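As a rough sketch of that behaviour (the field names mirror the tag shape shown in Quick Start, but the blending formula and coefficients are assumptions): blend arousal and significance into a decay multiplier capped at 3x, so neutral memories keep a 1x baseline.

```typescript
interface EmotionTag {
  arousal: number;      // 0..1, physiological intensity
  valence: number;      // -1..1, positive vs negative
  significance: number; // 0..1, personal importance
}

// Illustrative consolidation rule: average arousal and significance,
// then map the result onto a 1x..3x decay-slowdown range.
function decayMultiplier(tag: EmotionTag): number {
  const intensity = 0.5 * tag.arousal + 0.5 * tag.significance; // 0..1
  return Math.min(3, 1 + 2 * intensity); // neutral memories stay at 1x
}

const promotion: EmotionTag = { arousal: 0.7, valence: 0.9, significance: 0.85 };
console.log(decayMultiplier(promotion).toFixed(2)); // ~2.55x slower decay
```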

Hebbian Learning

Knowledge graph edges that are frequently co-activated grow stronger. Edges that are never used decay and get pruned. The result is a self-organizing knowledge structure that reflects actual usage patterns.
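A minimal sketch of those edge dynamics (constants are assumptions, except `MAX_WEIGHT`, which the Quick Start comment names): strengthening approaches a cap asymptotically, decay is exponential, and edges falling below a floor are pruned.

```typescript
const MAX_WEIGHT = 10.0;       // cap named in the Quick Start example
const PRUNE_THRESHOLD = 0.05;  // assumed pruning floor

// Asymptotic strengthening: each co-activation closes a fixed fraction
// of the gap to MAX_WEIGHT, so growth slows as the edge saturates.
function strengthen(weight: number, rate = 0.01): number {
  return weight + rate * (MAX_WEIGHT - weight);
}

// Exponential decay with pruning: unused edges shrink each epoch and are
// dropped (null) once they fall below the threshold.
function decay(weight: number, lambda = 0.2): number | null {
  const w = weight * Math.exp(-lambda);
  return w < PRUNE_THRESHOLD ? null : w;
}

console.log(strengthen(1.0)); // ≈1.09 with a 1% rate, as in Quick Start
console.log(decay(0.01));     // null — edge pruned
```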

Context-Dependent Retrieval

Tulving showed that memory recall improves when the retrieval context matches the encoding context. This module captures temporal + task context at encoding time and provides up to a 30% retrieval boost when contexts match.
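The mechanism can be sketched like this (the `Context` shape and the similarity rule are assumptions for illustration): snapshot context at encoding time, score its overlap with the recall-time context, and scale the boost up to the ~30% ceiling described above.

```typescript
interface Context {
  hourOfDay: number;  // temporal context
  taskType: string;   // task context
  tags: string[];     // topical context
}

// Jaccard overlap of topical tags: |A ∩ B| / |A ∪ B|.
function jaccard(a: string[], b: string[]): number {
  const setA = new Set(a);
  const setB = new Set(b);
  const inter = Array.from(setA).filter((x) => setB.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

// Boost retrieval when the current context resembles the encoding context,
// with a bonus for an exact task-type match; capped at a 1.3x multiplier.
function contextBoost(encoding: Context, current: Context): number {
  let sim = jaccard(encoding.tags, current.tags);
  if (encoding.taskType === current.taskType) sim = Math.min(1, sim + 0.3);
  return 1 + 0.3 * sim;
}

const encoded: Context = { hourOfDay: 9, taskType: 'coding', tags: ['api', 'auth'] };
const recall: Context = { hourOfDay: 10, taskType: 'coding', tags: ['auth'] };
console.log(contextBoost(encoded, recall)); // > 1 — matching context helps
```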

API Reference

FSRS (@zensation/algorithms/fsrs)

| Function | Description |
|----------|-------------|
| `initFromDecayClass(class, emotionalWeight?)` | Create initial state from decay class |
| `initFromSM2(stability)` | Convert SM-2 stability to FSRS state |
| `getRetrievability(state, now?)` | Calculate current recall probability |
| `scheduleNextReview(state, targetRetention?, now?)` | Schedule next optimal review |
| `updateAfterRecall(state, grade, retrievability, now?)` | Update after successful recall (grade 1-5) |
| `updateAfterForgot(state, retrievability, now?)` | Update after failed recall |
| `updateStabilityCompat(stability, success, multiplier?)` | Drop-in SM-2 replacement |
| `getRetentionProbabilityCompat(lastAccess, stability, multiplier?)` | Drop-in Ebbinghaus replacement |

Ebbinghaus (@zensation/algorithms/ebbinghaus)

| Function | Description |
|----------|-------------|
| `calculateRetention(lastAccess, stability, emotionalMultiplier?)` | Full retention analysis |
| `updateStability(stability, success)` | SM-2 stability update |
| `getRepetitionCandidates(facts, threshold?)` | Find facts due for review |
| `calculateOptimalInterval(stability, targetRetention?)` | Optimal review interval |
| `batchCalculateRetention(facts)` | Efficient batch retention |
| `learnDecayProfile(history)` | Personalized decay curves |
| `calculatePersonalizedRetention(lastAccess, stability, profile)` | User-specific retention |

Emotional (@zensation/algorithms/emotional)

| Function | Description |
|----------|-------------|
| `tagEmotion(text, contextDomain?)` | Multi-dimensional emotion analysis |
| `computeEmotionalWeight(tag)` | Consolidation weight + decay multiplier |
| `isEmotionallySignificant(text, threshold?)` | Quick significance check |
| `computeContextualValence(text, domain)` | Domain-adjusted valence |

Hebbian (@zensation/algorithms/hebbian)

| Function | Description |
|----------|-------------|
| `computeHebbianStrengthening(weight)` | Asymptotic edge strengthening |
| `computeHebbianDecay(weight)` | Exponential decay with pruning |
| `computeHomeostaticNormalization(weights, targetSum)` | Normalize weight distribution |
| `generatePairs(items)` | Generate C(n,2) co-activation pairs |

Bayesian (@zensation/algorithms/bayesian)

| Function | Description |
|----------|-------------|
| `propagateForRelation(base, source, weight, type)` | Single-edge confidence propagation |
| `applyDamping(newValue, previousValue)` | Blend with previous for stability |
| `isSignificantChange(newValue, previousValue)` | Check if update is worth persisting |
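The damping and significance-gating ideas in this table can be sketched as follows (function names are suffixed `Sketch`, and the blend factor and epsilon are assumed values, not the package's constants): each propagated confidence is blended with its previous value so a single noisy edge can't swing the graph, and tiny changes are skipped to avoid write churn.

```typescript
// Exponential smoothing: move partway toward the newly propagated value.
function applyDampingSketch(next: number, prev: number, alpha = 0.7): number {
  return alpha * next + (1 - alpha) * prev;
}

// Only persist updates that moved the confidence by a meaningful amount.
function isSignificantChangeSketch(next: number, prev: number, eps = 0.01): boolean {
  return Math.abs(next - prev) >= eps;
}

const damped = applyDampingSketch(0.9, 0.5);      // moves toward 0.9, smoothly
console.log(isSignificantChangeSketch(damped, 0.5)); // worth persisting
```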

Context Retrieval (@zensation/algorithms/context-retrieval)

| Function | Description |
|----------|-------------|
| `captureEncodingContext(taskType?)` | Snapshot current context |
| `calculateContextSimilarity(encoding, current?)` | Context match score + boost |
| `serializeContext(ctx)` / `deserializeContext(data)` | Storage helpers |

Similarity (@zensation/algorithms/similarity)

| Function | Description |
|----------|-------------|
| `detectNegation(text)` | Detect negation with target extraction (EN/DE) |
| `computeStringSimilarity(a, b)` | Jaccard word overlap similarity |
| `stripNegation(text)` | Remove negation words |
| `safeJsonParse(json, fallback)` | Safe JSON parsing with fallback |
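A toy version of negation-aware analysis (the word list and return shape are illustrative assumptions, suffixed `Sketch` to avoid confusion with the real API) shows why it matters: without it, "not working" and "working" score as near-identical under plain word-overlap similarity.

```typescript
// Small EN/DE negation lexicon — illustrative only.
const NEGATIONS = new Set(['not', 'no', 'never', 'nicht', 'kein', 'keine']);

// Find the first negation word and report the word it likely negates.
function detectNegationSketch(text: string): { negated: boolean; target?: string } {
  const words = text.toLowerCase().split(/\s+/);
  const i = words.findIndex((w) => NEGATIONS.has(w));
  return i >= 0 ? { negated: true, target: words[i + 1] } : { negated: false };
}

console.log(detectNegationSketch('the build is not working'));
// { negated: true, target: 'working' }
```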

Logging

All functions accept an optional `Logger` parameter. Pass `console`, your favorite logger, or nothing (silent by default):

```typescript
import { updateAfterRecall } from '@zensation/algorithms';

// Silent (default)
updateAfterRecall(state, 4, 0.9);

// With logging
updateAfterRecall(state, 4, 0.9, new Date(), console);
```

Part of ZenBrain

This package is part of the ZenBrain monorepo — the neuroscience-inspired memory system for AI agents.

| Package | Description |
|---------|-------------|
| `@zensation/algorithms` | Pure algorithms (this package) |
| `@zensation/core` | Memory layers + coordinator |
| `@zensation/adapter-postgres` | PostgreSQL + pgvector storage |
| `@zensation/adapter-sqlite` | SQLite + sqlite-vec storage |

License

Apache 2.0 — see LICENSE.