

@cartisien/engram

Persistent semantic memory for AI agents. Store, search, and retrieve conversational memory with embeddings, graph relationships, and hybrid search.

Features

  • Persistent Storage: SQLite-backed with WAL mode for reliability
  • Semantic Search: Vector similarity using embeddings (OpenAI or Ollama)
  • FTS5 Keyword Search: Full-text search with BM25 ranking
  • Hybrid Scoring: Combines semantic, keyword, importance, and recency
  • Graph Memory: Extract and query entity relationships
  • Multi-hop Traversal: Find paths between entities
  • User-Scoped Memory: Cross-session memory persistence
  • Embedding Cache: LRU cache to reduce API calls
  • Batch Operations: Efficient batch embedding
  • Deduplication: Automatic duplicate detection and merging
  • Memory Tiers: Working, long-term, and archived tiers
  • Consolidation: Automatic summarization of old memories

Installation

npm install @cartisien/engram

Quick Start

Pick an embedding backend. The OpenAI preset requires no local infrastructure:

import { Engram } from '@cartisien/engram';

const engram = Engram.openai({
  apiKey: process.env.OPENAI_API_KEY,
  dbPath: './memory.db',
});

await engram.init();

// Store a memory
await engram.remember('session-123', 'The user prefers TypeScript', 'user');

// Recall memories
const memories = await engram.recall('session-123', 'programming preferences');

Self-hosted alternative with Ollama:

const engram = Engram.local({ dbPath: './memory.db' });
// or: Engram.local({ url: 'http://localhost:11434', model: 'nomic-embed-text' });

Both presets are thin wrappers over new Engram({...}). Use the constructor directly when you need to tune cache size, graph memory, reranking, or Qdrant backend.

Provider is auto-detected from the model name — anything starting with text-embedding- is routed to OpenAI, everything else to Ollama. Set embeddingProvider: 'openai' | 'ollama' to override.
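The detection rule above can be sketched as a small helper. This is illustrative only; detectProvider is a hypothetical function, not part of the package API:

```typescript
// Sketch of the auto-detection rule described above.
// detectProvider is hypothetical — @cartisien/engram applies this logic
// internally; embeddingProvider, when set, takes precedence.
type Provider = 'openai' | 'ollama';

function detectProvider(model: string, override?: Provider): Provider {
  if (override) return override; // explicit embeddingProvider wins
  return model.startsWith('text-embedding-') ? 'openai' : 'ollama';
}

console.log(detectProvider('text-embedding-3-small')); // 'openai'
console.log(detectProvider('nomic-embed-text'));       // 'ollama'
```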

Configuration

const engram = new Engram({
  // Database
  dbPath: './memory.db',           // SQLite file path
  enableWAL: true,                 // Write-Ahead Logging
  
  // Embeddings
  embeddingUrl: 'http://localhost:11434',
  embeddingModel: 'nomic-embed-text',
  semanticSearch: true,
  embeddingCacheSize: 1000,        // LRU cache size
  embeddingBatchSize: 10,          // Batch size for embeddings
  
  // Search
  enableFTS5: true,                // Full-text search
  dedupThreshold: 0.95,            // Duplicate detection threshold
  
  // Scoring
  enableImportanceScoring: false,  // Auto-calculate importance
  recencyHalfLifeDays: 30,         // Recency decay half-life
  
  // Graph
  graphMemory: false,              // Extract graph relationships
  graphMaxDepth: 3,                // Max traversal depth
  
  // Consolidation
  autoConsolidate: false,
  consolidateThreshold: 100,
  consolidateKeep: 20,
});

Core Operations

Store Memories

// Session-scoped memory
await engram.remember(
  'session-123',           // session ID
  'User mentioned they like hiking',
  'user',                  // role: 'user' | 'assistant' | 'system'
  { source: 'chat' }       // optional metadata
);

// User-scoped memory (cross-session)
await engram.rememberUser(
  'user-456',
  'Preferences: dark mode, notifications off',
  'system'
);

Recall Memories

// Basic recall (recent memories)
const recent = await engram.recall('session-123');

// Semantic search
const relevant = await engram.recall('session-123', 'outdoor activities', {
  limit: 5,
  threshold: 0.7,
});

// With filters
const filtered = await engram.recall('session-123', 'hiking', {
  role: 'user',
  tiers: ['working', 'long_term'],
  before: new Date('2024-01-01'),
});

// Iterator for large result sets
for await (const memory of engram.recallIter('session-123', 'query')) {
  console.log(memory.content);
}

Graph Operations

// Store relationship
await engram.storeEdge(
  'session-123',
  'Alice',           // from entity
  'knows',           // relation
  'Bob',             // to entity
  1.0,               // confidence
  memory.id          // optional source memory
);

// Query entity
const graph = await engram.graph('session-123', 'Alice');
console.log(graph.relationships);

// Find path between entities
const path = await engram.graphPath(
  'session-123',
  'Alice',
  'Charlie',
  3                  // max depth
);
if (path.found) {
  console.log(`Path found with ${path.hops} hops`);
}

Memory Management

// Get session stats
const stats = await engram.stats('session-123');
console.log(`${stats.total} memories, ${stats.graphNodes} entities`);

// Consolidate old memories
const result = await engram.consolidate('session-123', {
  keep: 20,          // preserve recent memories
  dryRun: true,      // preview only
});

// Delete specific memory
await engram.forget('session-123', { id: memoryId });

// Delete old memories
await engram.forget('session-123', {
  before: new Date('2024-01-01'),
  includeLongTerm: false,
});

v0.8 Subsystems

Skills, collision-based insights, and the wiki/digest layer are exposed through namespace accessors rather than flat methods on Engram:

// Skills
const hits = await engram.skills.search('refund policy', 5);

// Collision engine — generate + retrieve insights, adjust strategy weights
const insights = await engram.collide('session-123');        // fetch + run
const prior    = await engram.collision.getInsights('session-123');
await engram.rateInsight(insights[0].id, 5);                 // updates weights
const weights  = engram.collision.getWeights();

// Wiki / digests
const digest  = await engram.wiki.create({ slug: 'billing', subtype: 'topic', body: '...' });
const results = await engram.wiki.search('billing', 10);
const current = await engram.wiki.getBySlug('billing');
const next    = await engram.wiki.refresh('billing', 'updated body', 0.8);
const history = await engram.wiki.history('billing');

// Wiki-adjacent free-function helpers stay on Engram
await engram.lint({ bootstrap: true });
const md = await engram.buildIndex();
await engram.appendLog('digest.refresh', 'billing updated');
const log = await engram.queryLog({ type: 'digest.refresh', limit: 50 });

Migration from ≤v0.7.x: the flat wrappers createDigest, searchDigests, getDigest, refreshDigest, digestHistory, getInsights, getStrategyWeights, and recallWithSkills were removed. Use the subsystem accessors shown above.

Hybrid Search

Engram combines multiple signals for ranking:

// Default weights
const weights = {
  semantic: 0.5,    // Vector similarity
  keyword: 0.25,    // FTS5 BM25 score
  importance: 0.15, // Memory importance (0-1)
  recency: 0.1,     // Time decay
};

const memories = await engram.recall('session-123', 'machine learning', {
  applyDecay: true,
});

The combinedScore is normalized (0-1) and memories are sorted by this score.
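As a rough sketch of how those signals might combine (the actual implementation lives in src/search/hybrid.ts; the exact formula and signal normalization here are assumptions):

```typescript
// Illustrative sketch of hybrid scoring — not the package's real code.
// Each signal is assumed to be normalized to 0-1 before weighting.
interface Signals {
  semantic: number;   // vector similarity
  keyword: number;    // normalized FTS5 BM25 score
  importance: number; // stored importance
  ageDays: number;    // memory age in days
}

const WEIGHTS = { semantic: 0.5, keyword: 0.25, importance: 0.15, recency: 0.1 };
const HALF_LIFE_DAYS = 30; // recencyHalfLifeDays default

function combinedScore(s: Signals): number {
  // Exponential decay: a memory loses half its recency credit every half-life.
  const recency = Math.pow(0.5, s.ageDays / HALF_LIFE_DAYS);
  return (
    WEIGHTS.semantic * s.semantic +
    WEIGHTS.keyword * s.keyword +
    WEIGHTS.importance * s.importance +
    WEIGHTS.recency * recency
  );
}

// A fresh, highly relevant memory scores near 1; an old, weak match near 0.
console.log(combinedScore({ semantic: 0.9, keyword: 0.8, importance: 0.5, ageDays: 0 }));
```

Because the weights sum to 1 and every signal is in 0-1, the combined score stays in 0-1, matching the normalization described above.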

Caching

// Embedding cache (LRU)
import { EmbeddingCache } from '@cartisien/engram';

const cache = new EmbeddingCache({ maxSize: 1000 });
cache.set(contentHash, embedding);
const cached = cache.get(contentHash);

// Check stats
const stats = cache.getStats();
console.log(`Hit rate: ${(stats.hitRate * 100).toFixed(1)}%`);

Deduplication

import { findSimilarInList, deduplicateMemories } from '@cartisien/engram';

// Find similar memories
const similar = findSimilarInList(queryEmbedding, memories, 0.95);

// Deduplicate a list
const unique = deduplicateMemories(memories, 0.95);

Batch Operations

// Batch embed multiple texts
const embeddings = await engram.embedBatch([
  'text 1',
  'text 2',
  'text 3',
]);

// Manual batch embedding
import { embedBatch } from '@cartisien/engram';
const results = await embedBatch(
  texts,
  embedFn,
  10  // batch size
);

TypeScript Types

import type {
  MemoryEntry,
  UserMemoryEntry,
  MemoryTier,
  MemoryRole,
  RecallOptions,
  GraphResult,
  GraphPathResult,
  SessionStats,
  EngramConfig,
} from '@cartisien/engram';

Testing

npm test              # Run all tests
npm run test:watch    # Watch mode

Architecture

src/
├── index.ts              # Main Engram class
├── types.ts              # TypeScript types
├── cache/
│   └── embedding-cache.ts # LRU embedding cache
├── search/
│   ├── fts5.ts           # Full-text search
│   └── hybrid.ts         # Hybrid scoring
├── utils/
│   ├── batch.ts          # Batch embedding
│   └── dedup.ts          # Deduplication
└── graph/
    └── traversal.ts      # Graph operations

Requirements

  • Node.js 18+
  • SQLite 3 with FTS5 support
  • Ollama or an OpenAI API key (optional, for semantic search embeddings)

License

MIT

Changelog

v0.8.0 (unreleased)

  • Breaking: removed flat v0.8 pass-throughs (createDigest, searchDigests, getDigest, refreshDigest, digestHistory, getInsights, getStrategyWeights, recallWithSkills). Use engram.wiki.*, engram.collision.*, engram.skills.search() instead
  • Added Engram.openai({ apiKey }) and Engram.local() preset factories
  • Added OpenAI embedding provider (/v1/embeddings); auto-detected from models starting with text-embedding-
  • Added embeddingProvider and embeddingApiKey config fields
  • Added automatic legacy-schema migration in init() (backfills tier, importance, consolidated_from, created_at on pre-v0.7 DBs)

v0.7.0

  • Added embedding cache (LRU)
  • Added batch embedding support
  • Added FTS5 keyword search
  • Added hybrid search with recency decay
  • Added importance scoring
  • Added graph memory and multi-hop traversal
  • Added deduplication utilities
  • Added user-scoped memory
  • Added WAL mode for SQLite
  • Improved consolidation