


@aegntic/prologue

v1.1.1

Published



                              ████                                       
                             ░░███                                       
 ████████  ████████   ██████  ░███   ██████   ███████ █████ ████  ██████ 
░░███░░███░░███░░███ ███░░███ ░███  ███░░███ ███░░███░░███ ░███  ███░░███
 ░███ ░███ ░███ ░░░ ░███ ░███ ░███ ░███ ░███░███ ░███ ░███ ░███ ░███████ 
 ░███ ░███ ░███     ░███ ░███ ░███ ░███ ░███░███ ░███ ░███ ░███ ░███░░░  
 ░███████  █████    ░░██████  █████░░██████ ░░███████ ░░████████░░██████ 
 ░███░░░  ░░░░░      ░░░░░░  ░░░░░  ░░░░░░   ░░░░░███  ░░░░░░░░  ░░░░░░  
 ░███                                        ███ ░███                    
 █████                                      ░░██████                     
░░░░░                                        ░░░░░░                      

AI Agent Memory Library

Dual-layer memory. First-principles execution. Zero compromise.

Long-term persistence for AI agents — file-based storage with optional knowledge graph, compression ladders, visibility boundaries, and built-in safety frameworks.

MIT License Bun TypeScript Tests



What is Prologue?

Prologue gives AI agents a durable memory system that survives context window resets, session restarts, and process crashes. It's not a vector database wrapper — it's a purpose-built memory architecture with three integrated products:

| Product | Purpose |
|---------|---------|
| MemoryMatrix | Core memory store — file-based persistence with optional Graphiti knowledge graph. Compression ladders (working → project → overview → core). Visibility boundaries (private → inspectable → shared → canonical). Atomic writes. |
| Orchestrator | Task orchestration — spawns agent CLIs, monitors execution, runs post-session analysis (git diff → insight extraction → automatic memory storage). Built-in recovery manager with graduated escalation. |
| FPEF v2.0 | First Principles Execution Framework — 4-phase gate enforcement (Find → Prove → Evidence → Fix). Post-hoc output validation. Anti-dishonesty safeguards. Catastrophic failure recovery protocol. |

Quick Start

# Install
bun add prologue

# Or clone for development
git clone https://github.com/aegntic/prologue.git && cd prologue && bun install

import { MemoryMatrix } from "prologue";

// Create a memory store for your project
const memory = await MemoryMatrix.create("/path/to/project");

// Store a memory
const cell = await memory.store("Auth system uses JWT with RS256 signing", {
  visibility: "private",
  compression: "working",
  tags: ["auth", "jwt"],
});
console.log(cell.envelope.id); // "a1b2c3d4-..."

// Query memories
const results = await memory.query("auth token");
console.log(results[0].cell.body.content); // "Auth system uses JWT..."

// Promote to durable storage
await memory.promote(cell.envelope.id, "project");

// Scan envelopes (no body data — visibility-respecting)
const envelopes = await memory.envelopeScan("shared");

Architecture

Memory Compression Ladder

Memories move up through four compression levels as confidence grows:

  WORKING          PROJECT          OVERVIEW         CORE
  (scratchpad)     (dossier)        (atlas)          (biography)
     │                 │                │                │
     │   confidence    │   confidence   │   confidence   │
     │     ≥ 0.5       │     ≥ 0.7      │     ≥ 0.9      │
     ▼                 ▼                ▼                ▼
  Raw notes        Focused task     Cross-project    Durable
  & ephemera       context          awareness        truths
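The thresholds above can be sketched as a simple promotion gate. This is an illustrative helper, not part of the prologue API; the level names and confidence thresholds are taken from the diagram.

```typescript
// Hypothetical sketch of the promotion gate implied by the ladder above.
// Not the prologue API — names and thresholds come from the diagram.
type CompressionLevel = "working" | "project" | "overview" | "core";

const LADDER: CompressionLevel[] = ["working", "project", "overview", "core"];

// Confidence required to move INTO each level (working is the entry level).
const THRESHOLDS: Record<CompressionLevel, number> = {
  working: 0,
  project: 0.5,
  overview: 0.7,
  core: 0.9,
};

function canPromote(
  from: CompressionLevel,
  to: CompressionLevel,
  confidence: number,
): boolean {
  // Only single-step moves up the ladder, gated by the target's threshold.
  return (
    LADDER.indexOf(to) === LADDER.indexOf(from) + 1 &&
    confidence >= THRESHOLDS[to]
  );
}
```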

Visibility Boundaries

  PRIVATE ──▶ INSPECTABLE ──▶ SHARED ──▶ CANONICAL
  (agent-only)  (scan only)     (project)    (read-only, global)
  • private — Agent inner monologue, never visible to others
  • inspectable — Visible in envelope scans, body requires permission
  • shared — Available to all agents in the same project scope
  • canonical — Global read-only, requires confidence ≥ 0.9
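One plausible reading of the ordering above, purely as a sketch: envelope scans at a given reader level surface cells at that level or above. The ordering model and function below are assumptions for illustration, not prologue's implementation.

```typescript
// Illustrative model of the visibility ordering above — an assumption,
// not prologue's actual visibility logic.
type Visibility = "private" | "inspectable" | "shared" | "canonical";

const ORDER: Visibility[] = ["private", "inspectable", "shared", "canonical"];

// A scan at `reader` level surfaces envelopes at that level or above;
// private cells never appear in another agent's scan.
function envelopeVisible(cell: Visibility, reader: Visibility): boolean {
  return ORDER.indexOf(cell) >= ORDER.indexOf(reader);
}
```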

FPEF v2.0 — First Principles Execution

A 4-phase enforcement framework that prevents agents from jumping to solutions:

| Phase | Name | Rule |
|-------|------|------|
| 1 | FIND | No solutions allowed. Map reality only. |
| 2 | PROVE | Verify every finding with concrete evidence. |
| 3 | EVIDENCE | Validate. Test before trust. |
| 4 | FIX | Minimal change, maximum impact. |

import { FPEF } from "prologue/fpef";

const fpef = new FPEF({ strict: true });

// Wrap any task with phase enforcement
const prompt = fpef.wrapPrompt("implement auth system");

// Validate agent output for violations
const violations = fpef.validate(agentOutput);
// → [{ phase: "find", violation: "proposed solution before FIND", confidence: 0.95 }]

// Anti-dishonesty checks (6 deterministic rules, no LLM calls)
const checks = fpef.antiDishonesty(agentOutput);
// → [{ check: "no_hope_based", passed: false, violation: "hope-based language: 'should probably work'" }]

// Catastrophic failure recovery (always returns 5 steps)
const steps = fpef.catastrophicFailure({
  classification: "build_failure",
  description: "tsc fails on memory.ts",
  affectedScope: ["src/types/memory.ts"],
});
// → ACKNOWLEDGE → ASSESS → ISOLATE → RECOVER → PREVENT

Orchestrator

Automates the agent lifecycle — run tasks, capture learnings, store memories:

import { Orchestrator } from "prologue/orchestrator";
import { MemoryMatrix } from "prologue";

const memory = await MemoryMatrix.create("./my-project");
const orch = new Orchestrator({ projectDir: "./my-project", memory });

// Run a task — orchestrator spawns Claude Code, then auto-processes the session
await orch.run("refactor auth module to use RS256");
// → After completion: git diff analysis → insight extraction → memory storage
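The post-session flow above (git diff → insight extraction → memory storage) is a three-stage pipeline. The sketch below is hypothetical scaffolding with toy stages to show the sequencing; the real Orchestrator internals may differ.

```typescript
// Hypothetical illustration of the post-session pipeline stages.
// Toy stages only — not the actual Orchestrator internals.
type Stage<I, O> = (input: I) => O;

function pipeline<A, B, C, D>(
  diff: Stage<A, B>,
  extract: Stage<B, C>,
  store: Stage<C, D>,
): Stage<A, D> {
  return (input) => store(extract(diff(input)));
}

// Toy stages: session text → diff lines → added lines ("insights") → count
const analyze = pipeline(
  (session: string) => session.split("\n"),
  (lines: string[]) => lines.filter((l) => l.startsWith("+")),
  (insights: string[]) => insights.length,
);
```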

MCP Server

Use prologue as a Claude Code tool server:

// .claude/settings.json
{
  "mcpServers": {
    "prologue": {
      "command": "npx",
      "args": ["prologue-mcp"]
    }
  }
}

Five tools exposed: memory_store, memory_query, memory_promote, memory_envelope_scan, memory_delete.

Python Bridge (Graphiti Integration)

Optional Python bridge provides knowledge graph storage and embedding services:

cd python && uv run python -m src.main

  • Embedding providers: OpenAI (text-embedding-3-small), Ollama (nomic-embed-text), Voyage AI (voyage-3)
  • Knowledge graph: Graphiti for persistent episodic memory
  • Graceful fallback: File-only mode when Python is unavailable
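The graceful-fallback behavior can be modeled as: try the bridge, and degrade to file-only mode on any failure. A minimal sketch of that pattern, not the library's actual bridge code:

```typescript
// Sketch of graceful fallback to file-only mode. Illustrative pattern,
// not prologue's actual bridge code.
async function withFallback<T>(
  primary: () => Promise<T>,
  fallback: () => Promise<T>,
): Promise<T> {
  try {
    return await primary();
  } catch {
    // e.g. Python bridge unavailable: degrade to file-only storage
    return await fallback();
  }
}
```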

File Storage

{project}/.prologue/
├── config.json
└── memories/
    ├── working/{uuid}.json
    ├── project/{uuid}.json
    ├── overview/{uuid}.json
    ├── core/{uuid}.json
    └── index.json          # All envelopes, bodies stripped

Each memory file is a MemoryCell (envelope + body). Atomic writes via tmp+rename prevent corruption.
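The tmp+rename trick mentioned above is a standard pattern: write the full payload to a temporary file in the same directory, then rename it over the target, so readers never observe a partially written file. A generic sketch of the pattern (not prologue's internal file-store code):

```typescript
// Generic tmp+rename atomic write — the pattern referenced above,
// not prologue's actual file-store implementation.
import { writeFileSync, renameSync } from "node:fs";
import { join, dirname, basename } from "node:path";

function atomicWriteJSON(path: string, data: unknown): void {
  // Temp file must live in the same directory (same filesystem)
  // for rename() to be an atomic replace.
  const tmp = join(dirname(path), `.${basename(path)}.tmp`);
  writeFileSync(tmp, JSON.stringify(data, null, 2));
  renameSync(tmp, path); // atomic replace on POSIX
}
```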

API Reference

MemoryMatrix

| Method | Returns | Description |
|--------|---------|-------------|
| MemoryMatrix.create(projectDir) | Promise<MemoryMatrix> | Static factory, initializes directory structure |
| store(content, options?) | Promise<MemoryCell> | Store a new memory |
| query(queryText, options?) | Promise<MemoryResult[]> | Search memories (exact match + tags) |
| promote(memoryId, targetLevel) | Promise<MemoryCell> | Move memory up compression ladder |
| envelopeScan(readerVisibility?) | Promise<MemoryEnvelope[]> | List envelopes respecting visibility |
| delete(memoryId) | Promise<void> | Delete a memory |

FPEF

| Method | Returns | Description |
|--------|---------|-------------|
| wrapPrompt(task, options?) | string | Inject phase instructions around a task |
| validate(output) | Violation[] | Post-hoc output validation (deterministic) |
| antiDishonesty(output) | DishonestyCheck[] | 6-rule anti-dishonesty checks |
| catastrophicFailure(report) | RecoveryStep[] | 5-step failure recovery protocol |

RecoveryManager

| Method | Returns | Description |
|--------|---------|-------------|
| classifyFailure(error) | FailureClassification | Classify into 5 failure types |
| isCircularFix(edit, previous) | boolean | Jaccard similarity > 0.3 detection |
| getRecoveryAction(class, attempt, circular) | RecoveryAction | Graduated: retry → analyze → escalate |
| isStuck(failureId, attempts) | boolean | True after 3 attempts on same failure |
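The circular-fix check compares successive edits with Jaccard similarity. As a sketch of the idea, using the 0.3 threshold from the table; the whitespace tokenization below is an assumption, not prologue's exact code:

```typescript
// Illustrative token-set Jaccard similarity for circular-fix detection.
// Whitespace tokenization is an assumption, not prologue's exact code.
function jaccard(a: string, b: string): number {
  const A = new Set(a.split(/\s+/).filter(Boolean));
  const B = new Set(b.split(/\s+/).filter(Boolean));
  const intersection = [...A].filter((t) => B.has(t)).length;
  const union = new Set([...A, ...B]).size;
  return union === 0 ? 0 : intersection / union;
}

// Two edits are "circular" when their overlap exceeds the 0.3 threshold.
function isCircularFixSketch(edit: string, previous: string): boolean {
  return jaccard(edit, previous) > 0.3;
}
```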

Development

bun install          # Install dependencies
bun test             # Run all 148 tests
npx tsc --noEmit     # Type check
bunx biome check .   # Lint
bunx biome format . --write  # Format

Testing

148 tests across 12 files covering:

  • MemoryMatrix — store, query, promote, file persistence, atomic writes, envelope scan isolation
  • Orchestrator — task execution, post-session pipeline, git analysis, insight extraction, recovery manager
  • FPEF — phase gates, output validation, anti-dishonesty (6 rules), catastrophic failure protocol
  • Integration — 7 end-to-end scenarios from the spec

bun test                        # All tests
bun test test/matrix/           # Memory matrix only
bun test test/orchestrator/     # Orchestrator only
bun test test/fpef/             # FPEF only
bun test test/integration/      # E2E only

Dependencies

The only runtime dependency is Zod:

| Package | Version | Purpose |
|---------|---------|---------|
| zod | ^4.3.6 | Runtime type validation |

Dev dependencies:

| Package | Purpose |
|---------|---------|
| @types/bun | Bun runtime types |
| typescript | TypeScript compiler (^5.9.0) |

Tech Stack

  • Runtime: Bun (TypeScript strict mode)
  • Validation: Zod v4 (schemas + types, single source of truth)
  • Testing: bun:test
  • Linting: Biome
  • Python Bridge: uv + Pydantic + optional Graphiti/OpenAI
  • MCP: stdio transport (Claude Code compatible)

Project Structure

prologue/
├── src/
│   ├── index.ts              # Barrel exports (public API)
│   ├── types/                # Zod schemas + TypeScript types
│   ├── matrix/               # MemoryMatrix, file store, search, promotion
│   ├── orchestrator/         # Task orchestration, recovery, git analysis
│   ├── fpef/                 # FPEF v2.0, phase gates, validation, anti-dishonesty
│   ├── bridge/               # Python bridge (TS side)
│   └── mcp/                  # MCP server (Claude Code integration)
├── python/                   # Python bridge (Graphiti + embeddings)
├── test/                     # 148 tests across 12 files
├── CLAUDE.md                 # Claude Code agent instructions
├── PROJECT-SPEC.md           # Single source of truth specification
└── PHASE-TASKS.yaml          # Swarm execution task decomposition

Built with precision by aegntic.ai

a division of ae.ltd

Autonomous systems. First principles. No compromise.