@kognitivedev/memory

Workflow-based long-term memory for Kognitive.

@kognitivedev/memory packages Kognitive's existing memory behavior into a reusable core that can run in a backend, a CLI, a benchmark harness, or another integration surface without hard-coding Postgres, HTTP, or a specific model provider.

Quick Start · Why This Exists · Architecture · Adapters · CLI · Benchmarks · Migration

Why This Exists

Most memory systems pick one of two extremes:

  • store everything and retrieve later
  • summarize aggressively and throw away the underlying conversation

Kognitive does neither. The package keeps the existing product behavior:

  • conversation logs are ingested first
  • extractors propose candidate memories
  • a manager decides create, update, and delete operations
  • compaction only runs under token pressure
  • prompt snapshots are regenerated as the serving surface for downstream runtimes

That gives you a memory system that stays opinionated at the semantic layer, while remaining decoupled at the storage, transport, and orchestration layers.

What You Get

  • A reusable in-process MemoryService
  • A workflow-defined processing pipeline built on @kognitivedev/workflows
  • Adapter contracts for storage, locking, context resolution, and model execution
  • An HTTP MemoryClient for remote snapshot access and processing triggers
  • The same core package used by backend routes, CLI commands, and the benchmark harness

Quick Start

Install the package:

bun add @kognitivedev/memory

Create a memory service:

import { MemoryService } from "@kognitivedev/memory";

const memory = new MemoryService({
  // storage, agent, lock, and contextResolver are your adapter
  // implementations (see Adapter Contracts below)
  storage,
  agent,
  lock,
  contextResolver,
  maxMemories: 100,
  maxTokenLimit: 4000,
});

await memory.logConversation({
  userId: "user-1",
  projectId: "project-uuid",
  sessionId: "session-1",
  messages: [
    { role: "user", content: "Acme wants a migration plan before rollout." },
    {
      role: "assistant",
      content: "I'll prepare the rollout and migration plan.",
    },
  ],
});

await memory.processMemoryJob("user-1", "project-uuid", "session-1");

const snapshot = await memory.getSnapshot("user-1", "project-uuid");
console.log(snapshot?.userContextBlock);

Use the remote client:

import { MemoryClient } from "@kognitivedev/memory";

const client = new MemoryClient({
  baseUrl: "http://localhost:3001",
  apiKey: process.env.KOGNITIVE_API_KEY,
});

const snapshot = await client.getSnapshot("user-1", { topicMode: "full" });
const memoryBlock = snapshot ? client.buildMemoryBlock(snapshot) : "";

Architecture

flowchart LR
    A["Conversation Logs"] --> B["Memory Workflow"]
    B --> C["Extractor"]
    C --> D["Manager"]
    D --> E["Storage Adapter"]
    E --> F["Snapshot Regeneration"]
    F --> G["Snapshot / Memory Block"]
    H["Lock Adapter"] --> B
    I["Context Resolver"] --> B
    J["CLI / Backend / Benchmarks / Remote Client"] --> G

The package is split intentionally:

  • Semantic behavior lives here.
  • Environment wiring stays outside.

That means the backend can keep ownership of auth, DB clients, distributed locks, and project resolution, while the package owns the memory algorithm and the workflow that executes it.
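To make that split concrete, here is a minimal composition-root sketch. It assumes the adapter contract types (StorageAdapter, AgentAdapter, LockAdapter, ContextResolver) are importable from the package entry point; the adapter implementations themselves are app-owned and are only declared here so the sketch stands alone.

import {
  MemoryService,
  type StorageAdapter,
  type AgentAdapter,
  type LockAdapter,
  type ContextResolver,
} from "@kognitivedev/memory";

// App-owned infrastructure: in a real backend these are concrete adapters
// over Postgres, Redis, your model runtime, and project resolution.
// They are declared here only so the sketch type-checks on its own.
declare const storage: StorageAdapter;
declare const agent: AgentAdapter;
declare const lock: LockAdapter;
declare const contextResolver: ContextResolver;

// The package owns the memory algorithm; the app owns everything above.
const memory = new MemoryService({
  storage,
  agent,
  lock,
  contextResolver,
  maxMemories: 100,
  maxTokenLimit: 4000,
});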

How The Pipeline Works

The default workflow created by createMemoryProcessingWorkflow() runs these steps:

  1. cleanup-expired
  2. load-pending-logs
  3. extract-candidates
  4. manage-candidates
  5. apply-operations
  6. compact-if-needed
  7. regenerate-snapshot

This is Kognitive's current behavior, extracted into a package rather than redesigned into a different memory model.

Main APIs

MemoryService

MemoryService is the primary in-process integration surface.

It composes:

  • StorageAdapter
  • AgentAdapter
  • LockAdapter
  • optional ContextResolver
  • optional cache/logger configuration

Use it when your app wants to own memory processing locally.

createMemoryProcessingWorkflow()

This exposes the processing loop as a first-class workflow. Use it when you want workflow-level visibility, custom runners, or integration with the rest of the Kognitive workflow stack.
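A minimal sketch, assuming the factory can be called with no arguments as the pipeline description above suggests; how the resulting workflow is executed depends on your @kognitivedev/workflows runner, so no runner call is shown.

import { createMemoryProcessingWorkflow } from "@kognitivedev/memory";

// Builds the default seven-step pipeline definition (cleanup-expired through
// regenerate-snapshot). MemoryService.processMemoryJob() remains the simplest
// way to run the processing loop; reach for the workflow directly when you
// need custom runners or step-level visibility.
const memoryWorkflow = createMemoryProcessingWorkflow();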

MemoryClient

MemoryClient is the remote integration surface.

Use it when:

  • another package needs prompt-ready memory blocks
  • a CLI or external app needs to fetch snapshots
  • you want to trigger processing over HTTP instead of linking storage directly
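For example, an integration that only needs a prompt-ready block can do nothing more than this (mirroring the Quick Start; the system-prompt assembly at the end is purely illustrative):

import { MemoryClient } from "@kognitivedev/memory";

const client = new MemoryClient({
  baseUrl: "http://localhost:3001",
  apiKey: process.env.KOGNITIVE_API_KEY,
});

// Fetch the latest snapshot and turn it into a prompt-ready memory block.
const snapshot = await client.getSnapshot("user-1", { topicMode: "full" });
const memoryBlock = snapshot ? client.buildMemoryBlock(snapshot) : "";

// Illustrative only: splice the block into whatever prompt your app builds.
const systemPrompt = [memoryBlock, "Answer using the memory above when relevant."]
  .filter(Boolean)
  .join("\n\n");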

Adapter Contracts

The package is intentionally adapter-driven.

| Contract        | Responsibility                                                  |
| --------------- | --------------------------------------------------------------- |
| StorageAdapter  | logs, memories, snapshots, transactions, limit enforcement      |
| AgentAdapter    | extraction, management, and compaction decisions                |
| LockAdapter     | prevent duplicate processing for the same user/project/session  |
| ContextResolver | resolve richer processing context before model execution        |

This is the key design choice. Postgres is one adapter. A backend API is one transport. Neither is the architecture.
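To give a feel for how small an adapter can be, here is a hypothetical single-process lock. The type and method names below are illustrative only, not the package's actual LockAdapter signature, which is defined by the exported contract.

// Hypothetical shape only: the real LockAdapter contract is defined by
// @kognitivedev/memory and its method names may differ.
type IllustrativeLock = {
  acquire(key: string): Promise<boolean>;
  release(key: string): Promise<void>;
};

// A single-process lock keyed by user/project/session, enough for tests or a
// CLI; a backend would back this with Redis or Postgres advisory locks.
const held = new Set<string>();

export const inProcessLock: IllustrativeLock = {
  async acquire(key) {
    if (held.has(key)) return false;
    held.add(key);
    return true;
  },
  async release(key) {
    held.delete(key);
  },
};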

Design Principles

Functional core, thin composition roots

The package owns the memory semantics. Apps own infrastructure.

Eventual consistency

Memory is processed from logged conversations, not inline with every generation.
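Concretely, the request path only appends a log and returns; processing happens later, out of band. A sketch using the same MemoryService calls as the Quick Start; the service is declared rather than constructed so the snippet stands alone.

import { MemoryService } from "@kognitivedev/memory";

// Wired with real adapters as in the Quick Start.
declare const memory: MemoryService;

// Request path: append the conversation log and return immediately.
await memory.logConversation({
  userId: "user-1",
  projectId: "project-uuid",
  sessionId: "session-1",
  messages: [{ role: "user", content: "Remember that rollout is on Friday." }],
});

// Later, out of band (e.g. a queue worker or cron): process the pending logs.
await memory.processMemoryJob("user-1", "project-uuid", "session-1");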

Snapshot-first serving

Downstream prompt consumers read regenerated snapshots, not arbitrary tables.

Compaction as pressure relief, not default behavior

The system preserves memory until token pressure requires compression.

Storage independence

You can keep the current Postgres setup, but the package does not require it.

Integration Patterns

Backend composition

Use MemoryService plus adapters over your real DB, cache, locks, and model runtime.

CLI composition

Use MemoryClient when the CLI should talk to a running backend.

Benchmark composition

Use @kognitivedev/memory-bench to benchmark a memory runtime built on this package.

Test composition

Use in-memory adapters to validate extraction, management, snapshotting, and workflow behavior without standing up the full app.

CLI

The workspace CLI exposes the package through @kognitivedev/cli.

kognitive memory snapshot \
  --user-id user-1 \
  --base-url http://localhost:3001 \
  --api-key $KOGNITIVE_API_KEY

kognitive memory snapshot \
  --user-id user-1 \
  --topic-mode full \
  --json

kognitive memory process \
  --user-id user-1 \
  --session-id session-1 \
  --base-url http://localhost:3001 \
  --api-key $KOGNITIVE_API_KEY

Why This Design Wins

  • It preserves Kognitive's current memory behavior instead of forcing a new product model.
  • It makes storage pluggable without pretending memory semantics should be generic.
  • It lets benchmarks and integrations use the same core instead of reimplementing the pipeline.
  • It exposes the pipeline as a workflow, which makes the processing stages inspectable and testable.
  • It cleanly separates infrastructure concerns from memory behavior.

Benchmarks

The benchmark path now composes over this package through @kognitivedev/memory-bench.

Latest checked-in smoke run:

| Dataset            | Adapter          | Model              | Consolidation   | Cases | Local Score | Exact | Token F1 | Abstention Accuracy | Avg Latency |
| ------------------ | ---------------- | ------------------ | --------------- | ----: | ----------: | ----: | -------: | ------------------: | ----------: |
| longmemeval-sample | kognitive-direct | x-ai/grok-4.1-fast | before-question |     2 |       0.773 | 0.000 |    0.344 |               1.000 |    510.5 ms |

Migration From The Backend-Coupled Implementation

The migration path is designed to avoid data loss:

  1. Keep the existing memory tables and data model.
  2. Move orchestration and contracts into @kognitivedev/memory.
  3. Leave database access in adapters owned by the composing app.
  4. Prefer schema migrations over destructive schema pushes.

For local setup and benchmark preparation, use:

bun run db:migrate

Do not assume db:push is safe on a populated local database.

Related Packages