@kognitivedev/memory · v0.2.28 · 320 downloads
# @kognitivedev/memory

Workflow-based long-term memory for Kognitive.

@kognitivedev/memory packages Kognitive's existing memory behavior into a reusable core that can run in a backend, a CLI, a benchmark harness, or another integration surface without hard-coding Postgres, HTTP, or a specific model provider.
[Quick Start](#quick-start) · [Why This Exists](#why-this-exists) · [Architecture](#architecture) · [Adapters](#adapter-contracts) · [CLI](#cli) · [Benchmarks](#benchmarks) · [Migration](#migration-from-the-backend-coupled-implementation)
## Why This Exists
Most memory systems pick one of two extremes:
- store everything and retrieve later
- summarize aggressively and throw away the underlying conversation
Kognitive does neither. The package keeps the existing product behavior:
- conversation logs are ingested first
- extractors propose candidate memories
- a manager decides create, update, and delete operations
- compaction only runs under token pressure
- prompt snapshots are regenerated as the serving surface for downstream runtimes
That gives you a memory system that stays opinionated at the semantic layer, while remaining decoupled at the storage, transport, and orchestration layers.
## What You Get

- A reusable in-process `MemoryService`
- A workflow-defined processing pipeline built on `@kognitivedev/workflows`
- Adapter contracts for storage, locking, context resolution, and model execution
- An HTTP `MemoryClient` for remote snapshot access and processing triggers
- The same core package used by backend routes, CLI commands, and the benchmark harness
## Quick Start

Install the package:

```shell
bun add @kognitivedev/memory
```

Create a memory service:

```ts
import { MemoryService } from "@kognitivedev/memory";

const memory = new MemoryService({
  storage,
  agent,
  lock,
  contextResolver,
  maxMemories: 100,
  maxTokenLimit: 4000,
});

await memory.logConversation({
  userId: "user-1",
  projectId: "project-uuid",
  sessionId: "session-1",
  messages: [
    { role: "user", content: "Acme wants a migration plan before rollout." },
    {
      role: "assistant",
      content: "I'll prepare the rollout and migration plan.",
    },
  ],
});

await memory.processMemoryJob("user-1", "project-uuid", "session-1");

const snapshot = await memory.getSnapshot("user-1", "project-uuid");
console.log(snapshot?.userContextBlock);
```

Use the remote client:

```ts
import { MemoryClient } from "@kognitivedev/memory";

const client = new MemoryClient({
  baseUrl: "http://localhost:3001",
  apiKey: process.env.KOGNITIVE_API_KEY,
});

const snapshot = await client.getSnapshot("user-1", { topicMode: "full" });
const memoryBlock = snapshot ? client.buildMemoryBlock(snapshot) : "";
```

## Architecture

```mermaid
flowchart LR
    A["Conversation Logs"] --> B["Memory Workflow"]
    B --> C["Extractor"]
    C --> D["Manager"]
    D --> E["Storage Adapter"]
    E --> F["Snapshot Regeneration"]
    F --> G["Snapshot / Memory Block"]
    H["Lock Adapter"] --> B
    I["Context Resolver"] --> B
    J["CLI / Backend / Benchmarks / Remote Client"] --> G
```

The package is split intentionally:
- Semantic behavior lives here.
- Environment wiring stays outside.
That means the backend can keep ownership of auth, DB clients, distributed locks, and project resolution, while the package owns the memory algorithm and the workflow that executes it.
## How The Pipeline Works

The default workflow created by `createMemoryProcessingWorkflow()` runs these steps in order:

1. `cleanup-expired`
2. `load-pending-logs`
3. `extract-candidates`
4. `manage-candidates`
5. `apply-operations`
6. `compact-if-needed`
7. `regenerate-snapshot`

This is Kognitive's current behavior, extracted into a package rather than redesigned into a different memory model.
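The ordering above can be sketched as a plain sequential runner. This is an illustrative stand-in, not the actual `@kognitivedev/workflows` engine: the `Ctx` shape and the placeholder step bodies are assumptions made for the example.

```typescript
// Illustrative stand-in for the memory processing workflow: a plain
// sequential runner over named steps. Step names mirror the documented
// pipeline; the bodies are placeholders that record execution order.
type Ctx = { trace: string[] };
type Step = { name: string; run: (ctx: Ctx) => void };

const pipeline: Step[] = [
  "cleanup-expired",
  "load-pending-logs",
  "extract-candidates",
  "manage-candidates",
  "apply-operations",
  "compact-if-needed",
  "regenerate-snapshot",
].map((name) => ({ name, run: (ctx: Ctx) => ctx.trace.push(name) }));

function runPipeline(): string[] {
  const ctx: Ctx = { trace: [] };
  for (const step of pipeline) step.run(ctx);
  return ctx.trace;
}
```

The key property is that snapshot regeneration always runs last, so the serving surface is rebuilt only after every create/update/delete and any compaction has been applied.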
## Main APIs

### MemoryService

`MemoryService` is the primary in-process integration surface. It composes:

- a `StorageAdapter`
- an `AgentAdapter`
- a `LockAdapter`
- an optional `ContextResolver`
- optional cache/logger configuration

Use it when your app wants to own memory processing locally.
### createMemoryProcessingWorkflow()
This exposes the processing loop as a first-class workflow. Use it when you want workflow-level visibility, custom runners, or integration with the rest of the Kognitive workflow stack.
### MemoryClient

`MemoryClient` is the remote integration surface.
Use it when:
- another package needs prompt-ready memory blocks
- a CLI or external app needs to fetch snapshots
- you want to trigger processing over HTTP instead of linking storage directly
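Downstream consumers usually prepend the memory block to a prompt. A minimal sketch, assuming `memoryBlock` came from `client.buildMemoryBlock(snapshot)`; the `withMemory` helper below is hypothetical, not part of the package:

```typescript
// Hypothetical helper: prepend a snapshot-derived memory block to a
// system prompt, skipping the prefix when no memories exist yet.
function withMemory(systemPrompt: string, memoryBlock: string): string {
  if (!memoryBlock) return systemPrompt;
  return `${memoryBlock}\n\n${systemPrompt}`;
}
```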
## Adapter Contracts

The package is intentionally adapter-driven.

| Contract          | Responsibility                                                  |
| ----------------- | --------------------------------------------------------------- |
| `StorageAdapter`  | logs, memories, snapshots, transactions, limit enforcement      |
| `AgentAdapter`    | extraction, management, and compaction decisions                |
| `LockAdapter`     | prevent duplicate processing for the same user/project/session  |
| `ContextResolver` | resolve richer processing context before model execution        |
This is the key design choice. Postgres is one adapter. A backend API is one transport. Neither is the architecture.
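As an example of how small these contracts can be, here is what a lock adapter might look like with an in-memory implementation. The method shapes are assumptions for illustration; consult the package's exported types for the real `LockAdapter` interface.

```typescript
// Hypothetical shape of a lock contract; the package's real interface
// may differ. An in-memory version is enough for single-process use,
// while a backend would swap in a distributed lock behind the same shape.
interface LockAdapter {
  acquire(key: string): Promise<boolean>; // false if already held
  release(key: string): Promise<void>;
}

class InMemoryLock implements LockAdapter {
  private held = new Set<string>();

  async acquire(key: string): Promise<boolean> {
    if (this.held.has(key)) return false; // duplicate processing blocked
    this.held.add(key);
    return true;
  }

  async release(key: string): Promise<void> {
    this.held.delete(key);
  }
}
```

A natural lock key for this pipeline is the processing identity, e.g. `` `${userId}:${projectId}:${sessionId}` ``, so two jobs for the same session cannot interleave.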
## Design Principles

**Functional core, thin composition roots.** The package owns the memory semantics. Apps own infrastructure.

**Eventual consistency.** Memory is processed from logged conversations, not inline with every generation.

**Snapshot-first serving.** Downstream prompt consumers read regenerated snapshots, not arbitrary tables.

**Compaction as pressure relief, not default behavior.** The system preserves memory until token pressure requires compression.

**Storage independence.** You can keep the current Postgres setup, but the package does not require it.
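The pressure-relief rule can be made concrete with a rough sketch. The 4-characters-per-token estimate and these function names are illustrative assumptions, not the package's actual accounting:

```typescript
// Rough token accounting: compaction stays a no-op until the estimated
// footprint of stored memories crosses the configured limit.
interface MemoryEntry {
  id: string;
  text: string;
}

function estimateTokens(memories: MemoryEntry[]): number {
  // Crude heuristic, assumed for this example: ~4 characters per token.
  return memories.reduce((sum, m) => sum + Math.ceil(m.text.length / 4), 0);
}

function shouldCompact(memories: MemoryEntry[], maxTokenLimit: number): boolean {
  return estimateTokens(memories) > maxTokenLimit;
}
```

Under this heuristic, the `maxTokenLimit: 4000` from the Quick Start config would leave memories untouched until they total roughly 16,000 characters.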
## Integration Patterns

**Backend composition.** Use `MemoryService` plus adapters over your real DB, cache, locks, and model runtime.

**CLI composition.** Use `MemoryClient` when the CLI should talk to a running backend.

**Benchmark composition.** Use `@kognitivedev/memory-bench` to benchmark a memory runtime built on this package.

**Test composition.** Use in-memory adapters to validate extraction, management, snapshotting, and workflow behavior without standing up the full app.
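A test composition can stay entirely in memory. The class below is a hypothetical stub shaped loosely like a storage adapter, not the package's actual `StorageAdapter` contract; real adapters would add transactions, snapshot persistence, and limit enforcement:

```typescript
// Hypothetical in-memory storage stub for tests: conversation logs plus
// a keyed memory store that can be flattened into a snapshot string.
interface LogEntry {
  sessionId: string;
  role: "user" | "assistant";
  content: string;
}

class InMemoryStorage {
  private logs: LogEntry[] = [];
  private memories = new Map<string, string>();

  appendLog(entry: LogEntry): void {
    this.logs.push(entry);
  }

  pendingLogs(sessionId: string): LogEntry[] {
    return this.logs.filter((l) => l.sessionId === sessionId);
  }

  upsertMemory(id: string, text: string): void {
    this.memories.set(id, text);
  }

  deleteMemory(id: string): void {
    this.memories.delete(id);
  }

  snapshot(): string {
    return [...this.memories.values()].join("\n");
  }
}
```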
## CLI

The workspace CLI exposes the package through `@kognitivedev/cli`.

```shell
kognitive memory snapshot \
  --user-id user-1 \
  --base-url http://localhost:3001 \
  --api-key $KOGNITIVE_API_KEY

kognitive memory snapshot \
  --user-id user-1 \
  --topic-mode full \
  --json

kognitive memory process \
  --user-id user-1 \
  --session-id session-1 \
  --base-url http://localhost:3001 \
  --api-key $KOGNITIVE_API_KEY
```

## Why This Design Wins
- It preserves Kognitive's current memory behavior instead of forcing a new product model.
- It makes storage pluggable without pretending memory semantics should be generic.
- It lets benchmarks and integrations use the same core instead of reimplementing the pipeline.
- It exposes the pipeline as a workflow, which makes the processing stages inspectable and testable.
- It cleanly separates infrastructure concerns from memory behavior.
## Benchmarks
The benchmark path now composes over this package through @kognitivedev/memory-bench.
Latest checked-in smoke run:
| Dataset | Adapter | Model | Consolidation | Cases | Local Score | Exact | Token F1 | Abstention Accuracy | Avg Latency |
| -------------------- | ------------------ | -------------------- | ----------------- | ----: | ----------: | ----: | -------: | ------------------: | ----------: |
| longmemeval-sample | kognitive-direct | x-ai/grok-4.1-fast | before-question | 2 | 0.773 | 0.000 | 0.344 | 1.000 | 510.5 ms |
## Migration From The Backend-Coupled Implementation

The migration path is designed to avoid data loss:

- Keep the existing memory tables and data model.
- Move orchestration and contracts into `@kognitivedev/memory`.
- Leave database access in adapters owned by the composing app.
- Prefer schema migrations over destructive schema pushes.
For local setup and benchmark preparation, use:

```shell
bun run db:migrate
```

Do not assume `db:push` is safe on a populated local database.
