# @one710/recollect

Auto-summarizing memory layer for AI agents using Node.js native SQLite.
Recollect is a memory + compaction layer for long-running AI SDK chats.
It keeps full session history, automatically compacts older context when needed, and preserves recent turns and instruction context so your app stays coherent as conversations grow.
## Why Recollect
- Works with AI SDK `LanguageModelV3Message` shapes (user/assistant/system/tool, multi-part content, tool calls/results)
- Session-based memory with pluggable storage
- Robust compaction strategy with summary checkpoints
- Middleware that can auto-manage the prompt/history lifecycle around `generateText`
- Provider-agnostic (tested with OpenAI and Bedrock integration suites)
## Installation
```bash
npm install @one710/recollect
```

If you want SQLite persistence:

```bash
npm install sqlite3
```

If you only use `InMemoryStorageAdapter`, `sqlite3` is not required.
## Quick Start (Manual Memory API)
```ts
import { MemoryLayer } from "@one710/recollect";
import { openai } from "@ai-sdk/openai";

const memory = new MemoryLayer({
  maxTokens: 8192,
  summarizationModel: openai("gpt-4o-mini"),
});

const sessionId = "chat:user-123";

await memory.addMessage(sessionId, "user", "What should we build next?");
await memory.addMessage(
  sessionId,
  "assistant",
  "Let's prioritize onboarding improvements.",
);

await memory.addMessage(sessionId, null, {
  role: "assistant",
  content: [
    {
      type: "tool-call",
      toolCallId: "call-1",
      toolName: "lookupMetric",
      input: { key: "paid_subs_us_pct" } as any,
    },
  ],
});

await memory.addMessage(sessionId, null, {
  role: "tool",
  content: [
    {
      type: "tool-result",
      toolCallId: "call-1",
      toolName: "lookupMetric",
      output: { type: "json", value: { key: "paid_subs_us_pct", value: 63.2 } },
    },
  ],
});

const history = await memory.getMessages(sessionId);
console.log(history.length);
```

## AI SDK Middleware (Automatic Mode)
`withRecollectCompaction(...)` can automatically:

- ingest unseen incoming prompt messages
- run optional pre-compaction (`auto-pre`)
- hydrate the model prompt from memory
- ingest generated assistant/tool messages from model output
- run optional post-compaction (`auto-post`)
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { MemoryLayer, withRecollectCompaction } from "@one710/recollect";

const memory = new MemoryLayer({
  maxTokens: 8192,
  summarizationModel: openai("gpt-4o-mini"),
});

const model = withRecollectCompaction({
  model: openai("gpt-4o-mini"),
  memory,
  preCompact: true,
  postCompact: true,
  postCompactStrategy: "follow-up-only", // or "always"
});

await generateText({
  model,
  messages: [{ role: "user", content: "Continue." }],
  providerOptions: { recollect: { sessionId: "chat:user-123" } },
});
```

## Session ID Resolution
By default, the middleware reads the session id from `providerOptions.recollect.sessionId`. You can override this via `resolveSessionId(params)`.
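A custom resolver might, for example, fall back to a per-user id when no explicit session id is passed. The `CallParams` shape and the fallback logic below are illustrative assumptions for the sketch, not the library's actual types:

```typescript
// Hypothetical resolver sketch. `CallParams` is an assumed shape; the real
// params object is whatever the middleware passes to resolveSessionId.
type CallParams = {
  providerOptions?: { recollect?: { sessionId?: string } };
  userId?: string; // assumed app-specific field for the fallback
};

function resolveSessionId(params: CallParams): string {
  // Prefer the explicit session id, then fall back to a per-user id.
  return (
    params.providerOptions?.recollect?.sessionId ??
    `chat:${params.userId ?? "anonymous"}`
  );
}

console.log(
  resolveSessionId({
    providerOptions: { recollect: { sessionId: "chat:user-123" } },
  }),
); // "chat:user-123"
console.log(resolveSessionId({ userId: "u-42" })); // "chat:u-42"
```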
## API Overview
### `MemoryLayer` options

- `maxTokens` (required)
- `summarizationModel` (required)
- `threshold` (default `0.9`)
- `targetTokensAfterCompaction` (default 65% of `maxTokens`)
- `keepRecentUserTurns` (default `4`)
- `keepRecentMessagesMin` (default `8`)
- `maxCompactionPasses` (default `3`)
- `minimumMessagesToCompact` (default `6`)
- `countTokens` (optional custom tokenizer)
- `storage` (optional custom adapter)
- `databasePath` (used only when `storage` is not provided)
- `onCompactionEvent` (optional diagnostics hook)
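To illustrate how the documented defaults interact, here is a sketch of the trigger arithmetic, not the library's internals. The exact trigger condition (token count exceeding `threshold * maxTokens`) is an assumption based on the option names:

```typescript
// Illustrative sketch of the documented defaults, assuming compaction
// triggers once the session crosses threshold * maxTokens and aims for
// targetTokensAfterCompaction afterwards.
const maxTokens = 8192;
const threshold = 0.9; // documented default
const targetTokensAfterCompaction = Math.floor(maxTokens * 0.65); // 65% default

function needsCompaction(sessionTokens: number): boolean {
  // Assumed condition: compact when usage exceeds 90% of the budget.
  return sessionTokens > threshold * maxTokens;
}

console.log(needsCompaction(7000)); // false (below 7372.8)
console.log(needsCompaction(7500)); // true
console.log(targetTokensAfterCompaction); // 5324
```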
### `MemoryLayer` methods

- `addMessage(sessionId, role, contentOrMessage)`
- `addMessages(sessionId, messages)`
- `getMessages(sessionId)`
- `getPromptMessages(sessionId)`
- `compactNow(sessionId)`
- `compactIfNeeded(sessionId, options)`
- `getSessionEvents(sessionId, limit?)`
- `getSessionSnapshot(sessionId)`
- `clearSession(sessionId)`
- `dispose()`
## Storage

Exports:

- `InMemoryStorageAdapter`
- `createSQLiteStorageAdapter(databasePath)`
- `MemoryStorageAdapter` type (for custom adapters)
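To sketch what a custom adapter might look like: the real contract is defined by the exported `MemoryStorageAdapter` type, so check that before implementing. The minimal get/append/clear shape below is an assumption for illustration only:

```typescript
// Hypothetical adapter sketch: a Map-backed session store. The method names
// and signatures here are assumptions; the package's MemoryStorageAdapter
// type defines the real contract.
type StoredMessage = { role: string; content: unknown };

class MapStorageAdapter {
  private sessions = new Map<string, StoredMessage[]>();

  async getMessages(sessionId: string): Promise<StoredMessage[]> {
    return this.sessions.get(sessionId) ?? [];
  }

  async appendMessage(sessionId: string, message: StoredMessage): Promise<void> {
    const list = this.sessions.get(sessionId) ?? [];
    list.push(message);
    this.sessions.set(sessionId, list);
  }

  async clearSession(sessionId: string): Promise<void> {
    this.sessions.delete(sessionId);
  }
}

const store = new MapStorageAdapter();
void store
  .appendMessage("s1", { role: "user", content: "hi" })
  .then(() => store.getMessages("s1"))
  .then((msgs) => console.log(msgs.length)); // prints 1
```

The same Map-per-session layout is presumably what `InMemoryStorageAdapter` provides out of the box; a custom adapter mainly matters when you need persistence the built-ins don't cover.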
## Integration Testing (Manual, Real Providers)
These are provider-backed integration runs (not unit tests):

```bash
npm run test:integration:openai
npm run test:integration:bedrock
```

### Required env vars
OpenAI:

- `OPENAI_API_KEY`
- optional: `RECOLLECT_OPENAI_MODEL` (default `gpt-5-nano`)

Bedrock:

- `AWS_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- optional: `AWS_SESSION_TOKEN`
- optional: `AWS_BEARER_TOKEN_BEDROCK`
- optional: `RECOLLECT_BEDROCK_MODEL`
### Covered scenarios
- simple turn
- multi-turn with full-history resend
- existing simple history
- existing tool-call history
- malformed existing history (tool-call without prior tool-result)
- missing assistant messages in prior history
- forced compaction with checkpoint summary validation
## Development

```bash
npm install
npm run build
npm test
```

## License
MIT
