# @sisu-ai/mw-conversation-buffer

Helpers for shaping basic conversation state. Keep recent messages small and relevant without implementing your own trimming logic.
## Setup

```sh
npm i @sisu-ai/mw-conversation-buffer
```

## Exports

- `inputToMessage` — appends `{ role: 'user', content: ctx.input }` when present.
- `conversationBuffer({ window = 12 })` — keeps the first message and the last `window` messages.
## What It Does

- Converts `ctx.input` into a user chat message.
- Prunes older messages with a simple, fast sliding window.
This pair is intentionally tiny and deterministic. It never summarizes or alters message contents — it only appends and trims.
## How It Works

- `inputToMessage`: if `ctx.input` is set, appends `{ role: 'user', content: ctx.input }` to `ctx.messages`, then calls `next()`.
- `conversationBuffer({ window = 12 })`: if `ctx.messages.length > window`, keeps the first message (commonly a system prompt) plus the last `window` messages, mutating `ctx.messages` in place.
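The append step can be sketched as follows (a minimal illustration assuming a koa-style `(ctx, next)` middleware signature and a simplified `Ctx` shape; the package's real types may differ):

```typescript
interface Ctx {
  input?: string;
  messages: Array<{ role: string; content: string }>;
}

// Sketch of inputToMessage: if ctx.input is set, append it as a user
// message, then hand off to the next middleware.
async function inputToMessageSketch(ctx: Ctx, next: () => Promise<void>): Promise<void> {
  if (ctx.input) {
    ctx.messages.push({ role: 'user', content: ctx.input });
  }
  await next();
}
```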
Why keep “first + last N”? The first message is usually your system instruction; the tail is the most recent conversational state. This rule is robust for many apps.
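The pruning rule can be sketched in a few lines (a standalone illustration assuming a minimal `Message` shape; the middleware applies the same rule to `ctx.messages`):

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
}

// First + last N: keep messages[0] plus the trailing `window` messages,
// mutating the array in place, as the middleware does.
function trimWindow(messages: Message[], window: number): Message[] {
  if (messages.length > window) {
    const head = messages[0];
    const tail = messages.slice(-window);
    messages.splice(0, messages.length, head, ...tail);
  }
  return messages;
}
```

Note that when `messages.length > window`, the result always has `window + 1` entries: the preserved head plus the tail.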
## Usage

```ts
import { inputToMessage, conversationBuffer } from '@sisu-ai/mw-conversation-buffer';

const app = new Agent()
  .use(inputToMessage)
  .use(conversationBuffer({ window: 12 }));
```

Recommended ordering:

- Place `inputToMessage` early so downstream middleware sees a full message list.
- Apply `conversationBuffer` after appending new messages (user/tool) and before generation to cap context size.
## When To Use
- Chat apps/CLIs where conversation grows and you need bounded context.
- Prototypes and demos that benefit from predictable behavior.
- As a guardrail before providers with strict token limits.
## When Not To Use
- Single‑turn flows that don’t keep history.
- Workflows that manage context elsewhere (RAG pipelines or custom budgeting).
- Cases requiring semantic compression/summarization (use a compressor middleware instead).
## Notes & Gotchas

- Role‑agnostic trim: trimming is positional, not role‑aware. If you must always keep specific roles/messages, compose your own policy.
- System prompt stability: the first message is preserved; ensure it’s the one you want to keep.
- Message vs. token: `window` counts messages, not tokens. For strict token budgets, pair with usage tracking or a tokenizer‑aware compressor.
- In‑place mutation: `conversationBuffer` mutates `ctx.messages`. Create references after trimming if you pass them elsewhere.
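As an example of composing your own policy, here is a hypothetical role-aware variant that keeps every system message plus the last N non-system messages (the name and behavior are illustrative, not part of this package):

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
}

// Hypothetical role-aware trim: preserve all system messages and the
// last `window` non-system messages, keeping original order.
// Returns a new array rather than mutating in place.
function roleAwareTrim(messages: Message[], window: number): Message[] {
  const nonSystem = messages.filter(m => m.role !== 'system');
  const keep = new Set(nonSystem.slice(-window));
  return messages.filter(m => m.role === 'system' || keep.has(m));
}
```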
## Community & Support

Explore the examples and documentation at https://github.com/finger-gun/sisu. Example projects live under `examples/` in the repo.
