@ramenm/soft-llm-stream
v0.6.6
Headless smoothing and flow control for streamed LLM text.
Small headless smoothing for bursty LLM text streams.

soft-llm-stream sits between an incoming stream and the UI. It normalizes real provider output, keeps reveal speed perceptually steadier, preserves grapheme boundaries, stays soft after long gaps, and finishes without ugly tail snaps.
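As a rough illustration of what "perceptually steadier reveal" and "preserves grapheme boundaries" mean, here is a sketch of the general technique: buffer bursty chunks, then reveal a fixed number of grapheme clusters per tick. This is illustrative only, not the library's internals; `createSmoother` and its option names are invented for this example.

```javascript
// Sketch of burst smoothing with grapheme-safe reveal (NOT the package's code).
// Intl.Segmenter splits text into user-perceived characters, so multi-codepoint
// graphemes (emoji, combining marks) are never revealed half-way.
const seg = new Intl.Segmenter(undefined, { granularity: 'grapheme' });

function createSmoother({ graphemesPerTick = 3 } = {}) {
  let pending = [];   // grapheme clusters waiting to be revealed
  let revealed = '';
  return {
    push(chunk) {
      // Split each incoming burst into grapheme clusters up front.
      for (const { segment } of seg.segment(chunk)) pending.push(segment);
    },
    tick() {
      // Reveal a fixed number of graphemes per tick for a steady pace,
      // regardless of how bursty the incoming chunks were.
      revealed += pending.splice(0, graphemesPerTick).join('');
      return revealed;
    },
    done() {
      return pending.length === 0;
    },
  };
}

const s = createSmoother({ graphemesPerTick: 2 });
s.push('Hi 👩‍👩‍👧!');        // one bursty chunk: 5 grapheme clusters
console.log(s.tick());   // 'Hi'
console.log(s.tick());   // 'Hi 👩‍👩‍👧' -- the family emoji stays intact
```

Driving `tick()` from a timer (or `requestAnimationFrame`) is what turns bursty network chunks into a steady on-screen reveal.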
Install
```shell
npm install @ramenm/soft-llm-stream
```

Basic usage
```javascript
import { createSoftLlmChatStream } from '@ramenm/soft-llm-stream';

const store = createSoftLlmChatStream({
  source: fetch('/api/chat'),
  adapter: 'auto',
  revealProfile: 'fastFirst',
});

store.subscribe(() => {
  const snapshot = store.getSnapshot();
  render(snapshot.text, snapshot.meta.phase);
});

await store.start();
```

The published package contains the core runtime only; the full demo, docs, tests, and validation reports live in the GitHub repo.
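The `store` in the snippet above follows a subscribe/getSnapshot contract. As a rough sketch of that shape (names and behavior assumed from the example, not taken from the package's source; `createTextStore` and `append` are invented here):

```javascript
// Minimal external-store sketch matching the subscribe/getSnapshot shape
// used in the usage example. Assumption-based, NOT the package's implementation.
function createTextStore() {
  let snapshot = { text: '', meta: { phase: 'idle' } };
  const listeners = new Set();
  const notify = () => listeners.forEach((fn) => fn());
  return {
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
    getSnapshot() {
      return snapshot;
    },
    append(chunk, phase = 'streaming') {
      // Replace the snapshot immutably so subscribers can compare by identity.
      snapshot = { text: snapshot.text + chunk, meta: { phase } };
      notify();
    },
  };
}
```

This shape is compatible with UI integrations such as React's `useSyncExternalStore`, which expects exactly a `subscribe`/`getSnapshot` pair.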
