
ai-stream-utils

v2.2.0 · AI SDK: Filter and transform UI messages while streaming to the client

This library provides composable filter and transformation utilities for UI message streams created by streamText() in the AI SDK.

Why?

The AI SDK UI message stream created by toUIMessageStream() streams all parts (text, tools, reasoning, etc.) to the client by default. However, you may want to:

  • Filter: Tool calls like database searches often contain large amounts of data or sensitive information that should not be streamed to the client
  • Transform: Modify text or tool outputs while they are streamed to the client
  • Observe: Log stream lifecycle events, update states, or run side-effects without modifying the stream

This library provides type-safe, composable utilities for all these use cases.

Installation

This library supports AI SDK v5 and v6.

npm install ai-stream-utils

Usage

The pipe function provides a composable pipeline API for filtering, transforming, and observing UI message streams. Multiple operators can be chained together, and type guards automatically narrow chunk and part types, enabling type-safe stream transformations with autocomplete.

.filter()

Filter chunks by returning true to keep or false to exclude.

const stream = pipe(result.toUIMessageStream())
  .filter(({ chunk, part }) => {
    // chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
    // part.type: "text" | "reasoning" | "tool-weather" | ...

    if (chunk.type === "data-weather") {
      return false; // exclude chunk
    }

    return true; // keep chunk
  })
  .toStream();

Type Guards

Generic type guards provide a simpler API for common filtering patterns:

  • includeChunks("text-delta") or includeChunks(["text-delta", "text-end"]): Include only specific chunk types
  • excludeChunks("text-delta") or excludeChunks(["text-delta", "text-end"]): Exclude specific chunk types
  • includeParts("text") or includeParts(["text", "reasoning"]): Include only specific part types
  • excludeParts("reasoning") or excludeParts(["reasoning", "tool-database"]): Exclude specific part types
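As a rough sketch in plain TypeScript (with simplified stand-in types, not the library's internals), a chunk-type guard is a predicate on the chunk's `type` field that also narrows the union for downstream operators:

```typescript
// Stand-in for the AI SDK's UIMessageChunk union (illustrative only).
type Chunk =
  | { type: "text-delta"; delta: string }
  | { type: "reasoning-delta"; delta: string }
  | { type: "tool-input-available"; toolName: string };

// A guard like includeChunks("text-delta") conceptually behaves like this:
// it keeps only matching chunks and narrows their type.
const isTextDelta = (c: Chunk): c is Extract<Chunk, { type: "text-delta" }> =>
  c.type === "text-delta";

const chunks: Chunk[] = [
  { type: "text-delta", delta: "Hello" },
  { type: "reasoning-delta", delta: "thinking..." },
  { type: "text-delta", delta: " world" },
];

// After filtering, TypeScript knows every element has a `delta` field.
const textDeltas = chunks.filter(isTextDelta);
console.log(textDeltas.map((c) => c.delta).join(""));
```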

Filtering tools is the most common use case, and the tool-filter type guards provide a convenient API for filtering tool chunks by tool name:

  • excludeTools() or excludeTools("weather") or excludeTools(["weather", "database"]): Exclude all tools or specific tools by name
  • includeTools() or includeTools("weather") or includeTools(["weather", "database"]): Include all tools or specific tools by name

[!NOTE] The tool-filter type guards only affect tool chunks. Non-tool chunks will pass through.

Examples

Exclude tool calls from the client.

// Exclude by part type (requires "tool-" prefix)
const stream = pipe(result.toUIMessageStream())
  .filter(excludeParts(["tool-weather", "tool-database"]))
  .toStream();

// Exclude by tool name (without "tool-" prefix)
const stream = pipe(result.toUIMessageStream())
  .filter(excludeTools(["weather", "database"]))
  .toStream();

// Exclude all tools
const stream = pipe(result.toUIMessageStream()).filter(excludeTools()).toStream();

// Include only specific tools (without "tool-" prefix)
const stream = pipe(result.toUIMessageStream())
  .filter(includeTools(["weather"]))
  .toStream();

[!NOTE] excludeTools() and includeTools() filter tool chunks on the server before streaming to the client. This affects all tool types, including:

  • Server-side tools with execute functions
  • Client-side tools without execute functions
  • Tools that require human approval via needsApproval

Excluded tools will not appear in the client's message parts, so users won't see tool call UI or be able to approve/reject filtered tools.

.map()

Transform chunks by returning a chunk, an array of chunks, or null to exclude.

const stream = pipe(result.toUIMessageStream())
  .map(({ chunk, part }) => {
    // chunk.type: "text-delta" | "text-start" | "tool-input-available" | ...
    // part.type: "text" | "reasoning" | "tool-weather" | ...

    if (chunk.type === "text-start") {
      return chunk; // pass through unchanged
    }

    if (chunk.type === "text-delta") {
      return { ...chunk, delta: "modified" }; // transform chunk
    }

    if (chunk.type === "data-weather") {
      return [chunk, chunk]; // emit multiple chunks (the same chunk twice, for illustration)
    }

    return null; // exclude chunk (same as filter)
  })
  .toStream();

Examples

Convert text to uppercase.

const stream = pipe(result.toUIMessageStream())
  .map(({ chunk }) => {
    if (chunk.type === "text-delta") {
      return { ...chunk, delta: chunk.delta.toUpperCase() };
    }

    return chunk;
  })
  .toStream();

.on()

Observe chunks without modifying the stream. The callback is invoked for matching chunks.

const stream = pipe(result.toUIMessageStream())
  .on(
    ({ chunk, part }) => {
      // return true to invoke callback, false to skip
      return chunk.type === "text-delta";
    },
    ({ chunk, part }) => {
      // callback invoked for matching chunks
      console.log(chunk, part);
    },
  )
  .toStream();

Type Guards

Type guards provide a type-safe way to observe specific chunk types:

  • chunkType("text-delta") or chunkType(["start", "finish"]): Observe specific chunk types
  • partType("text") or partType(["text", "reasoning"]): Observe chunks belonging to specific part types
  • toolCall() or toolCall({ tool: "weather" }) or toolCall({ state: "output-available" }): Observe tool state transitions

The toolCall() guard matches tool chunks representing state transitions (not streaming events):

  • input-available: Tool input fully parsed
  • approval-requested: Tool awaiting user approval
  • output-available: Tool execution completed
  • output-error: Tool execution failed
  • output-denied: User denied approval

[!NOTE] The partType type guard still operates on chunks. That means partType("text") will match any text chunks such as text-start, text-delta, and text-end.
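To make the note concrete, here is a self-contained sketch (illustrative stand-ins, not the library's implementation) of matching chunks by the part lifecycle they belong to:

```typescript
// Simplified stand-in chunk type names (illustrative only).
const chunkTypes = ["start", "text-start", "text-delta", "text-end", "finish"];

// A partType("text") guard conceptually matches every chunk in the
// "text" part lifecycle: text-start, text-delta, and text-end.
const belongsToTextPart = (type: string) =>
  type === "text-start" || type === "text-delta" || type === "text-end";

// Keeps "text-start", "text-delta", "text-end"; drops control markers.
console.log(chunkTypes.filter(belongsToTextPart));
```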

Examples

Log stream lifecycle events.

const stream = pipe(result.toUIMessageStream())
  .on(chunkType("start"), ({ chunk }) => {
    console.log("Stream started:", chunk.messageId);
  })
  .on(chunkType("finish"), ({ chunk }) => {
    console.log("Stream finished:", chunk.finishReason);
  })
  .on(chunkType("tool-input-available"), ({ chunk }) => {
    console.log("Tool input:", chunk.toolName, chunk.input);
  })
  .on(chunkType("tool-output-available"), ({ chunk }) => {
    console.log("Tool output:", chunk.toolName, chunk.output);
  })
  .toStream();

Observe tool state transitions.

const stream = pipe(result.toUIMessageStream())
  .on(toolCall({ tool: "weather", state: "approval-requested" }), ({ chunk }) => {
    console.log("Weather tool needs approval");
  })
  .toStream();

[!NOTE] The tool option filters by part type (tool-{name}). Dynamic tools have part type dynamic-tool, so toolCall({ tool: "myTool" }) will not match dynamic tools. Use toolCall() without the tool option to observe all tools including dynamic ones.

.toStream()

Convert the pipeline back to an AsyncIterableStream<InferUIMessageChunk<UI_MESSAGE>> that can be returned to the client or consumed.

const stream = pipe(result.toUIMessageStream())
  .filter(({ chunk }) => true) // placeholder predicate: keep everything
  .map(({ chunk }) => chunk) // placeholder transform: pass through unchanged
  .toStream();

// Iterate with for-await-of
for await (const chunk of stream) {
  console.log(chunk);
}

// Assemble full messages with readUIMessageStream()
for await (const message of readUIMessageStream({ stream })) {
  console.log(message);
}

// Return to client with useChat()
return stream;

Chaining and Type Narrowing

Multiple operators can be chained together. After filtering with type guards, chunk and part types are narrowed automatically.

const stream = pipe<MyUIMessage>(result.toUIMessageStream())
  .filter(includeParts("text"))
  .map(({ chunk, part }) => {
    // chunk is narrowed to text chunks: "text-start" | "text-delta" | "text-end"
    // part is narrowed to "text"
    return chunk;
  })
  .toStream();

Control Chunks

Control chunks always pass through regardless of filter/transform settings:

  • start: Stream start marker
  • finish: Stream finish marker
  • abort: Stream abort marker
  • message-metadata: Message metadata updates
  • error: Error messages
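A minimal sketch of that pass-through rule over a plain chunk array (illustrative only; the library applies the rule to streams):

```typescript
// Simplified stand-in chunk shape (illustrative, not the library's types).
type Chunk = { type: string; [key: string]: unknown };

// The control chunk types listed above.
const CONTROL_TYPES = new Set(["start", "finish", "abort", "message-metadata", "error"]);

// Even a predicate that rejects everything leaves control chunks intact.
function applyFilter(chunks: Chunk[], keep: (c: Chunk) => boolean): Chunk[] {
  return chunks.filter((c) => CONTROL_TYPES.has(c.type) || keep(c));
}

const chunks: Chunk[] = [
  { type: "start" },
  { type: "text-delta", delta: "hi" },
  { type: "finish" },
];

// The stream markers survive; the text delta is dropped.
const filtered = applyFilter(chunks, () => false);
console.log(filtered.map((c) => c.type));
```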

Stream Utilities

Helper functions for consuming streams and converting between streams, arrays, and async iterables.

consumeUIMessageStream

Consumes a UI message stream by fully reading it and returns the final assembled message. Useful for server-side processing without streaming to the client.

import { consumeUIMessageStream } from "ai-stream-utils";

const result = streamText({
  model: openai("gpt-4o"),
  prompt: "Tell me a joke",
});

const message = await consumeUIMessageStream(result.toUIMessageStream<MyUIMessage>());

console.log(message.parts); // All parts fully assembled

createAsyncIterableStream

Adds the async iterator protocol to a ReadableStream, enabling for await...of loops.

import { createAsyncIterableStream } from "ai-stream-utils";

const asyncStream = createAsyncIterableStream(readableStream);
for await (const chunk of asyncStream) {
  console.log(chunk);
}

convertArrayToStream

Converts an array to a ReadableStream that emits each element.

import { convertArrayToStream } from "ai-stream-utils";

const stream = convertArrayToStream([1, 2, 3]);

convertAsyncIterableToStream

Converts an async iterable (e.g., async generator) to a ReadableStream.

import { convertAsyncIterableToStream } from "ai-stream-utils";

async function* generator() {
  yield 1;
  yield 2;
}
const stream = convertAsyncIterableToStream(generator());

convertAsyncIterableToArray

Collects all values from an async iterable into an array.

import { convertAsyncIterableToArray } from "ai-stream-utils";

const array = await convertAsyncIterableToArray(asyncIterable);

convertStreamToArray

Consumes a ReadableStream and collects all chunks into an array.

import { convertStreamToArray } from "ai-stream-utils";

const array = await convertStreamToArray(readableStream);

convertUIMessageToSSEStream

Converts a UI message stream to an SSE (Server-Sent Events) stream. Useful for sending UI message chunks over HTTP as SSE-formatted text.

import { convertUIMessageToSSEStream } from "ai-stream-utils";

const uiStream = result.toUIMessageStream();
const sseStream = convertUIMessageToSSEStream(uiStream);

// Output format: "data: {...}\n\n" for each chunk
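To make the wire format concrete, here is a self-contained sketch that produces the same "data: {...}\n\n" framing from an array of chunks (the real converter operates on streams, and chunk shapes here are simplified stand-ins):

```typescript
// Illustrative chunk objects, not real UIMessageChunk values.
const chunks = [
  { type: "text-delta", delta: "Hi" },
  { type: "finish" },
];

// Each chunk becomes one SSE event: "data: <json>\n\n".
const sse = chunks.map((c) => `data: ${JSON.stringify(c)}\n\n`).join("");
console.log(sse);
```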

convertSSEToUIMessageStream

Converts an SSE stream back to a UI message stream. Useful for parsing SSE-formatted responses on the client.

import { convertSSEToUIMessageStream } from "ai-stream-utils";

const response = await fetch("/api/chat");
const sseStream = response.body.pipeThrough(new TextDecoderStream());
const uiStream = convertSSEToUIMessageStream(sseStream);

Deprecated Functions

[!WARNING] These functions are deprecated and will be removed in a future version. Use pipe() instead.

mapUIMessageStream

import { mapUIMessageStream } from "ai-stream-utils";

const stream = mapUIMessageStream(result.toUIMessageStream(), ({ chunk }) => {
  if (chunk.type === "text-delta") {
    return { ...chunk, delta: chunk.delta.toUpperCase() };
  }
  return chunk;
});

filterUIMessageStream

import { filterUIMessageStream, includeParts } from "ai-stream-utils";

const stream = filterUIMessageStream(
  result.toUIMessageStream(),
  includeParts(["text", "tool-weather"]),
);

flatMapUIMessageStream

import { flatMapUIMessageStream, partTypeIs } from "ai-stream-utils";

const stream = flatMapUIMessageStream(
  result.toUIMessageStream(),
  partTypeIs("tool-weather"),
  ({ part }) => {
    if (part.state === "output-available") {
      return {
        ...part,
        output: { ...part.output, temperature: toFahrenheit(part.output.temperature) },
      };
    }
    return part;
  },
);

Type Safety

The toUIMessageStream() method on the result of streamText() returns a generic ReadableStream<UIMessageChunk>, which means the part types cannot be inferred automatically.

To enable autocomplete and type-safety, pass your UIMessage type as a generic parameter:

import type { UIMessage, InferUITools } from "ai";

type MyUIMessageMetadata = {};
type MyDataPart = {};
type MyTools = InferUITools<typeof tools>;

type MyUIMessage = UIMessage<MyUIMessageMetadata, MyDataPart, MyTools>;

// Use MyUIMessage type when creating the UI message stream
const uiStream = result.toUIMessageStream<MyUIMessage>();

// Type-safe filtering with autocomplete
const stream = pipe<MyUIMessage>(uiStream)
  .filter(includeParts(["text", "tool-weather"])) // Autocomplete works!
  .map(({ chunk, part }) => {
    // part.type is typed based on MyUIMessage
    return chunk;
  })
  .toStream();

Client-Side Usage

The transformed stream has the same type as the original UI message stream. You can consume it with useChat() or readUIMessageStream().

Since message parts may be different on the client vs. the server, you may need to reconcile message parts when the client sends messages back to the server.

If you save messages to a database and configure useChat() to send only the last message, you can read the existing messages from the database on the server. The model then has access to all message parts, including filtered parts that were never sent to the client.
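A rough server-side sketch of that pattern, where `StoredMessage` and `reconcile` are hypothetical stand-ins for your own persistence layer, not ai-stream-utils APIs:

```typescript
// Hypothetical message shape as stored in your database.
type StoredMessage = { id: string; role: "user" | "assistant"; parts: unknown[] };

// Merge the single incoming client message into the stored history:
// replace a message with the same id (e.g. an edited user message),
// otherwise append it.
function reconcile(history: StoredMessage[], incoming: StoredMessage): StoredMessage[] {
  const idx = history.findIndex((m) => m.id === incoming.id);
  if (idx === -1) return [...history, incoming];
  const next = history.slice();
  next[idx] = incoming;
  return next;
}

const history: StoredMessage[] = [
  { id: "m1", role: "user", parts: [{ type: "text", text: "Hi" }] },
  { id: "m2", role: "assistant", parts: [{ type: "text", text: "Hello!" }] },
];

// The client sent only its last message; the server sees the full history.
const merged = reconcile(history, {
  id: "m3",
  role: "user",
  parts: [{ type: "text", text: "What's the weather?" }],
});
console.log(merged.length);
```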