@loremllm/transport

A lightweight transport implementation for the [Vercel AI SDK UI layer](https://v6.ai-sdk.dev/docs/ai-sdk-ui/transport). It lets you describe the exact UIMessage[] the UI should display and streams the message back to the client without touching your network stack or incurring any LLM fees.

When to use it

  • Building demos or stories where the response is known ahead of time.
  • Faking AI interactions offline or in tests.
  • Wrapping bespoke backends that already output UIMessage objects.

Installation

pnpm add @loremllm/transport

The package declares a peer dependency on the ai package; make sure a compatible version is available in your workspace.

Quick start

import { StaticChatTransport } from "@loremllm/transport";
import { useChat } from "ai/react";

const transport = new StaticChatTransport({
  chunkDelayMs: 25,
  async *mockResponse() {
    yield { type: "text", text: "Hello! How can I help you today?" };
  },
});

export const DemoChat = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    id: "demo",
    transport,
  });

  // render messages + form
};

Provide a mockResponse async generator function that yields UIMessagePart objects.
All yielded parts are collected into a single assistant message with an auto-generated ID and streamed back to the UI.
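For example, a generator that yields both a reasoning part and a text part still produces a single assistant message. A minimal sketch (it assumes the reasoning part carries a text field, matching the part types listed under "Supported message parts" below):

import { StaticChatTransport } from "@loremllm/transport";

const transport = new StaticChatTransport({
  async *mockResponse() {
    // Both parts below are streamed into the same assistant message.
    yield { type: "reasoning", text: "The user greeted me, so I should greet back." };
    yield { type: "text", text: "Hi there! What can I do for you?" };
  },
});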

Options

| Option | Type | Required | Default | Description |
| ------ | ---- | -------- | ------- | ----------- |
| mockResponse | (context: StaticTransportContext) => AsyncGenerator<UIMessagePart> | Yes | - | Async generator that yields message parts. All yielded parts are collected into a single assistant message. |
| chunkDelayMs | number \| [number, number] \| (chunk: UIMessageChunk) => Promise<number \| [number, number] \| undefined> | No | undefined | Delay between chunk emissions to simulate streaming. See "Customizing chunk timing" section. |
| autoChunkText | boolean \| RegExp | No | true | Whether to chunk text parts. true = word-by-word, false = single chunk, RegExp = custom pattern. |
| autoChunkReasoning | boolean \| RegExp | No | true | Whether to chunk reasoning parts. true = word-by-word, false = single chunk, RegExp = custom pattern. |
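As a sketch of the chunking options (the sentence-splitting RegExp is only illustrative, and exactly how the transport applies the pattern is not specified here):

import { StaticChatTransport } from "@loremllm/transport";

const transport = new StaticChatTransport({
  // Chunk text parts on sentence boundaries instead of the default word-by-word.
  autoChunkText: /[^.!?]*[.!?]+\s*/g,
  // Emit reasoning parts as a single chunk.
  autoChunkReasoning: false,
  // Wait a random 20-80ms between chunk emissions.
  chunkDelayMs: [20, 80],
  async *mockResponse() {
    yield { type: "text", text: "First sentence. Second sentence. Third sentence." };
  },
});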

Context

The mockResponse function receives a StaticTransportContext parameter with the following properties:

| Property | Type | Description |
| -------- | ---- | ----------- |
| id | string | The chat ID for this conversation. |
| messages | UIMessage[] | Array of all messages in the conversation history. |
| requestMetadata | unknown | Metadata passed from the useChat hook's body or metadata options. |
| trigger | "submit-message" \| "regenerate-message" | Whether this is a new message or a regeneration. |
| messageId | string \| undefined | The ID of the message being generated, if provided. |
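For instance, the context lets a mock vary its output by trigger or echo the conversation history. A sketch (the tone metadata shape is hypothetical):

import { StaticChatTransport } from "@loremllm/transport";

const transport = new StaticChatTransport({
  async *mockResponse({ messages, trigger, requestMetadata }) {
    const lastUser = messages.at(-1);
    const lastText = lastUser?.parts.find((p) => p.type === "text")?.text ?? "";

    if (trigger === "regenerate-message") {
      yield { type: "text", text: `Here is another take on: "${lastText}"` };
      return;
    }

    // requestMetadata is whatever the useChat hook passed along (hypothetical shape).
    const tone = (requestMetadata as { tone?: string } | undefined)?.tone ?? "neutral";
    yield { type: "text", text: `(${tone}) You said: "${lastText}"` };
  },
});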

Customizing chunk timing

chunkDelayMs accepts:

// Constant delay for every chunk
chunkDelayMs: 50;

// Random delay between min and max (inclusive)
chunkDelayMs: [20, 100]; // random delay between 20ms and 100ms

// Function that returns a delay per chunk
chunkDelayMs: async (chunk) => (chunk.type === "text-delta" ? 20 : 0);

// Function that returns a tuple for random delay per chunk
chunkDelayMs: async (chunk) => {
  if (chunk.type === "text-delta") {
    return [10, 50]; // random delay between 10ms and 50ms for text deltas
  }
  return 0; // no delay for other chunks
};

Return undefined or 0 to emit the next chunk immediately.

Usage examples

Tool calling

You can simulate tool calls by yielding tool parts. Here's an example inspired by the AI SDK tool calling documentation:

import { StaticChatTransport } from "@loremllm/transport";
import { useChat } from "ai/react";

const transport = new StaticChatTransport({
  async *mockResponse({ messages }) {
    const userMessage = messages[messages.length - 1];
    const userText =
      userMessage?.parts.find((p) => p.type === "text")?.text ?? "";

    // Check if user asked about weather
    if (userText.toLowerCase().includes("weather")) {
      const locationMatch = userText.match(/weather in (.+?)(?:\?|$)/i);
      const location = locationMatch?.[1]?.trim() ?? "San Francisco";
      const temperature = 72 + Math.floor(Math.random() * 21) - 10;

      // Yield a tool call
      yield {
        type: "tool-weather",
        toolCallId: "call_123",
        toolName: "weather",
        state: "output-available",
        input: { location },
        output: { location, temperature },
      };

      // Yield a text response that reflects the tool result
      yield {
        type: "text",
        text: `The weather in ${location} is sunny with a temperature of ${temperature}°F.`,
      };
    } else {
      yield { type: "text", text: "How can I help you today?" };
    }
  },
});

export const WeatherChat = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    id: "weather-demo",
    transport,
  });

  // render messages + form
};

Custom data streams

Use custom data-* parts to stream application-specific data that your UI can handle. This is useful for widgets, charts, or other interactive components:

import { StaticChatTransport } from "@loremllm/transport";
import { useChat } from "ai/react";

const transport = new StaticChatTransport({
  async *mockResponse() {
    // Stream a chart widget
    yield {
      type: "data-chart",
      data: {
        type: "line",
        data: [
          { x: "Jan", y: 65 },
          { x: "Feb", y: 72 },
          { x: "Mar", y: 68 },
        ],
      },
    };

    // Stream a notification
    yield {
      type: "data-notification",
      id: "notif-1",
      data: {
        message: "Data has been processed",
        severity: "success",
      },
      transient: true, // This data won't persist in message history
    };

    yield {
      type: "text",
      text: "I've created a chart for you.",
    };
  },
});

export const DataStreamChat = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    id: "data-demo",
    transport,
    onData: (dataPart) => {
      // Handle custom data parts
      if (dataPart.type === "data-chart") {
        // Render your chart component
        console.log("Chart data:", dataPart.data);
      } else if (dataPart.type === "data-notification") {
        // Show notification
        console.log("Notification:", dataPart.data);
      }
    },
  });

  // render messages + form
};

MCP dynamic tools

For Model Context Protocol (MCP) dynamic tools, use the dynamic-tool type:

import { StaticChatTransport } from "@loremllm/transport";
import { useChat } from "ai/react";

const transport = new StaticChatTransport({
  async *mockResponse() {
    // Simulate a dynamic tool from an MCP server
    yield {
      type: "dynamic-tool",
      toolCallId: "call_mcp_123",
      toolName: "mcp-file-read",
      state: "output-available",
      input: {
        path: "/path/to/file.txt",
      },
      output: {
        content: "File contents here...",
        size: 1024,
      },
    };

    yield {
      type: "text",
      text: "I've read the file using the MCP tool.",
    };
  },
});

export const MCPChat = () => {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    id: "mcp-demo",
    transport,
  });

  // render messages + form
};

Abort & reconnect support

  • Requests respect AbortController signals and surface aborts as the same AbortError the AI SDK expects.
  • The transport caches the last assistant message per chatId so reconnectToStream can replay the existing response; see the sketch below.
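For illustration, a hedged sketch of replaying a cached response. It assumes the AI SDK's ChatTransport.reconnectToStream({ chatId }) shape, and the chat id is hypothetical:

import { StaticChatTransport } from "@loremllm/transport";

const transport = new StaticChatTransport({
  async *mockResponse() {
    yield { type: "text", text: "A response that can be replayed on reconnect." };
  },
});

// Aborting from the UI (for example via useChat's stop()) rejects with the
// standard AbortError, so error handling matches a real transport.
async function replayLastResponse(chatId: string) {
  // Assumed shape: reconnectToStream({ chatId }) resolves to a ReadableStream
  // of UIMessageChunk objects, or null when nothing is cached for that chat.
  const stream = await transport.reconnectToStream({ chatId });
  return stream;
}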

Supported message parts

The stream builder currently supports:

  • text, reasoning
  • file
  • source-url, source-document
  • tool-* parts (e.g., tool-search, tool-booking) and dynamic-tool
  • Custom data-* parts

Tool parts automatically emit the appropriate chunks (tool-input-available, tool-output-available, or tool-output-error) based on the part's state and properties.
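For example, to exercise the error path you can yield a tool part in the output-error state. A sketch (the tool-search name is illustrative, and the errorText field follows the AI SDK's tool part shape, which is an assumption here):

import { StaticChatTransport } from "@loremllm/transport";

const transport = new StaticChatTransport({
  async *mockResponse() {
    // A tool part in the "output-error" state should emit a tool-output-error chunk.
    yield {
      type: "tool-search",
      toolCallId: "call_err_1",
      state: "output-error",
      input: { query: "latest release notes" },
      errorText: "Search index is unavailable",
    };

    yield { type: "text", text: "The search tool failed, so I could not look that up." };
  },
});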

Encountering an unsupported part type throws so the UI can flag the issue. Extend createChunksFromMessage if you need more chunk types.

Extending the transport

StaticChatTransport exposes the raw class if you want to subclass it, override createStreamFromChunks, or plug your own caching layer.
For complex real transports, consider implementing ChatTransport directly so you can forward the stream from your backend without converting to UIMessage first.
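As a rough sketch of that last approach (hedged: it assumes the ai package exports the UIMessage and UIMessageChunk types and a ChatTransport interface with sendMessages/reconnectToStream methods, and it uses a hypothetical /api/chat endpoint that already emits a UIMessageChunk stream):

import type { UIMessage, UIMessageChunk } from "ai";

class BackendChatTransport {
  async sendMessages({
    chatId,
    messages,
    abortSignal,
  }: {
    chatId: string;
    messages: UIMessage[];
    abortSignal?: AbortSignal;
  }): Promise<ReadableStream<UIMessageChunk>> {
    // Forward the conversation to a hypothetical backend that streams
    // UIMessageChunk objects; no UIMessage conversion happens on the client.
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ chatId, messages }),
      signal: abortSignal,
    });
    if (!response.body) throw new Error("Backend returned no stream");
    // Decoding response.body into UIMessageChunk objects depends on how the
    // backend encodes the stream, so this cast is only a placeholder.
    return response.body as unknown as ReadableStream<UIMessageChunk>;
  }

  async reconnectToStream(): Promise<ReadableStream<UIMessageChunk> | null> {
    // Return null when the backend has no resumable stream for the chat.
    return null;
  }
}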

Copy messages to clipboard

The copyMessagesToClipboard function can be used to copy a real LLM response to the clipboard as a static transport template.

import { copyMessagesToClipboard } from "@loremllm/transport";
import { useChat } from "ai/react";

const { messages, sendMessage, status, error } = useChat({
  onFinish: copyMessagesToClipboard,
});