
react-partial-stream

Headless React (and React Native) primitives for streaming LLM responses. Provider-agnostic, UI-agnostic, with first-class support for partial JSON streaming so tool-call arguments and structured outputs render as they arrive.

Why

Most React AI libraries either ship opinionated UI (chat bubbles, prebuilt panels) or couple tightly to one provider. react-partial-stream is just hooks and types: feed it a stream of chunks, get back React state you can render however you want.

The wedge: partial JSON parsing. While a tool call's arguments are still streaming in, you can already read the partially-parsed object — type-safe, with isPartial flags so you know what's settled.

Install

npm install react-partial-stream

Quick start

The hook takes any AsyncIterable<StreamChunk> and gives you a React-state view of the assistant's response:

import { useStreamingMessage } from "react-partial-stream";
import type { StreamSource } from "react-partial-stream";

function Assistant({ stream }: { stream: StreamSource }) {
  const { message, isStreaming } = useStreamingMessage(stream);

  return (
    <div>
      {message.content.map((block, i) => {
        if (block.type === "text") return <p key={i}>{block.text}</p>;
        if (block.type === "thinking") return <pre key={i}>{block.text}</pre>;
        if (block.type === "tool-call") {
          return (
            <pre key={i}>
              {block.name}({JSON.stringify(block.args)})
              {block.isPartial && " (streaming…)"}
            </pre>
          );
        }
        return null;
      })}
      {isStreaming && <span aria-label="streaming">▍</span>}
    </div>
  );
}

End-to-end with a provider

You bring the stream. The reference adapters convert a provider SDK's stream into the StreamChunk shape the hooks consume. Below is the OpenAI flavor running in a React Server Component (so the API key stays on the server):

import OpenAI from "openai";
import { fromOpenAIStream } from "./adapters/openai"; // copy from examples/adapters/

const client = new OpenAI();

async function* getStream() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    stream: true,
    messages: [{ role: "user", content: "Hello!" }],
  });
  yield* fromOpenAIStream(completion);
}

export default function Page() {
  return <Assistant stream={getStream()} />;
}

For browser-side rendering, proxy the request through your backend and parse the chunks back into the same AsyncIterable shape — the hook doesn't care where the stream came from.
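
A minimal sketch of that browser path, assuming the backend re-emits each chunk as one JSON object per line. The /api/chat route, the NDJSON framing, and the StreamChunk type import are assumptions for illustration, not part of this package:

import type { StreamChunk } from "react-partial-stream";

// Hypothetical client-side source: the backend proxies the provider and
// writes one serialized StreamChunk per line of the response body.
async function* fromProxy(prompt: string): AsyncIterable<StreamChunk> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any trailing partial line for the next read
    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line) as StreamChunk;
    }
  }
}

// Usage: <Assistant stream={fromProxy("Hello!")} />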

Streaming structured output

For typed JSON outputs (function call args, structured generation), useStructuredOutput gives you a typed value that fills in as the model emits tokens:

const { value, isPartial } = useStructuredOutput<{ items: string[] }>(stream);
// value?.items renders progressively: ["a"], ["a","b"], … as JSON arrives.
// isPartial flips to false once the stream emits its terminal finish chunk.
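
Rendering that value follows the same pattern as the Quick start component; a minimal sketch (the ShoppingList component is illustrative):

import { useStructuredOutput } from "react-partial-stream";
import type { StreamSource } from "react-partial-stream";

function ShoppingList({ stream }: { stream: StreamSource }) {
  // The list fills in item by item while isPartial is true.
  const { value, isPartial } = useStructuredOutput<{ items: string[] }>(stream);

  return (
    <ul>
      {value?.items?.map((item, i) => (
        <li key={i}>{item}</li>
      ))}
      {isPartial && <li aria-label="streaming">…</li>}
    </ul>
  );
}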

API

  • useStreamingMessage(stream, signal?) — accumulates chunks into an AssistantMessage with text, thinking, and tool-call blocks
  • useToolCall(message, id) — selector for a single tool call's state
  • useStructuredOutput<T>(stream, signal?) — typed partial JSON value that fills in as it streams
  • parsePartialJSON(input) — the underlying parser, exported for direct use
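
The parser is usable outside React too. A rough sketch of the idea (the exact return shape here is an assumption, not a documented signature):

import { parsePartialJSON } from "react-partial-stream";

// Feed it a truncated JSON string mid-stream and get back whatever has
// arrived so far. The result shown is illustrative; check the package's
// exported types for the real return shape.
const partial = parsePartialJSON('{"items": ["a", "b", "c');
// e.g. { items: ["a", "b", "c"] }, with the last element still open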

Knowing why a stream ended

Both stream hooks return finishReason: "stop" | "length" | "tool_use" | undefined. It stays undefined while streaming and is set when the stream emits its terminal finish chunk, so you can distinguish a clean completion from a token-limit truncation or a tool-call handoff. If the stream errored, error is set and finishReason stays undefined — the two are mutually exclusive.

const { message, isStreaming, finishReason, error } = useStreamingMessage(stream);

if (error) return <Error message={error.message} />;
if (!isStreaming && finishReason === "length") return <Truncated message={message} />;
if (!isStreaming && finishReason === "tool_use") return <RunTool message={message} />;

Cancellation

Pass an AbortSignal to stop a stream from outside the component. Unmounting also cancels — both paths call iter.return() on the iterator so producers get a chance to release resources.

const controller = useMemo(() => new AbortController(), []);
const { message } = useStreamingMessage(stream, controller.signal);
// later: controller.abort();

Providers

react-partial-stream doesn't talk to any LLM directly. You bring the stream — official adapters for Anthropic, OpenAI, etc. are planned as separate packages so the core stays tiny and dependency-free.

Reference adapters for OpenAI and Anthropic live in examples/adapters/ — copy them into your project or use them as templates for other providers. They map each provider's native chunk type to the StreamChunk shape the hooks consume.
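
For a sense of scale, an adapter is roughly a loop that translates provider deltas. The sketch below uses assumed StreamChunk variant names (text-delta, tool-call-delta, finish), which may differ from the shipped shape; treat examples/adapters/ as the source of truth:

import type OpenAI from "openai";

// Rough adapter sketch: maps OpenAI chat-completion chunks to the StreamChunk
// shape the hooks consume. Variant and field names here are assumptions.
export async function* fromOpenAIStream(
  completion: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>
) {
  for await (const chunk of completion) {
    const choice = chunk.choices[0];
    if (choice?.delta?.content) {
      yield { type: "text-delta", text: choice.delta.content };
    }
    for (const call of choice?.delta?.tool_calls ?? []) {
      yield {
        type: "tool-call-delta",
        index: call.index,
        id: call.id,
        name: call.function?.name,
        argsDelta: call.function?.arguments,
      };
    }
    if (choice?.finish_reason) {
      // A real adapter would also normalize reasons, e.g. "tool_calls" -> "tool_use".
      yield { type: "finish", reason: choice.finish_reason };
    }
  }
}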

React Native

The hooks are platform-agnostic — they only use React core, AbortController, and async iterators, all supported by Hermes. Render to <View>/<Text> instead of the web tags shown in the examples.
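
A minimal sketch of the Quick start component on React Native, using the same hook with native primitives:

import { View, Text } from "react-native";
import { useStreamingMessage } from "react-partial-stream";
import type { StreamSource } from "react-partial-stream";

function Assistant({ stream }: { stream: StreamSource }) {
  const { message, isStreaming } = useStreamingMessage(stream);

  return (
    <View>
      {message.content.map((block, i) =>
        block.type === "text" || block.type === "thinking" ? (
          <Text key={i}>{block.text}</Text>
        ) : null
      )}
      {isStreaming && <Text accessibilityLabel="streaming">▍</Text>}
    </View>
  );
}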

The catch is getting a stream into RN: fetch on RN doesn't support streaming response bodies out of the box, so you'll typically need a polyfill like react-native-fetch-api or proxy through your backend. Once you produce an AsyncIterable<StreamChunk>, the hooks work identically.

License

MIT