
@tanstack/ai-solid

v0.2.2 · Solid hooks for TanStack AI

@tanstack/ai-react

React hooks for building AI chat interfaces with TanStack AI.

Installation

npm install @tanstack/ai-react @tanstack/ai-client

useChat Hook

The useChat hook manages chat state, handles streaming responses, and provides a complete chat interface in a single hook.

Design Philosophy (v5 API):

  • You control input state
  • Just call sendMessage() when ready
  • No form-centric API - use buttons, keyboard events, or any trigger
  • More flexible and less opinionated

Basic Usage

import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";

function ChatComponent() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
  });

  const [input, setInput] = useState("");

  const handleSend = () => {
    sendMessage(input);
    setInput("");
  };

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}

      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && handleSend()}
        disabled={isLoading}
      />
      <button onClick={handleSend} disabled={isLoading || !input.trim()}>
        Send
      </button>
    </div>
  );
}

API

Options

interface UseChatOptions {
  // Connection adapter (required)
  connection: ConnectionAdapter

  // Configuration
  initialMessages?: UIMessage[] // Starting messages
  id?: string // Unique chat ID
  body?: Record<string, any> // Extra data to send

  // Callbacks
  onResponse?: (response?: Response) => void
  onChunk?: (chunk: StreamChunk) => void
  onFinish?: (message: UIMessage) => void
  onError?: (error: Error) => void
}

Return Value

interface UseChatReturn {
  messages: UIMessage[] // Current conversation
  sendMessage: (content: string) => Promise<void> // Send a message
  append: (message) => Promise<void> // Add message programmatically
  reload: () => Promise<void> // Reload last response
  stop: () => void // Stop current generation
  isLoading: boolean // Is generating a response
  error: Error | undefined // Current error
  setMessages: (messages) => void // Set messages manually
  clear: () => void // Clear all messages
}
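
The return value composes with plain functions. As one illustration, a small hypothetical helper (not part of the library) can pair sendMessage with stop to cap how long a generation may run:

```typescript
// Hypothetical helper (not a library API): wraps sendMessage so that stop()
// is invoked automatically if the generation runs longer than `ms`.
function sendWithTimeout(
  chat: { sendMessage: (content: string) => Promise<void>; stop: () => void },
  ms: number,
): (content: string) => Promise<void> {
  return async (content) => {
    const timer = setTimeout(() => chat.stop(), ms)
    try {
      await chat.sendMessage(content)
    } finally {
      // Always clear the timer, whether the send resolved or threw
      clearTimeout(timer)
    }
  }
}
```

Used as `const send = sendWithTimeout(chat, 15_000)` followed by `await send('Hello!')`, this aborts any response that takes longer than fifteen seconds.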

Connection Adapters

Connection adapters provide flexible streaming over different transports: SSE over fetch, server functions, or anything you implement yourself.

Quick Examples

SSE (Most Common):

import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})

Server Functions:

import { useChat, stream } from '@tanstack/ai-react'

const chat = useChat({
  connection: stream((messages) => serverChatFunction({ messages })),
})

Custom (e.g., WebSockets):

import { useChat } from '@tanstack/ai-react'
import type { ConnectionAdapter } from '@tanstack/ai-client'

const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Your WebSocket logic
  },
}

const chat = useChat({ connection: wsAdapter })
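
The connect method is an async generator that yields stream chunks as they arrive. One way to flesh out the WebSocket stub, assuming (illustratively) a server that replies with StreamChunk objects as JSON text frames and closes the socket when the turn ends; the ConnectionAdapter shape is restated locally so the sketch is self-contained:

```typescript
// Structural stand-in for the ConnectionAdapter type from @tanstack/ai-client,
// declared locally so this sketch runs on its own.
type ConnectionAdapter = {
  connect: (messages: unknown[]) => AsyncGenerator<unknown>
}

const wsAdapter: ConnectionAdapter = {
  async *connect(messages) {
    // Illustrative URL; the wire format below is an assumption, not the
    // library's protocol.
    const ws = new WebSocket('wss://example.com/chat')

    // Buffer incoming frames so the generator can yield them one at a time
    const queue: unknown[] = []
    let closed = false
    let wake: (() => void) | undefined

    ws.onmessage = (event) => {
      queue.push(JSON.parse(event.data as string))
      wake?.()
    }
    ws.onclose = () => {
      closed = true
      wake?.()
    }
    ws.onopen = () => ws.send(JSON.stringify({ messages }))

    while (!closed || queue.length > 0) {
      if (queue.length === 0) {
        // Sleep until a frame arrives or the socket closes
        await new Promise<void>((resolve) => (wake = resolve))
      } else {
        yield queue.shift()
      }
    }
  },
}
```

Note that the generator body does not run until the client iterates it, so no connection is opened until a message is actually sent; error handling (ws.onerror) is omitted for brevity.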

Backend Endpoint

Your backend should use the chat() method, which automatically handles tool execution in a loop:

  1. Receive POST requests with this body:
{
  messages: Message[];
  data?: Record<string, any>;
}
  2. Use chat() to stream responses (with automatic tool execution):
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

export async function POST(request: Request) {
  const { messages } = await request.json()

  const stream = chat({
    adapter: openaiText(),
    model: 'gpt-4o',
    messages,
    tools: [weatherTool], // Optional: auto-executed in loop
    agentLoopStrategy: maxIterations(5), // Optional: control loop
  })

  // Convert to HTTP streaming response with SSE headers
  return toServerSentEventsResponse(stream)
}

The response streams StreamChunk objects as Server-Sent Events:

data: {"type":"content","delta":"Hello","content":"Hello",...}
data: {"type":"tool_call","toolCall":{...},...}
data: {"type":"tool_result","toolCallId":"...","content":"...",...}
data: {"type":"content","delta":" world","content":"Hello world",...}
data: {"type":"done","finishReason":"stop","usage":{...}}
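
If you ever consume this stream without fetchServerSentEvents, standard SSE parsing applies. A minimal sketch that extracts the JSON payload from data: lines:

```typescript
// Minimal sketch: parse raw SSE text into StreamChunk-shaped objects.
// Assumes each event is a single `data:` line carrying JSON, as shown above;
// a production parser would also handle multi-line data fields and event ids.
function parseSseChunks(raw: string): Array<Record<string, unknown>> {
  return raw
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => JSON.parse(line.slice('data:'.length).trim()))
}
```

For example, `parseSseChunks('data: {"type":"done"}')` returns `[{ type: 'done' }]`.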

Note: The chat() method automatically executes tools and emits tool_result chunks - you don't need to handle tool execution manually!

Advanced Usage

With Callbacks

import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const { messages, sendMessage } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
  onChunk: (chunk) => {
    if (chunk.type === 'content') {
      console.log('New token:', chunk.delta)
    }
  },
  onFinish: (message) => {
    console.log('Final message:', message)
    // Save to database, log analytics, etc.
  },
  onError: (error) => {
    console.error('Chat error:', error)
    // Show toast notification, log error, etc.
  },
})

// Send messages programmatically
await sendMessage('Tell me a joke')

Flexible Triggering

import { useChat, fetchServerSentEvents } from "@tanstack/ai-react";
import { useState } from "react";

const { sendMessage, isLoading } = useChat({
  connection: fetchServerSentEvents("/api/chat")
});
const [input, setInput] = useState("");

// Button click
<button onClick={() => sendMessage(input)}>Send</button>

// Enter key
<input onKeyDown={(e) => e.key === "Enter" && sendMessage(input)} />

// Voice input
<button onClick={async () => {
  const transcript = await voiceToText();
  sendMessage(transcript);
}}>🎤 Speak</button>

// Predefined prompts
<button onClick={() => sendMessage("Explain quantum computing")}>
  Ask about quantum computing
</button>

With Custom Headers

import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const chat = useChat({
  connection: fetchServerSentEvents('/api/chat', {
    headers: {
      Authorization: `Bearer ${token}`,
      'X-Custom-Header': 'value',
    },
  }),
  body: {
    userId: '123',
    sessionId: 'abc',
  },
})

Programmatic Control

import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

const { messages, sendMessage, append, reload, stop, clear } = useChat({
  connection: fetchServerSentEvents('/api/chat'),
})

// Send a simple message
await sendMessage('Hello!')

// Add a message with more control
await append({
  role: 'user',
  content: 'Hello!',
  id: 'custom-id',
})

// Reload the last AI response
await reload()

// Stop the current generation
stop()

// Clear all messages
clear()

Multiple Chats

import { useChat, fetchServerSentEvents } from '@tanstack/ai-react'

function App() {
  const chat1 = useChat({
    id: 'chat-1',
    connection: fetchServerSentEvents('/api/chat'),
  })
  const chat2 = useChat({
    id: 'chat-2',
    connection: fetchServerSentEvents('/api/chat'),
  })

  // Each hook manages independent state
}

Example Backend (Node.js/Express)

import express from 'express'
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const app = express()
app.use(express.json())

app.post('/api/chat', async (req, res) => {
  const { messages } = req.body

  // One line to create a streaming response!
  const stream = chat({
    adapter: openaiText(),
    model: 'gpt-4o',
    messages,
  })

  const response = toServerSentEventsResponse(stream)

  // Copy headers and stream to Express response
  response.headers.forEach((value, key) => {
    res.setHeader(key, value)
  })

  const reader = response.body?.getReader()
  if (reader) {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      res.write(value)
    }
  }
  res.end()
})

app.listen(3000)

Example Backend (Next.js App Router)

// app/api/chat/route.ts
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // One line!
  return toServerSentEventsResponse(
    chat({
      adapter: openaiText(),
      model: 'gpt-4o',
      messages,
    }),
  )
}

Example Backend (TanStack Start)

import { createFileRoute } from '@tanstack/react-router'
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { anthropicText } from '@tanstack/ai-anthropic'

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages } = await request.json()

        // One line with automatic tool execution!
        return toServerSentEventsResponse(
          chat({
            adapter: anthropicText(),
            model: 'claude-sonnet-4-20250514',
            messages,
            tools, // Tools with execute functions
          }),
        )
      },
    },
  },
})

TypeScript Types

All types are fully exported:

import type {
  UIMessage,
  UseChatOptions,
  UseChatReturn,
  ChatRequestBody,
} from '@tanstack/ai-react'

Features

  • ✅ Automatic message state management
  • ✅ Streaming response handling
  • ✅ Loading and error states
  • ✅ Simple sendMessage() API (v5 style)
  • ✅ You control input state (flexible)
  • ✅ Abort/stop generation
  • ✅ Reload last response
  • ✅ Clear conversation
  • ✅ Custom headers and body data (via connection adapter options)
  • ✅ Callback hooks for lifecycle events
  • ✅ Multiple concurrent chats
  • ✅ Full TypeScript support

License

MIT