@autoagents/agent-sdk

A lightweight, type-safe streaming chat client SDK for AutoAgents (AA) conversations. Built with TypeScript and native Web APIs.

Features

  • 🔄 Streaming Support - Real-time SSE (Server-Sent Events) streaming
  • 🎯 Type Safe - Full TypeScript support with comprehensive type definitions
  • 🪶 Zero Dependencies - Uses native browser APIs only
  • ⚡️ Async Generator - Modern AsyncGenerator pattern for clean stream handling
  • 🎨 Rich Metadata - Supports reasoning content, thinking time, and custom metadata
  • 🛡️ Flexible Error Handling - Simple try-catch for basic needs, advanced lifecycle callbacks for complex scenarios
  • 🎛️ Lifecycle Hooks (Optional) - onopen, onerror, and onclose callbacks for advanced control
  • 🔍 Multi-Layer Data Access - Access parsed data, raw JSON, or complete SSE lines for debugging and custom processing

Installation

npm install @autoagents/agent-sdk
yarn add @autoagents/agent-sdk
pnpm add @autoagents/agent-sdk
bun add @autoagents/agent-sdk

Quick Start

Basic Usage

import { chat } from '@autoagents/agent-sdk';

// Simple and clean API
try {
  for await (const { messages } of chat('https://api.example.com/chat', {
    token: 'your-auth-token',
    body: {
      agentId: 'agent-123',
      userChatInput: 'Hello, how are you?',
    },
  })) {
    console.log('Messages:', messages);
  }
  console.log('Chat completed successfully');
} catch (error) {
  console.error('Chat error:', error);
} finally {
  console.log('Chat ended');
}

With Conversation Context

import { chat } from '@autoagents/agent-sdk';

try {
  for await (const { messages, conversationId, chatId } of chat('https://api.example.com/chat', {
    token: 'your-auth-token',
    body: {
      agentId: 'agent-123',
      userChatInput: 'Hello, how are you?',
    },
  })) {
    console.log('Current messages:', messages);
    console.log('Conversation ID:', conversationId);
    console.log('Chat ID:', chatId);
  }
} catch (error) {
  console.error('Chat failed:', error);
}

With Cancellation

import { chat } from '@autoagents/agent-sdk';

const controller = new AbortController();

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);

try {
  for await (const { messages } of chat('https://api.example.com/chat', {
    token: 'your-auth-token',
    body: {
      agentId: 'agent-123',
      userChatInput: 'Hello, how are you?',
    },
    signal: controller.signal,
  })) {
    console.log('Messages:', messages);
  }
} catch (error) {
  console.error('Chat error:', error);
} finally {
  // Cleanup regardless of success or failure
  hideLoadingIndicator();
}

Error Handling Strategies

When to Use try-catch-finally

Use try-catch-finally for most scenarios:

import { chat } from '@autoagents/agent-sdk';

try {
  for await (const { messages } of chat(url, options)) {
    updateUI(messages);
  }
  // Success: Loop completed normally
  showSuccessMessage();
} catch (error) {
  // Error: Something went wrong
  showErrorMessage(error.message);
} finally {
  // Always: Cleanup regardless of outcome
  hideLoadingIndicator();
}

Pros:

  • ✅ Simple and familiar pattern
  • ✅ Works for 90% of use cases
  • ✅ Easy to understand and maintain

Cons:

  • ❌ Cannot distinguish between completion types (normal vs aborted)
  • ❌ Need to parse error messages for specific HTTP status codes
  • ❌ Less type-safe error handling

When to Use onclose Callback

Use onclose when you need fine-grained control:

import { chat, StreamCloseCode } from '@autoagents/agent-sdk';

for await (const { messages } of chat(url, {
  ...options,
  onclose: ({ code, statusCode }) => {
    if (code === StreamCloseCode.HTTP_ERROR && statusCode === 401) {
      redirectToLogin();
    } else if (code === StreamCloseCode.COMPLETED) {
      showSuccessMessage();
    } else if (code === StreamCloseCode.ABORTED) {
      showCancelMessage();
    }
  },
})) {
  updateUI(messages);
}

Pros:

  • ✅ Type-safe error codes with StreamCloseCode
  • ✅ Direct access to HTTP status codes
  • ✅ Can distinguish normal completion from user cancellation
  • ✅ Better for complex error handling scenarios

Cons:

  • ❌ More verbose
  • ❌ Overkill for simple applications

Decision Guide

| Scenario | Recommended Approach |
|----------|---------------------|
| Simple chat widget | try-catch-finally |
| Need to distinguish 401 vs 429 vs 500 | onclose callback |
| Need to show different UI for "completed" vs "cancelled" | onclose callback |
| Just need basic error handling | try-catch-finally |
| Complex enterprise application | onclose callback |

API Reference

chat(url, options)

Main function to initiate a streaming chat conversation. Simple, clean, and powerful.

Parameters:

  • url (string): The API endpoint URL
  • options (object):
    • token (string): Authentication token
    • body (ChatRequestBody): Request payload
    • signal (AbortSignal, optional): Abort signal for cancellation
    • onopen (() => void, optional): Callback when connection successfully opens
    • onerror ((error: Error) => void, optional): Callback when an error occurs (may be called multiple times)
    • onclose ((info: CloseInfo) => void, optional): Callback when connection closes (called once)

Returns: AsyncGenerator<ChatMessageStreamYield>
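
Putting the parameters together, a minimal annotated call might look like the sketch below (the endpoint and token are placeholders, and it assumes ChatMessageStreamYield is exported alongside chat):

import { chat, ChatMessageStreamYield } from '@autoagents/agent-sdk';

// Every field besides token and body is optional
const stream: AsyncGenerator<ChatMessageStreamYield> = chat('https://api.example.com/chat', {
  token: 'your-auth-token',
  body: { agentId: 'agent-123', userChatInput: 'Hi!' },
  // signal, onopen, onerror, onclose may also be supplied (see above)
});

for await (const { messages } of stream) {
  console.log(messages);
}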

Types

ChatRequestBody

interface ChatRequestBody {
  agentId: string;
  chatId?: string;
  userChatInput: string;
  files?: { fileId: string; fileName: string; fileUrl: string; }[];
  images?: { url: string }[];
  kbIdList?: number[];
  database?: {
    databaseUuid: string;
    tableNames: string[];
  };
  state?: any;
  trialOperation?: boolean;
}
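
As an illustration, a request body exercising the optional fields might look like this (all IDs, names, and URLs are placeholder values, and it assumes ChatRequestBody is exported as a type):

import { ChatRequestBody } from '@autoagents/agent-sdk';

const body: ChatRequestBody = {
  agentId: 'agent-123',
  chatId: 'chat-456',              // Continue an existing chat
  userChatInput: 'Summarize the attached report',
  files: [{ fileId: 'file-1', fileName: 'report.pdf', fileUrl: 'https://example.com/report.pdf' }],
  images: [{ url: 'https://example.com/chart.png' }],
  kbIdList: [42],
  database: { databaseUuid: 'db-0000', tableNames: ['orders'] },
};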

ChatMessage

interface ChatMessage {
  content: string;
  role: "assistant" | "user";
  messageId: string;
  loading: boolean;
  metadata?: Record<string, {
    complete: boolean;
    result?: any[];
    type?: string;
  }>;
  type: "text" | "table" | "buttons" | "result_file";
  reasoningContent?: string;
  thinkingElapsedMillSecs?: number;
  __raw?: any;
}
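
For example, a small display helper built on these fields might look like the following sketch (the formatting choices are illustrative, not part of the SDK):

import { ChatMessage } from '@autoagents/agent-sdk';

// Render one message as plain text, including optional reasoning output
function renderMessage(msg: ChatMessage): string {
  const thinking = msg.reasoningContent
    ? ` [thought for ${((msg.thinkingElapsedMillSecs ?? 0) / 1000).toFixed(1)}s: ${msg.reasoningContent}]`
    : '';
  return `${msg.role}: ${msg.content}${thinking}${msg.loading ? ' …' : ''}`;
}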

ChatMessageStreamYield

interface ChatMessageStreamYield {
  messages: ChatMessage[];       // Array of accumulated messages
  conversationId?: string;        // Conversation identifier
  chatId?: string;                // Chat identifier for continuation
  chunk?: ChatStreamChunk;        // Parsed chunk object
  rawChunk?: string;              // Raw JSON text (without "data:" prefix)
  sseLine?: string;               // Complete SSE line (with "data:" prefix)
  error?: {                       // Parse error (non-fatal, stream continues)
    message: string;              // Error message
    failedInput?: string;         // The raw input that failed to parse
    cause?: unknown;              // Original error object
  };
}

Understanding Data Layers:

The SDK exposes three levels of data access for advanced use cases:

  • chunk - Parsed JSON object (recommended for most cases)
    • Example: {content: "hello", complete: false}
    • Direct access to structured data
  • rawChunk - Raw JSON string without SSE prefix
    • Example: "{\"content\":\"hello\",\"complete\":false}"
    • Useful for custom parsing or logging
  • sseLine - Complete SSE protocol line
    • Example: "data: {\"content\":\"hello\",\"complete\":false}"
    • Useful for protocol-level debugging

For detailed examples and use cases, see Working with Stream Data Layers.

StreamCloseCode

Error codes for stream closure:

const StreamCloseCode = {
  COMPLETED: 'COMPLETED',           // Normal completion
  ABORTED: 'ABORTED',               // User cancelled
  HTTP_ERROR: 'HTTP_ERROR',         // HTTP non-2xx status
  NETWORK_ERROR: 'NETWORK_ERROR',   // Network failure
  STREAM_ERROR: 'STREAM_ERROR',     // Stream processing error
  UNKNOWN_ERROR: 'UNKNOWN_ERROR',   // Unrecognized error
} as const;

CloseInfo

Information provided to onclose callback:

interface CloseInfo {
  code: StreamCloseCode;      // Error code (see above)
  message: string;            // Human-readable description
  error?: Error;              // Error object (if applicable)
  statusCode?: number;        // HTTP status code (only for HTTP_ERROR)
}

Working with Stream Data Layers

The SDK provides three levels of data access from the SSE stream. Each level serves different use cases, from direct data access to low-level protocol debugging.

Understanding the Three Layers

SSE Server Response:
"data: {\"content\":\"hello\",\"complete\":false}\n"
         ↓
      sseLine (raw SSE line)
         ↓
      rawChunk (JSON string)
         ↓
      chunk (parsed object)

Layer 1: chunk - Parsed Data (Recommended)

What it is: The parsed JSON object from the stream, ready to use.

When to use:

  • ✅ Direct access to message content
  • ✅ Normal application logic
  • ✅ Most use cases (90%+)

Example:

import { chat } from '@autoagents/agent-sdk';

for await (const { messages, chunk } of chat(url, options)) {
  if (chunk) {
    console.log('Content:', chunk.content);
    console.log('Complete:', chunk.complete);
    console.log('Finish:', chunk.finish);
    
    // Access metadata if present
    if (chunk.metadata) {
      console.log('Metadata:', chunk.metadata);
    }
  }
  
  // Update UI with accumulated messages
  updateUI(messages);
}

Layer 2: rawChunk - Raw JSON String

What it is: The JSON string before parsing, without the SSE "data:" prefix.

When to use:

  • ✅ Custom JSON parsing logic
  • ✅ Storing raw responses for replay/debugging
  • ✅ Forwarding to another system that expects JSON strings

Example:

import { chat } from '@autoagents/agent-sdk';

const rawChunks: string[] = [];

for await (const { messages, rawChunk, error } of chat(url, options)) {
  if (rawChunk) {
    // Store raw JSON for debugging
    rawChunks.push(rawChunk);
    
    // Forward to logging service
    logService.logStreamChunk(rawChunk);
    
    // Custom parsing (if needed)
    try {
      const customParsed = JSON.parse(rawChunk);
      processCustomFields(customParsed);
    } catch (err) {
      console.error('Custom parsing failed:', err);
    }
  }
  
  // Handle parse errors
  if (error) {
    console.warn('SDK parse error, but raw data available:', rawChunk);
  }
}

// Save session for replay
saveSession({ chunks: rawChunks });

Layer 3: sseLine - Complete SSE Protocol Line

What it is: The complete Server-Sent Events protocol line, including the "data:" prefix.

When to use:

  • ✅ Protocol-level debugging
  • ✅ SSE proxy/forwarding
  • ✅ Recording exact server responses
  • ✅ Analyzing SSE stream issues

Example:

import { chat } from '@autoagents/agent-sdk';
import fs from 'node:fs'; // Used below to persist the debug log (Node.js environments)

const sseLog: string[] = [];

for await (const { messages, sseLine, chunk, error } of chat(url, options)) {
  if (sseLine) {
    // Log exact SSE protocol lines
    sseLog.push(`[${new Date().toISOString()}] ${sseLine}`);
    
    // Debug SSE stream
    if (process.env.NODE_ENV === 'development') {
      console.log('SSE:', sseLine);
    }
    
    // Forward to SSE proxy
    if (proxyClient) {
      proxyClient.send(sseLine);
    }
  }
  
  // Detect stream issues
  if (sseLine && !chunk && !error) {
    console.warn('SSE line received but not parsed:', sseLine);
  }
}

// Save complete SSE session for debugging
fs.writeFileSync('sse-debug.log', sseLog.join('\n'));

Practical Use Cases

Use Case 1: Debugging Parse Errors

for await (const { messages, chunk, rawChunk, sseLine, error } of chat(url, options)) {
  if (error) {
    console.error('Parse Error:', error.message);
    console.error('SSE Line:', sseLine);           // See exact server output
    console.error('Raw JSON:', rawChunk);          // See what we tried to parse
    console.error('Failed at:', error.failedInput); // See what failed
    
    // Report to error tracking
    reportError({
      type: 'sse_parse_error',
      sseLine,
      rawChunk,
      error: error.message,
    });
  }
}

Use Case 2: Custom Metrics and Monitoring

let chunkCount = 0;
let totalBytes = 0;
const startTime = Date.now();

for await (const { messages, rawChunk, chunk } of chat(url, options)) {
  if (rawChunk) {
    chunkCount++;
    totalBytes += rawChunk.length;
    
    // Calculate streaming metrics
    const elapsed = Date.now() - startTime;
    const throughput = totalBytes / (elapsed / 1000); // bytes per second
    
    console.log(`Chunks: ${chunkCount}, Throughput: ${throughput.toFixed(2)} B/s`);
  }
  
  if (chunk?.complete) {
    // Message completed, update metrics
    metricsService.recordMessageComplete({
      chunks: chunkCount,
      bytes: totalBytes,
      duration: Date.now() - startTime,
    });
  }
}

Use Case 3: Recording and Replay

// Recording phase
const session = {
  timestamp: Date.now(),
  chunks: [] as string[],
};

for await (const { messages, rawChunk } of chat(url, options)) {
  if (rawChunk) {
    session.chunks.push(rawChunk);
  }
}

saveSession(session);

// Replay phase (simulate streaming from recorded data)
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function* replaySession(recorded: typeof session) {
  for (const rawChunk of recorded.chunks) {
    const chunk = JSON.parse(rawChunk);
    yield { chunk, rawChunk };
    await sleep(50); // Simulate network delay
  }
}

Decision Guide

| Need | Use Layer | Reason |
|------|-----------|--------|
| Display message content | chunk | Direct access, already parsed |
| Store for replay | rawChunk | Compact, easy to re-parse |
| Debug SSE protocol | sseLine | See exact server output |
| Forward to another API | rawChunk | JSON string format |
| Calculate stream metrics | rawChunk | Easy byte counting |
| Analyze parsing issues | All three | Compare different layers |

Advanced Usage

Lifecycle Callbacks (Advanced Feature)

The SDK provides lifecycle callbacks (onopen, onerror, onclose) for fine-grained control over the stream lifecycle. This is an advanced feature - most applications should use try-catch-finally instead.

Complete Example with All Features

import { chat, StreamCloseCode } from '@autoagents/agent-sdk';

const controller = new AbortController();
let parseErrorCount = 0;

try {
  for await (const { messages, conversationId, error } of chat('https://api.example.com/chat', {
    token: 'your-token',
    body: { agentId: 'agent-123', userChatInput: 'Hello!' },
    signal: controller.signal,
    
    onopen: () => {
      console.log('Connected');
      showLoadingIndicator();
    },
    
    onerror: (err) => {
      // Called for both fatal and non-fatal errors
      console.error('Error:', err.message);
      showErrorToast(err.message);
    },
    
    onclose: ({ code, message, statusCode }) => {
      hideLoadingIndicator();
      
      if (code === StreamCloseCode.HTTP_ERROR && statusCode === 401) {
        redirectToLogin();
      } else if (code === StreamCloseCode.NETWORK_ERROR) {
        showRetryButton();
      } else if (code === StreamCloseCode.UNKNOWN_ERROR) {
        reportCriticalError(message);
      }
    },
  })) {
    // Handle non-fatal parse errors
    if (error) {
      parseErrorCount++;
      console.warn(`Parse error #${parseErrorCount}:`, error.message);
      
      // Optional: Show warning if too many parse errors
      if (parseErrorCount > 5) {
        showWarning('Multiple parse errors detected');
      }
      
      // Continue processing despite error
    }
    
    // Update UI with messages
    updateMessages(messages);
    
    // Save conversation ID for next request
    if (conversationId) {
      saveConversationId(conversationId);
    }
  }
  
  console.log('Stream completed successfully');
  
} catch (fatalError) {
  console.error('Fatal error:', fatalError);
  showErrorDialog('Connection failed');
}

// User can cancel anytime
stopButton.onclick = () => controller.abort();

Detailed Lifecycle Callback Example

Use this pattern when you need to handle different error types differently:

import { chat, StreamCloseCode } from '@autoagents/agent-sdk';

const controller = new AbortController();

try {
  for await (const { messages, error } of chat('https://api.example.com/chat', {
    token: 'your-token',
    body: { agentId: 'agent-123', userChatInput: 'Hello!' },
    signal: controller.signal,
    
    onopen: () => {
      console.log('🟢 Connection established');
      // Show loading indicator
    },
    
    onerror: (error) => {
      console.error('🔴 Error occurred:', error.message);
      // Show error notification (note: stream may continue for non-fatal errors)
    },
    
    onclose: ({ code, message, error, statusCode }) => {
      console.log('⚪ Connection closed:', code, message);
      
      switch (code) {
        case StreamCloseCode.COMPLETED:
          console.log('✅ Stream completed successfully');
          break;
          
        case StreamCloseCode.ABORTED:
          console.log('🚫 User cancelled the request');
          break;
          
        case StreamCloseCode.HTTP_ERROR:
          console.error(`❌ HTTP error: ${statusCode}`);
          if (statusCode === 401) {
            // Redirect to login
          } else if (statusCode === 429) {
            // Show rate limit message
          }
          break;
          
        case StreamCloseCode.NETWORK_ERROR:
          console.error('📡 Network connection failed');
          // Show retry option
          break;
          
        case StreamCloseCode.STREAM_ERROR:
          console.error('⚠️ Stream processing error');
          // Report bug
          break;
          
        case StreamCloseCode.UNKNOWN_ERROR:
          console.error('❓ Unknown error:', error);
          // Report critical error
          break;
      }
    },
  })) {
    // Handle non-fatal parse errors (optional)
    if (error) {
      console.warn('Parse error (stream continues):', error.message);
    }
    
    // Update UI with messages
    console.log('Messages:', messages);
  }
} catch (fatalError) {
  console.error('Fatal error:', fatalError);
}

Error Handling Reference

This section provides detailed documentation for the onclose callback and error codes. For most applications, basic try-catch-finally is sufficient (see Error Handling Strategies).

Stream Close Codes

When a stream closes, the onclose callback receives a detailed status with one of the following codes:

| Code | Description | Fatal | Typical Cause |
|------|-------------|-------|---------------|
| COMPLETED | Stream completed successfully | No | Normal stream end |
| ABORTED | User cancelled the request | Yes | controller.abort() called |
| HTTP_ERROR | Server returned non-2xx status | Yes | 401, 404, 500, etc. |
| NETWORK_ERROR | Network connection failed | Yes | No internet, DNS failure, timeout |
| STREAM_ERROR | Stream processing error | Yes | Invalid stream format, reader error |
| UNKNOWN_ERROR | Unrecognized error type | Yes | Unexpected errors that need investigation |

Error Types

1. Fatal Errors (Stream Stops)

These errors will terminate the stream and trigger onclose:

// HTTP errors
{ code: 'HTTP_ERROR', statusCode: 401, message: 'HTTP error: 401 Unauthorized' }

// Network errors
{ code: 'NETWORK_ERROR', message: 'Network connection failed', error: TypeError }

// Stream errors
{ code: 'STREAM_ERROR', message: 'No response body', error: Error }

// User abort
{ code: 'ABORTED', message: 'Stream aborted by user' }

// Unknown errors
{ code: 'UNKNOWN_ERROR', message: 'Unknown error: CustomError: ...', error: Error }

2. Non-Fatal Errors (Stream Continues)

Parse errors occur when a single SSE message cannot be parsed as valid JSON. These errors:

  • Don't stop the stream - Subsequent messages continue to be processed
  • Trigger onerror callback - So you can log/report them
  • Are yielded in the result - Available in the error field

for await (const { messages, error } of chat(url, options)) {
  if (error) {
    // This specific message failed to parse, but stream continues
    console.warn('Parse error:', error.message);
    
    // Access the raw data that failed to parse
    console.warn('Failed input:', error.failedInput);
    
    // Optional: Report to error tracking
    reportError(error.cause, {
      context: 'sse_parse_error',
      input: error.failedInput,
    });
  }
  
  // Process messages normally (even if error occurred)
  updateUI(messages);
}

Example scenario:

data: {"content":"Hello"}           ✅ Parsed successfully
data: {"content":"Wor                ❌ Invalid JSON - yields error, stream continues
data: {"content":"ld"}              ✅ Parsed successfully
data: [DONE]                        ✅ Stream completes normally

Error object structure:

{
  message: "Unexpected token in JSON at position 10",
  failedInput: "data: {\"content\":\"Wor",
  cause: SyntaxError // Original error
}

Handling Specific Scenarios

Unauthorized (401)

onclose: ({ code, statusCode }) => {
  if (code === StreamCloseCode.HTTP_ERROR && statusCode === 401) {
    // Token expired, redirect to login
    window.location.href = '/login';
  }
}

Rate Limiting (429)

onclose: ({ code, statusCode }) => {
  if (code === StreamCloseCode.HTTP_ERROR && statusCode === 429) {
    showToast('Too many requests. Please wait and try again.');
  }
}

Network Issues

onclose: ({ code }) => {
  if (code === StreamCloseCode.NETWORK_ERROR) {
    showRetryDialog('Network connection failed. Would you like to retry?');
  }
}

Unknown Errors

onclose: ({ code, message, error }) => {
  if (code === StreamCloseCode.UNKNOWN_ERROR) {
    // Report to error tracking service
    reportError(error, { 
      context: 'chat_stream',
      message,
      level: 'critical',
    });
    
    showToast('An unexpected error occurred. Please contact support.');
  }
}

React Integration Example

import { useState } from 'react';
import { chat, ChatMessage } from '@autoagents/agent-sdk';

function ChatComponent() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [conversationId, setConversationId] = useState<string>('');
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string>('');

  const sendMessage = async (input: string) => {
    setIsLoading(true);
    setError('');
    
    try {
      for await (const { messages, conversationId: convId } of chat('https://api.example.com/chat', {
        token: 'your-token',
        body: {
          agentId: 'agent-123',
          userChatInput: input,
          chatId: conversationId,
        },
      })) {
        setMessages(messages);
        if (convId) setConversationId(convId);
      }
    } catch (err) {
      setError(err instanceof Error ? err.message : 'An error occurred');
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div>
      {error && <div className="error">{error}</div>}
      {messages.map((msg) => (
        <div key={msg.messageId}>
          <strong>{msg.role}:</strong> {msg.content}
          {msg.loading && <span>...</span>}
        </div>
      ))}
      {isLoading && <div>Loading...</div>}
    </div>
  );
}

Low-Level APIs

createChatSSEStream(url, options)

Creates a ReadableStream for SSE data.

import { createChatSSEStream } from '@autoagents/agent-sdk';

const stream = await createChatSSEStream('https://api.example.com/chat', {
  token: 'your-token',
  body: { agentId: 'agent-123', userChatInput: 'Hello!' },
});

createChatMessageStream(stream)

Processes a ReadableStream and yields structured messages.

import { createChatSSEStream, createChatMessageStream } from '@autoagents/agent-sdk';

const stream = await createChatSSEStream(url, options);
for await (const result of createChatMessageStream(stream)) {
  console.log(result.messages);
}

Protocol

The SDK expects Server-Sent Events (SSE) in the following format:

data: {"chatId":"123","conversationId":"456","content":"Hello","complete":false,"finish":false,...}
data: {"chatId":"123","conversationId":"456","content":" World","complete":true,"finish":false,...}
data: [DONE]

Stream Markers

  • complete: true - Current message is complete (may have more messages in conversation)
  • finish: true - Entire conversation stream is finished
  • data: [DONE] - Alternative stream termination marker
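
A minimal sketch of reacting to these markers while consuming the stream (field names follow the Protocol lines above; url and options stand in for a real call):

import { chat } from '@autoagents/agent-sdk';

for await (const { chunk } of chat(url, options)) {
  if (chunk?.complete) {
    console.log('Current message complete');      // complete: true
  }
  if (chunk?.finish) {
    console.log('Conversation stream finished');  // finish: true
  }
}
// A terminating data: [DONE] line ends the stream, so the loop simply exits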

Browser Support

This package uses native browser APIs:

  • fetch API
  • ReadableStream API
  • TextDecoder API
  • AsyncGenerator support

Requires modern browsers with ES2022 support. For older browsers, use appropriate polyfills.
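
If you need to guard against older environments at runtime, a simple capability check (a sketch, not part of the SDK) could look like:

// Returns false when a required native API is missing
function supportsAgentSdk(): boolean {
  return (
    typeof fetch === 'function' &&
    typeof ReadableStream === 'function' &&
    typeof TextDecoder === 'function'
  );
}

if (!supportsAgentSdk()) {
  console.warn('Load fetch/ReadableStream/TextDecoder polyfills before using @autoagents/agent-sdk');
}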

Publishing

To publish this package to npm:

# First time: Login to npm
npm run login

# Then: One-command publish
npm run publish:now

License

Apache-2.0

Contributing

Contributions are welcome! Please read our contributing guidelines and code of conduct.

Support