Tensorchat Streaming JavaScript Client

A framework-agnostic TypeScript/JavaScript client for the Tensorchat.io streaming API. Process multiple LLM prompts concurrently, with real-time streaming responses and performance-focused buffer handling.

Features

  • Ultra-Fast Streaming: Zero throttling delays with optimized buffer processing
  • Framework Agnostic: Works with vanilla JS, React, Vue, Angular, Svelte, or any framework
  • Real-time UI Updates: Live streaming updates for responsive user interfaces
  • Concurrent Processing: Handle multiple prompts simultaneously with intelligent buffering
  • TypeScript Support: Fully typed for better developer experience
  • Search Integration: Track search progress and completion for enhanced UX
  • Memory Efficient: Automatic buffer cleanup and minimal memory footprint
  • Guaranteed Callbacks: Reliable completion callbacks with complete data

Installation

npm install @tensorchat.io/streaming

Quick Start

import { TensorchatStreaming } from "@tensorchat.io/streaming";

const client = new TensorchatStreaming({
  apiKey: "your-api-key",
  baseUrl: "https://api.tensorchat.ai", // optional
  verbose: false // optional, default false
});

const request = {
  context: "Analyze the following data",
  model: "gpt-4",
  tensors: [
    { messages: "Summarize key trends", search: true },
    { messages: "Extract insights", search: false },
    { messages: "Generate recommendations", search: true }
  ]
};

await client.streamProcess(request, {
  onSearchProgress: (data) => {
    console.log(`Searching for tensor ${data.index}...`);
  },
  
  onSearchComplete: (data) => {
    console.log(`Search complete for tensor ${data.index}`);
  },
  
  onTensorChunk: (data) => {
    // Real-time UI updates - called once per streamed chunk
    console.log(`Tensor ${data.index}: ${data.chunk}`);
    updateUI(data.index, data.chunk); // updateUI: your own rendering function (placeholder)
  },
  
  onTensorComplete: (data) => {
    // Final callback with complete data
    console.log(`Tensor ${data.index} complete`);
    console.log(`Content: ${data.result.content}`);
    console.log(`Total chunks: ${data.streamBuffers.length}`);
  },
  
  onComplete: (data) => {
    console.log("All tensors processed");
  },
  
  onError: (error) => {
    console.error("Processing error:", error);
  }
});

// Clean up when done
client.destroy();

Advanced Usage

React Integration

import React, { useState, useEffect, useRef } from 'react';
import { TensorchatStreaming } from '@tensorchat.io/streaming';

function StreamingComponent() {
  const [tensorContents, setTensorContents] = useState({});
  const [searchStatuses, setSearchStatuses] = useState({});
  const [isProcessing, setIsProcessing] = useState(false);
  const clientRef = useRef(null);

  useEffect(() => {
    clientRef.current = new TensorchatStreaming({
      apiKey: process.env.REACT_APP_TENSORCHAT_API_KEY,
      verbose: true
    });

    return () => clientRef.current?.destroy();
  }, []);

  const processData = async () => {
    setIsProcessing(true);
    setTensorContents({});
    setSearchStatuses({});

    const request = {
      context: "Market analysis context",
      model: "gpt-4",
      tensors: [
        { messages: "Analyze crypto trends", search: true },
        { messages: "Stock market overview", search: true },
        { messages: "Economic predictions", search: false }
      ]
    };

    try {
      await clientRef.current.streamProcess(request, {
        onSearchProgress: (data) => {
          setSearchStatuses(prev => ({
            ...prev,
            [data.index]: 'searching'
          }));
        },

        onSearchComplete: (data) => {
          setSearchStatuses(prev => ({
            ...prev,
            [data.index]: 'complete'
          }));
        },

        onTensorChunk: (data) => {
          setTensorContents(prev => ({
            ...prev,
            [data.index]: (prev[data.index] || '') + data.chunk
          }));
        },

        onTensorComplete: (data) => {
          // Final validation and processing
          console.log(`Tensor ${data.index} final content:`, data.result.content);
        },

        onComplete: () => {
          setIsProcessing(false);
        },

        onError: (error) => {
          console.error('Streaming error:', error);
          setIsProcessing(false);
        }
      });
    } catch (error) {
      console.error('Request failed:', error);
      setIsProcessing(false);
    }
  };

  return (
    <div>
      <button onClick={processData} disabled={isProcessing}>
        {isProcessing ? 'Processing...' : 'Start Analysis'}
      </button>

      {Object.entries(tensorContents).map(([index, content]) => (
        <div key={index} style={{ margin: '20px 0', padding: '15px', border: '1px solid #ccc' }}>
          <h3>
            Tensor {index}
            {searchStatuses[index] === 'searching' && ' (Searching...)'}
            {searchStatuses[index] === 'complete' && ' (Search Complete)'}
          </h3>
          <div style={{ whiteSpace: 'pre-wrap' }}>{content}</div>
        </div>
      ))}
    </div>
  );
}

export default StreamingComponent;

Framework Manager (Alternative)

For more complex applications, use the framework manager:

import { createTensorchatStreaming } from '@tensorchat.io/streaming';

const manager = createTensorchatStreaming({
  apiKey: 'your-api-key',
  verbose: true
});

// Use the manager
await manager.streamProcess(request, callbacks);

// Update configuration
manager.updateConfig({ verbose: false });

// Clean up
manager.destroy();

API Reference

TensorchatStreaming Class

Constructor Options

interface TensorchatConfig {
  apiKey: string;           // Your Tensorchat API key (required)
  baseUrl?: string;         // API endpoint (default: 'https://api.tensorchat.ai')
  verbose?: boolean;        // Enable debug logging (default: false)
}

Methods

streamProcess(request, callbacks)

Process tensors with real-time streaming.

Parameters:

  • request: StreamRequest object
  • callbacks: StreamCallbacks object

Returns: Promise that resolves once all tensors have finished processing (and rejects on a fatal request error).
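
In TypeScript terms, the signature is effectively the following (a sketch; the Promise<void> return type is an assumption consistent with the awaited usage above):

streamProcess(request: StreamRequest, callbacks: StreamCallbacks): Promise<void>;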

destroy()

Clean up resources and clear buffers.
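
Wrapping streamProcess in try/finally guarantees cleanup even when streaming throws (a usage sketch):

try {
  await client.streamProcess(request, callbacks);
} finally {
  client.destroy(); // release buffers whether or not streaming succeeded
}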

Stream Request Format

interface StreamRequest {
  context: string;          // Context for all tensors
  model: string;            // LLM model to use
  tensors: TensorConfig[];  // Array of tensor configurations
}

interface TensorConfig {
  messages: string;         // The prompt/message
  concise?: boolean;        // Request concise response
  model?: string;           // Override model for this tensor
  search?: boolean;         // Enable search for this tensor
}
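
The concise and per-tensor model options don't appear in the Quick Start; a hypothetical request using them might look like this (the "gpt-4-turbo" model name is a placeholder):

const request: StreamRequest = {
  context: "Quarterly report analysis",
  model: "gpt-4", // default model for all tensors
  tensors: [
    { messages: "Summarize revenue drivers", concise: true },  // ask for a short answer
    { messages: "Deep-dive on churn", model: "gpt-4-turbo" },  // per-tensor model override
    { messages: "Recent competitor news", search: true }       // enable search for this tensor
  ]
};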

Stream Callbacks

interface StreamCallbacks {
  onSearchProgress?: (data: StreamEventData) => void;     // Search in progress
  onSearchComplete?: (data: StreamEventData) => void;     // Search completed
  onTensorChunk?: (data: StreamEventData) => void;        // Streaming content chunk
  onTensorComplete?: (data: StreamEventData) => void;     // Tensor complete with final data
  onComplete?: (data: StreamEventData) => void;           // All tensors complete
  onError?: (error: Error) => void;                       // Error handling
}

Stream Event Data

interface StreamEventData {
  type: string;             // Event type
  index?: number;           // Tensor index
  chunk?: string;           // Content chunk (for tensor_chunk events)
  result?: {
    content: string;        // Complete content (for tensor_complete events)
    // ... other result properties
  };
  streamBuffers?: string[]; // All chunks for this tensor (tensor_complete only)
  error?: string;           // Error message (for error events)
  totalTensors?: number;    // Total number of tensors (for complete events)
}

Performance Optimizations

The client includes several performance optimizations for maximum throughput:

  • Zero Throttling: No artificial delays in chunk processing
  • Optimized Buffer Processing: Uses indexOf with a start position for efficient string parsing (see the sketch after this list)
  • Memory Efficient: Automatic buffer cleanup after tensor completion
  • Minimal String Operations: Single join operation per tensor for final content
  • Smart Callback Management: Guaranteed callback ordering with onTensorComplete as final call
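
To illustrate the indexOf-with-start-position idea, here is a minimal sketch of the general technique, assuming newline-delimited JSON events (this is not the library's actual internals):

let buffer = "";

function feed(chunkText) {
  buffer += chunkText;
  let start = 0;
  let newline;
  // Scan forward with an explicit start index instead of re-slicing the
  // buffer on every event, so each network chunk is parsed in one pass.
  while ((newline = buffer.indexOf("\n", start)) !== -1) {
    const line = buffer.slice(start, newline).trim();
    if (line) handleEvent(JSON.parse(line)); // handleEvent: hypothetical consumer
    start = newline + 1;
  }
  buffer = buffer.slice(start); // keep only the unfinished tail
}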

Error Handling

The client provides comprehensive error handling:

await client.streamProcess(request, {
  onError: (error) => {
    console.error('Streaming error:', error.message);
    // Handle different error types
    if (error.message.includes('HTTP 401')) {
      // Handle authentication error
    } else if (error.message.includes('HTTP 429')) {
      // Handle rate limiting
    }
  }
});

Best Practices

  1. Always call destroy() when done to clean up resources
  2. Use onTensorChunk for real-time UI updates
  3. Use onTensorComplete for final processing and validation
  4. Enable search callbacks when using search functionality
  5. Handle errors gracefully with proper error callbacks
  6. Set verbose: true during development for debugging (see the sketch below)
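
For practice 6, one common pattern is to gate verbose on the environment so debug logging never ships to production (a sketch; the NODE_ENV check is an assumption about your build setup):

const client = new TensorchatStreaming({
  apiKey: process.env.TENSORCHAT_API_KEY,
  verbose: process.env.NODE_ENV !== "production" // debug logging only outside production
});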

Browser Compatibility

  • Modern browsers with fetch API support
  • Node.js 14+ (versions before 18 lack a built-in global fetch, so a polyfill such as node-fetch may be required)
  • TypeScript 4.0+

Links & Resources

  • NPM Package: https://www.npmjs.com/package/@tensorchat.io/streaming
  • GitHub Repository: https://github.com/datacorridor/tensorchat-streaming
  • Tensorchat Platform: https://tensorchat.io
  • API Documentation: https://tensorchat.io/#api-docs

License

MIT License - see LICENSE file for details.

Support

Tensorchat.io is a product of Data Corridor Limited.