
@howlabs/openstream

v0.1.0-alpha.2

Published

OpenAI-first streaming library for TypeScript with Responses API support, edge-friendly runtime support, and cost tracking.

Downloads

77

Readme

openstream

OpenAI-first streaming library for TypeScript. Optimized for the OpenAI Responses API with edge-runtime support.

Current Status

@howlabs/openstream is currently an alpha-stage library with:

  • a fluent builder API
  • a streaming engine based on async iterables
  • an OpenAI provider
  • typed errors
  • basic cost tracking and budget middleware
  • an edge-friendly entry point

The following ideas are planned, but are not implemented today:

  • retry middleware with exponential backoff
  • request/response logging middleware
  • rate limit handling
  • comprehensive examples and templates

Multi-provider support, offline sync, and CRDT-based features are not part of the current roadmap.

Why @howlabs/openstream?

A complete request in just a few lines:

```ts
import { stream, openai } from '@howlabs/openstream'

const result = await stream('Tell me a joke')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .model('gpt-4o-mini')
  .run()

console.log(result.content)
```

Available Now

| Feature | Status | Notes |
|---------|--------|-------|
| Fluent builder API | Available | Chainable configuration and execution |
| Streaming engine | Available | `run()` and `stream()` APIs |
| OpenAI provider | Available | Responses API with SSE event parsing |
| Typed errors | Available | Structured error classes and helpers |
| Cost tracking | Available | Cost estimation, tracker, budget middleware |
| Edge entry point | Available | `@howlabs/openstream/edge` export |

Installation

```sh
npm install @howlabs/openstream
```

npm Registry Configuration

Create a project or user .npmrc entry for the @howlabs scope:

```ini
@howlabs:registry=https://npm.pkg.github.com
```

This tells npm to install @howlabs/* packages from GitHub Packages instead of the public npm registry.
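Note that GitHub Packages generally requires authentication even when installing public packages. If installs fail with a 401, you will likely also need an auth token entry in the same `.npmrc`; the token reference below is a placeholder for your own personal access token, not something this package provides:

```ini
@howlabs:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```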

Edge usage

The edge-safe entry point is exported as:

```ts
import { stream, openai } from '@howlabs/openstream/edge'
```

It keeps the same builder/provider API while avoiding Node-only dependencies.

Quick Start

Basic usage

```ts
import { stream, openai } from '@howlabs/openstream'

const result = await stream('Hello, AI!')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .run()

console.log(result.content)
```

Process chunks as they arrive

```ts
import { stream, openai } from '@howlabs/openstream'

const builder = stream('Write a short poem')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .model('gpt-4o-mini')

for await (const chunk of builder.stream()) {
  process.stdout.write(chunk.content)
}
```

Track estimated cost

```ts
import { stream, openai, tracker } from '@howlabs/openstream'

const costs = tracker()

const result = await stream('Generate a short summary')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .with(costs.middleware())
  .run()

console.log(result.content)
console.log(costs.summary())
```

Use the edge entry point

```ts
import { stream, openai } from '@howlabs/openstream/edge'

export default {
  async fetch(request: Request) {
    const result = await stream(await request.text())
      .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
      .model('gpt-4o-mini')
      .run()

    return new Response(result.content, {
      headers: { 'Content-Type': 'text/plain' },
    })
  },
}
```

API Summary

OpenAI support today

The built-in OpenAI provider uses the OpenAI Responses API internally and adapts streamed response events into Chunk objects.

Current behavior:

  • system prompts are forwarded as Responses API instructions
  • user and assistant history are forwarded as input
  • text deltas are exposed as chunk.content
  • final metadata is exposed on the terminal chunk when available
  • terminal metadata may include inputTokens, outputTokens, totalTokens, responseId, requestId, responseStatus, and incompleteReason
  • run() preserves the aggregated terminal metadata on result.metadata

This keeps the public API small while following OpenAI's current API direction.
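The delta-to-chunk adaptation described above can be sketched roughly as follows. The event and `Chunk` shapes here are simplified assumptions for illustration, not the library's actual internal types:

```ts
// Simplified sketch: a Responses API text delta event becomes a Chunk.
// These interfaces are illustrative, not the library's real definitions.

interface ResponsesTextDeltaEvent {
  type: 'response.output_text.delta'
  delta: string
}

interface Chunk {
  content: string
}

function toChunk(event: ResponsesTextDeltaEvent): Chunk {
  // A text delta is exposed verbatim as the chunk's content.
  return { content: event.delta }
}

const chunk = toChunk({ type: 'response.output_text.delta', delta: 'Hello' })
console.log(chunk.content) // 'Hello'
```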

OpenAI raw event access

If you need the original Responses API event stream, the OpenAI provider also exposes rawEvents():

```ts
import { openai } from '@howlabs/openstream'

const provider = openai({ apiKey: process.env.OPENAI_API_KEY! })

for await (const event of provider.rawEvents(
  [{ role: 'user', content: 'Hello' }],
  { model: 'gpt-4o-mini' }
)) {
  console.log(event.type)
}
```

Use stream() when you want normalized text chunks. Use rawEvents() when you need provider-specific event detail.

stream(prompt)

Creates a new streaming request.

```ts
stream(prompt: string | Message[]): StreamBuilder
```

StreamBuilder

| Method | Params | Return |
|--------|--------|--------|
| `.model(name)` | `string` | `this` |
| `.using(provider)` | `Provider` | `this` |
| `.with(middleware)` | `Middleware` | `this` |
| `.maxTokens(tokens)` | `number` | `this` |
| `.temperature(value)` | `number` | `this` |
| `.topP(value)` | `number` | `this` |
| `.stop(sequences)` | `readonly string[]` | `this` |
| `.timeout(ms)` | `number` | `this` |
| `.system(prompt)` | `string` | `this` |
| `.onChunk(fn)` | `(chunk: Chunk) => void` | `this` |
| `.onComplete(fn)` | `(result: StreamResult) => void` | `this` |
| `.onError(fn)` | `(error: Error) => void` | `this` |
| `.run()` | - | `Promise<StreamResult>` |
| `.stream()` | - | `AsyncGenerator<Chunk, StreamResult>` |

StreamResult

```ts
interface StreamResult {
  content: string
  tokens: TokenUsage
  metadata?: ChunkMetadata
  latency: number
  provider: string
  model: string
}
```

For OpenAI, metadata may include request and response identifiers plus final response status details.
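A minimal sketch of consuming a `StreamResult`. The `TokenUsage` shape used here (`inputTokens` / `outputTokens` / `totalTokens`) is an assumption based on the metadata field names in this README, not a confirmed type, and `describe` is a hypothetical helper, not part of the library:

```ts
// Assumed TokenUsage shape, inferred from the metadata fields named above.
interface TokenUsage {
  inputTokens: number
  outputTokens: number
  totalTokens: number
}

interface StreamResult {
  content: string
  tokens: TokenUsage
  latency: number
  provider: string
  model: string
}

// Hypothetical helper: format a one-line summary of a finished stream.
function describe(result: StreamResult): string {
  return `${result.provider}/${result.model}: ${result.tokens.totalTokens} tokens in ${result.latency}ms`
}

const example: StreamResult = {
  content: 'Hi!',
  tokens: { inputTokens: 5, outputTokens: 3, totalTokens: 8 },
  latency: 420,
  provider: 'openai',
  model: 'gpt-4o-mini',
}

console.log(describe(example)) // 'openai/gpt-4o-mini: 8 tokens in 420ms'
```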

Exports available today

```ts
import {
  stream,
  openai,
  tracker,
  budget,
  StreamError,
} from '@howlabs/openstream'
```

Requirements

  • Node.js >= 18
  • TypeScript >= 5.0 if you use TypeScript directly
  • A runtime with modern Web APIs for fetch-based streaming

Development

Available scripts from package.json:

```sh
npm run build
npm run test
npm run test:watch
npm run test:coverage
npm run typecheck
```

Fallback token accounting uses model-aware message estimation for supported OpenAI model families and falls back to a coarse legacy heuristic for unknown models. Exact usage reported by the provider always takes precedence when it is available.
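A coarse fallback heuristic of the kind described above might look like this. The roughly-4-characters-per-token ratio is a common rule of thumb for English text, not the library's actual formula:

```ts
// Illustrative fallback only: ~4 characters per token is a rough rule of
// thumb for English text. This is NOT the library's exact heuristic.
function estimateTokensRough(text: string): number {
  return Math.ceil(text.length / 4)
}

console.log(estimateTokensRough('Hello, world!')) // 13 chars -> 4
```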

Roadmap Direction

Near-term roadmap:

  • stabilize the OpenAI-first core
  • improve token accounting and result metadata
  • add one more provider to validate the abstraction
  • improve docs and release readiness

See ROADMAP.md for the current scoped roadmap.

License

MIT © 2026
