@howlabs/openstream v0.1.0-alpha.2
# openstream

OpenAI-first streaming library for TypeScript. Optimized for the OpenAI Responses API with edge-runtime support.

## Current Status
@howlabs/openstream is currently an alpha-stage library with:
- a fluent builder API
- a streaming engine based on async iterables
- an OpenAI provider
- typed errors
- basic cost tracking and budget middleware
- an edge-friendly entry point
The following ideas are planned but not implemented today:
- retry middleware with exponential backoff
- request/response logging middleware
- rate limit handling
- comprehensive examples and templates
Multi-provider support, offline sync, and CRDT-based features are not part of the current roadmap.
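Retry middleware with exponential backoff is one of the planned items above. It is not implemented yet, but the delay schedule such middleware typically uses can be sketched as follows (the base delay, multiplier, and cap are illustrative values, not library code):

```ts
// Sketch of an exponential backoff schedule with a cap; planned, not shipped.
// Base delay and cap below are illustrative values.
function backoffDelayMs(attempt: number, baseMs = 250, capMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, capMs)
}

console.log([0, 1, 2, 3, 6].map((n) => backoffDelayMs(n))) // [250, 500, 1000, 2000, 8000]
```

Production-grade retry middleware would usually also add random jitter to avoid synchronized retries across clients.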
## Why @howlabs/openstream?

```ts
import { stream, openai } from '@howlabs/openstream'

const result = await stream('Tell me a joke')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .model('gpt-4o-mini')
  .run()

console.log(result.content)
```

## Available Now
| Feature | Status | Notes |
|---------|--------|-------|
| Fluent builder API | Available | Chainable configuration and execution |
| Streaming engine | Available | run() and stream() APIs |
| OpenAI provider | Available | Responses API with SSE event parsing |
| Typed errors | Available | Structured error classes and helpers |
| Cost tracking | Available | Cost estimation, tracker, budget middleware |
| Edge entry point | Available | @howlabs/openstream/edge export |
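The cost tracking row above refers to estimated cost, not billed cost. The library's internal pricing table isn't shown here, but per-token cost estimation generally multiplies token counts by a per-million-token price; a minimal sketch with illustrative (not official) prices:

```ts
// Illustrative sketch of token-based cost estimation. The prices and the
// function name are assumptions, not @howlabs/openstream internals.
interface Usage {
  inputTokens: number
  outputTokens: number
}

// Hypothetical per-million-token prices in USD.
const PRICE_PER_MILLION = { input: 0.15, output: 0.6 }

function estimateCostUsd(usage: Usage): number {
  return (
    (usage.inputTokens / 1_000_000) * PRICE_PER_MILLION.input +
    (usage.outputTokens / 1_000_000) * PRICE_PER_MILLION.output
  )
}

// 1M input tokens + 1M output tokens → ≈ 0.75 USD at these example prices
console.log(estimateCostUsd({ inputTokens: 1_000_000, outputTokens: 1_000_000 }))
```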
## Installation

```sh
npm install @howlabs/openstream
```

### npm Registry Configuration

Create a project or user .npmrc entry for the @howlabs scope:

```ini
@howlabs:registry=https://npm.pkg.github.com
```

This tells npm to install @howlabs/* packages from GitHub Packages instead of the public npm registry.
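Note that installing from the GitHub Packages npm registry also requires an authentication token, even for public packages. Assuming a personal access token with the `read:packages` scope exposed as `GITHUB_TOKEN` (both the variable name and scope choice are conventions, not requirements of this package), the .npmrc would look like:

```ini
@howlabs:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```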
## Edge usage

The edge-safe entry point is exported as:

```ts
import { stream, openai } from '@howlabs/openstream/edge'
```

It keeps the same builder/provider API while avoiding Node-only dependencies.
## Quick Start

### Basic usage

```ts
import { stream, openai } from '@howlabs/openstream'

const result = await stream('Hello, AI!')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .run()

console.log(result.content)
```

### Process chunks as they arrive
```ts
import { stream, openai } from '@howlabs/openstream'

const builder = stream('Write a short poem')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .model('gpt-4o-mini')

for await (const chunk of builder.stream()) {
  process.stdout.write(chunk.content)
}
```

### Track estimated cost
```ts
import { stream, openai, tracker } from '@howlabs/openstream'

const costs = tracker()

const result = await stream('Generate a short summary')
  .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
  .with(costs.middleware())
  .run()

console.log(result.content)
console.log(costs.summary())
```

### Use the edge entry point
```ts
import { stream, openai } from '@howlabs/openstream/edge'

export default {
  async fetch(request: Request) {
    const result = await stream(await request.text())
      .using(openai({ apiKey: process.env.OPENAI_API_KEY! }))
      .model('gpt-4o-mini')
      .run()

    return new Response(result.content, {
      headers: { 'Content-Type': 'text/plain' },
    })
  },
}
```

## API Summary
### OpenAI support today
The built-in OpenAI provider uses the OpenAI Responses API internally and adapts streamed response events into Chunk objects.

Current behavior:

- `system` prompts are forwarded as Responses API `instructions`
- user and assistant history are forwarded as `input`
- text deltas are exposed as `chunk.content`
- final metadata is exposed on the terminal chunk when available
- terminal metadata may include `inputTokens`, `outputTokens`, `totalTokens`, `responseId`, `requestId`, `responseStatus`, and `incompleteReason`
- `run()` preserves the aggregated terminal metadata on `result.metadata`

This keeps the public API small while following OpenAI's current API direction.
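The Responses API delivers its stream as server-sent events whose `data:` payloads are JSON objects carrying a `type` field (for example `response.output_text.delta` for text deltas). This is not the library's internal parser, just a minimal sketch of how one such SSE line might be decoded:

```ts
// Minimal sketch of decoding one Responses API SSE data line. The
// `response.output_text.delta` event type comes from OpenAI's documented
// event stream; this is not @howlabs/openstream's internal implementation.
function parseSseData(line: string): { type: string; delta?: string } | null {
  if (!line.startsWith('data:')) return null
  const payload = line.slice('data:'.length).trim()
  if (payload === '' || payload === '[DONE]') return null
  return JSON.parse(payload)
}

const event = parseSseData('data: {"type":"response.output_text.delta","delta":"Hi"}')
console.log(event?.type, event?.delta) // response.output_text.delta Hi
```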
### OpenAI raw event access

If you need the original Responses API event stream, the OpenAI provider also exposes rawEvents():

```ts
import { openai } from '@howlabs/openstream'

const provider = openai({ apiKey: process.env.OPENAI_API_KEY! })

for await (const event of provider.rawEvents(
  [{ role: 'user', content: 'Hello' }],
  { model: 'gpt-4o-mini' }
)) {
  console.log(event.type)
}
```

Use stream() when you want normalized text chunks. Use rawEvents() when you need provider-specific event detail.
### stream(prompt)

Creates a new streaming request.

```ts
stream(prompt: string | Message[]): StreamBuilder
```

### StreamBuilder
| Method | Params | Return |
|--------|--------|--------|
| .model(name) | string | this |
| .using(provider) | Provider | this |
| .with(middleware) | Middleware | this |
| .maxTokens(tokens) | number | this |
| .temperature(value) | number | this |
| .topP(value) | number | this |
| .stop(sequences) | readonly string[] | this |
| .timeout(ms) | number | this |
| .system(prompt) | string | this |
| .onChunk(fn) | (chunk: Chunk) => void | this |
| .onComplete(fn) | (result: StreamResult) => void | this |
| .onError(fn) | (error: Error) => void | this |
| .run() | - | Promise<StreamResult> |
| .stream() | - | AsyncGenerator<Chunk, StreamResult> |
### StreamResult

```ts
interface StreamResult {
  content: string
  tokens: TokenUsage
  metadata?: ChunkMetadata
  latency: number
  provider: string
  model: string
}
```

For OpenAI, metadata may include request and response identifiers plus final response status details.
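The result fields can be combined for simple reporting. A sketch computing output tokens per second, assuming `latency` is measured in milliseconds and that `TokenUsage` carries an `outputTokens` count (both are assumptions, not confirmed by this README):

```ts
// Hypothetical helper; the shape below mirrors the StreamResult interface
// above, but the latency unit (ms) and the outputTokens field are assumptions.
function tokensPerSecond(result: {
  latency: number
  tokens: { outputTokens: number }
}): number {
  return result.tokens.outputTokens / (result.latency / 1000)
}

console.log(tokensPerSecond({ latency: 2000, tokens: { outputTokens: 120 } })) // 60
```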
### Exports available today

```ts
import {
  stream,
  openai,
  tracker,
  budget,
  StreamError,
} from '@howlabs/openstream'
```

## Requirements
- Node.js >= 18
- TypeScript >= 5.0 if you use TypeScript directly
- A runtime with modern Web APIs for fetch-based streaming
## Development

Available scripts from package.json:

```sh
npm run build
npm run test
npm run test:watch
npm run test:coverage
npm run typecheck
```

Fallback token accounting uses model-aware message estimation for supported OpenAI model families and falls back to a coarse legacy heuristic for unknown models. Exact usage reported by the provider always takes precedence when available.
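The coarse legacy heuristic is not specified here. A common fallback of this kind approximates tokens as roughly one per four characters of English text; the sketch below uses that convention as an illustrative assumption, not the library's exact formula:

```ts
// Illustrative coarse token estimate (~4 characters per token). This is a
// common fallback heuristic, not @howlabs/openstream's exact implementation.
function roughTokenCount(text: string): number {
  return Math.ceil(text.length / 4)
}

console.log(roughTokenCount('Hello, AI!')) // 3
```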
## Roadmap Direction
Near-term roadmap:
- stabilize the OpenAI-first core
- improve token accounting and result metadata
- add one more provider to validate the abstraction
- improve docs and release readiness
See ROADMAP.md for the current scoped roadmap.
## License
MIT © 2026
