
oaib

v1.1.0


OpenAI batching library - For batch processing of LLM API requests with concurrency control and progress tracking.

Installation

bun add oaib

We recommend using Bun for the best performance and compatibility.

Usage

import { batch } from "oaib"
import { openai } from "@ai-sdk/openai"

// Build strings of "x" at several target lengths, with ~10% jitter;
// round so String#repeat gets an integer count
const randomStrings = [60, 100, 200, 500, 1000]
  .flatMap(n => Array.from({ length: 10 }, () => n))
  .map(n => "x".repeat(Math.round(n * (1 + Math.random() * 0.1))))

const conversations = randomStrings.map((string) => ({
  // Add columns to results
  data: { solution: string.length },
  messages: [
    { 
      role: "user", 
      content: `Count characters in this string. Answer with <count>{number}</count>.\n${string}` 
    }
  ]
}))

const results = await batch(conversations, {
  model: openai("gpt-4"),
  concurrency: 8,
  process: ({ text }) => {
    // Maps model response to { result }
    const match = text.match(/<count>(.*?)<\/count>/)
    if (!match?.[1]) throw new Error("No <count> tags found")
    return parseInt(match[1], 10)
  },
})

// results.results contains { input, response, result, ...data }
await Bun.write("results.json", JSON.stringify(results, null, 2))

See the full example in src/letter-count.ts.
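Because each row in `results.results` carries the spread-in `data` columns alongside the processed `result`, the stored `solution` can be checked directly against the model's answer. A minimal scoring sketch in plain TypeScript (the `accuracy` helper and row shape are illustrative, not part of oaib's API):

```typescript
// Hypothetical scoring helper for the results shape shown above:
// each row is { input, response, result, ...data }, so the `solution`
// column added via `data` sits next to the processed `result`.
type ScoredRow = { result: number; solution: number }

function accuracy(rows: ScoredRow[]): number {
  if (rows.length === 0) return 0
  const correct = rows.filter(r => r.result === r.solution).length
  return correct / rows.length
}
```
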

Requirements

  • TypeScript 5+ required
  • Bun recommended for optimal performance
  • For Node.js users: Handle TypeScript transpilation on your end*

*This is a pure TypeScript repository optimized for Bun. Node.js users will need to set up their own transpilation pipeline (e.g., ts-node, tsx, or build step).

Quick Start

import { batch } from 'oaib'
import { openai } from '@ai-sdk/openai'

const conversations = [
  {
    messages: [{ role: 'user', content: 'What is 2+2?' }],
    data: { id: 1 }
  },
  {
    messages: [{ role: 'user', content: 'What is 3+3?' }],
    data: { id: 2 }
  }
]

const results = await batch(conversations, {
  model: openai('gpt-4'),
  process: ({ text }) => {
    // Process the AI response
    return text.trim()
  },
  concurrency: 5
})

console.log(results)

API Reference

batch<TOOLS, PROCESSED>(items, options)

Process multiple AI conversations concurrently with built-in progress tracking.

Parameters

  • items (BatchItem[]) - Array of conversation items to process
  • options (BatchOptions<TOOLS, PROCESSED>) - Configuration options

BatchItem

type BatchItem = {
  messages: ModelMessage[]     // Conversation messages
  data?: Record<string, unknown>  // Optional metadata
}
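Since `data` is just a record that gets merged into each result row, a `BatchItem[]` can be built straight from tabular input. A sketch with the types inlined (the row field names here are illustrative; in real code `ModelMessage` comes from the AI SDK):

```typescript
// Inlined shapes matching the BatchItem definition above
type Msg = { role: "user"; content: string }
type Item = { messages: Msg[]; data?: Record<string, unknown> }

// Hypothetical input rows
const rows = [
  { id: 1, question: "What is 2+2?" },
  { id: 2, question: "What is 3+3?" },
]

// One BatchItem per row; `data` rides along into the results
const items: Item[] = rows.map(r => ({
  messages: [{ role: "user", content: r.question }],
  data: { id: r.id },
}))
```
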

BatchOptions<TOOLS, PROCESSED>

interface BatchOptions<TOOLS, PROCESSED> {
  model: LanguageModel              // AI model to use
  tools?: TOOLS                     // Available tools for the model
  process: (response) => PROCESSED  // Function to process AI responses
  concurrency?: number              // Max concurrent requests (default: 8)
  timeout?: number                  // Timeout per request in ms
  spinner?: boolean                 // Show progress spinner (default: true)
}
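To illustrate what the `concurrency` option controls, here is a generic bounded worker-pool sketch (not oaib's internal scheduler): at most `limit` tasks are in flight at once, and results come back in input order.

```typescript
// Generic bounded-concurrency runner; oaib's actual scheduling may differ.
async function runLimited<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let next = 0
  // Each worker claims the next unprocessed index; JS is single-threaded,
  // so `next++` between awaits is race-free.
  const worker = async () => {
    while (next < items.length) {
      const i = next++
      results[i] = await task(items[i])
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  )
  return results
}
```
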

Features

  • Concurrent Processing: Control the number of simultaneous AI requests
  • Progress Tracking: Built-in spinner showing real-time progress
  • Error Handling: Graceful handling of failed requests with error reporting
  • Flexible Processing: Custom processing functions for AI responses
  • Type Safety: Full TypeScript support with generic types
  • AI SDK Integration: Works with any AI SDK provider (OpenAI, Anthropic, etc.)

Examples

Basic Usage

import { batch } from 'oaib'
import { openai } from '@ai-sdk/openai'

const conversations = [
  { messages: [{ role: 'user', content: 'Translate "hello" to Spanish' }] },
  { messages: [{ role: 'user', content: 'Translate "goodbye" to French' }] }
]

const results = await batch(conversations, {
  model: openai('gpt-4'),
  process: ({ text }) => text
})

With Custom Processing

const results = await batch(conversations, {
  model: openai('gpt-4'),
  process: ({ text }) => {
    // Extract structured data from AI response
    const match = text.match(/<answer>(.*?)<\/answer>/)
    return match?.[1] || text
  },
  concurrency: 3,
  timeout: 30000
})

With Metadata

const conversations = [
  {
    messages: [{ role: 'user', content: 'Count characters: "hello"' }],
    data: { expectedLength: 5, id: 'test1' }
  }
]

const results = await batch(conversations, {
  model: openai('gpt-4'),
  process: ({ text }) => parseInt(text.match(/\d+/)?.[0] || '0', 10)
})

// Access metadata in results
results.results.forEach(result => {
  console.log(result.id, result.expectedLength, result.result)
})

Disable Progress Spinner

const results = await batch(conversations, {
  model: openai('gpt-4'),
  process: ({ text }) => text,
  spinner: false  // No progress indicator
})

Development

This project uses Bun for development:

# Install dependencies
bun install

# Run tests
bun test

# Run the example
bun src/letter-count.ts
