@ts-dspy/openai

v0.4.2

OpenAI ChatGPT integration for TS-DSPy - enables type-safe LLM interactions with GPT-3.5, GPT-4, and other OpenAI models.

This package provides seamless integration between TS-DSPy and OpenAI's language models, allowing you to build powerful, type-safe applications with GPT models.

🚀 Features

  • Full OpenAI Support: Works with GPT-4, GPT-3.5-turbo, and other OpenAI models
  • Type-Safe Integration: Fully compatible with TS-DSPy signatures and modules
  • Usage Tracking: Built-in token usage and cost tracking
  • Streaming Support: Stream responses for real-time applications
  • Error Handling: Robust error handling with retry mechanisms
  • Flexible Configuration: Support for all OpenAI parameters

📦 Installation

npm install @ts-dspy/openai @ts-dspy/core

# Install ts-node so you can run TypeScript files directly (recommended)
npm install -g ts-node

⚠️ Important: Use ts-node to run TypeScript files directly. Transpiling to JavaScript may cause issues with decorators and type information.

# Run your scripts with ts-node (via npx, or directly if installed globally)
npx ts-node your-script.ts
ts-node your-script.ts
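Since TS-DSPy signatures rely on decorators (as in the examples below), your tsconfig.json will likely need decorator support enabled. This is an assumption based on the decorator usage in this README; check the @ts-dspy/core documentation for the exact settings:

```json
{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```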

🔑 Setup

Get your OpenAI API key from OpenAI Platform.

import { OpenAILM } from '@ts-dspy/openai';

const lm = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY, // Your OpenAI API key
  model: 'gpt-4', // or 'gpt-3.5-turbo', 'gpt-4-turbo', etc.
});

🎯 Quick Start

Basic Usage

import { Signature, InputField, OutputField, Predict, configure } from '@ts-dspy/core';
import { OpenAILM } from '@ts-dspy/openai';

// Define your signature
class Translator extends Signature {
  static description = "Translate text between languages";

  @InputField({ description: "Text to translate" })
  text!: string;

  @InputField({ description: "Target language" })
  target_language!: string;

  @OutputField({ description: "Translated text" })
  translation!: string;

  @OutputField({ description: "Confidence score from 0-1", type: "number" })
  confidence!: number;
}

// Setup OpenAI model
configure({
    lm: new OpenAILM({
        apiKey: process.env.OPENAI_API_KEY,
        model: 'gpt-4'
    })
});

// Create and use predictor
const translator = new Predict(Translator);

const result = await translator.forward({
  text: "Hello, how are you?",
  target_language: "Spanish"
});

console.log(result.translation); // "Hola, ¿cómo estás?"
console.log(result.confidence); // 0.95

With Global Configuration

import { configure, Predict } from '@ts-dspy/core';
import { OpenAILM } from '@ts-dspy/openai';

// Configure globally
configure({
  lm: new OpenAILM({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4-turbo'
  })
});

// Now you can use modules without passing the language model
const predictor = new Predict("question -> answer");

⚙️ Configuration Options

Model Configuration

const lm = new OpenAILM({
  // Required
  apiKey: 'your-api-key',
  
  // Model selection
  model: 'gpt-4', // gpt-4, gpt-4-turbo, gpt-3.5-turbo, etc.
  
  // Generation parameters
  temperature: 0.7, // 0-2, controls randomness
  maxTokens: 1000, // Maximum tokens to generate
  topP: 1.0, // Nucleus sampling parameter
  frequencyPenalty: 0, // -2 to 2, penalize frequently repeated tokens
  presencePenalty: 0, // -2 to 2, penalize tokens already present in the text
  
  // Advanced options
  timeout: 30000, // Request timeout in milliseconds
  maxRetries: 3, // Number of retry attempts
  organization: 'org-id', // OpenAI organization ID (optional)
  
  // Custom base URL (for proxies, etc.)
  baseURL: 'https://api.openai.com/v1'
});

Supported Models

  • GPT-4: gpt-4, gpt-4-turbo, gpt-4-turbo-preview
  • GPT-3.5: gpt-3.5-turbo, gpt-3.5-turbo-16k
  • Legacy: gpt-4-0613, gpt-3.5-turbo-0613, etc.

📊 Usage Tracking

Track token usage and costs:

const lm = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

// Make some predictions
const predictor = new Predict("question -> answer", lm);
await predictor.forward({ question: "What is AI?" });

// Get usage statistics
const usage = lm.getUsage();
console.log(`Tokens: ${usage.totalTokens}`);
console.log(`Cost: $${usage.totalCost}`);
console.log(`Requests: ${usage.requestCount}`);
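If you want a rough estimate independent of getUsage(), token counts convert to dollars by simple arithmetic. The helper below is illustrative only and not part of the package; the per-1K-token prices are parameters you supply, since OpenAI pricing changes over time:

```typescript
// Illustrative helper (not part of @ts-dspy/openai): estimate cost from
// token counts. Prices are per 1K tokens and must come from OpenAI's
// current pricing page; the numbers in the example below are placeholders.
function estimateCost(
  promptTokens: number,
  completionTokens: number,
  pricePerKInput: number,
  pricePerKOutput: number
): number {
  return (promptTokens / 1000) * pricePerKInput
       + (completionTokens / 1000) * pricePerKOutput;
}

// estimateCost(1000, 1000, 0.03, 0.06) ≈ 0.09 (placeholder prices)
```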

🔄 Streaming Responses

For real-time applications:

const lm = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4',
  stream: true
});

// Stream responses
const stream = await lm.generateStream("Tell me a story about AI");

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
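If you need the complete text as well as live output, a small generic helper can drain the stream. This is a sketch using only standard async iteration; it is not part of the package API:

```typescript
// Generic helper: concatenate an AsyncIterable of string chunks into the
// full response text, echoing each chunk as it arrives. Works with any
// AsyncIterable<string>, including the stream shown above.
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    process.stdout.write(chunk); // still show chunks live
    text += chunk;
  }
  return text;
}
```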

🛡️ Error Handling

The OpenAI integration includes robust error handling:

import { OpenAIError, RateLimitError, AuthenticationError } from '@ts-dspy/openai';

try {
  const result = await predictor.forward({ question: "What is life?" });
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log('Rate limit hit, retrying...');
    // Handle rate limiting
  } else if (error instanceof AuthenticationError) {
    console.log('Invalid API key');
    // Handle auth issues
  } else if (error instanceof OpenAIError) {
    console.log('OpenAI API error:', error.message);
    // Handle other OpenAI errors
  }
}
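The built-in maxRetries option covers the common case, but if you want a custom policy (for example, backing off only on RateLimitError), a wrapper is easy to sketch. This helper is not part of the package; it is a generic exponential-backoff utility:

```typescript
// Generic retry wrapper with exponential backoff (a sketch, not package API).
// As written it retries on any error; adapt the catch block to rethrow
// immediately on errors like AuthenticationError that retries won't fix.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait 500ms, 1000ms, 2000ms, ... between attempts
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Usage: await withRetry(() => predictor.forward({ question: "What is life?" }));
```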

🎨 Advanced Usage

ReAct Pattern with Tools

import { RespAct } from '@ts-dspy/core';

const agent = new RespAct("question -> answer", {
    tools: {
        calculate: {
            description: "Performs mathematical calculations. Use for any arithmetic operations.",
            // Note: eval is fine for a demo but unsafe for untrusted input;
            // use a proper expression parser in production
            function: (expr: string) => eval(expr)
        },
        search: {
            description: "Searches for information online. Use when you need current or factual information.",
            // searchWeb is a placeholder for your own search implementation
            function: async (query: string) => await searchWeb(query)
        }
    },
    maxSteps: 5
});

const result = await agent.forward({
    question: "What's the square root of 144 plus the current population of Tokyo?"
});

Chain of Thought Reasoning

import { ChainOfThought } from '@ts-dspy/core';

const reasoner = new ChainOfThought("problem -> solution: int");
const result = await reasoner.forward({
    problem: "If I have 3 apples and buy 5 more, then eat 2, how many do I have?"
});

console.log(result.reasoning); // Step-by-step explanation
console.log(result.solution);  // 6

Custom System Messages

class CustomSignature extends Signature {
  static description = "You are a helpful AI assistant specialized in code review";
  
  @InputField({ description: "Code to review" })
  code!: string;
  
  @OutputField({ description: "Review feedback" })
  feedback!: string;
}

Function Calling (Tools)

const lm = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4',
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' }
          }
        }
      }
    }
  ]
});

Multiple Models

const gpt4 = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

const gpt35 = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-3.5-turbo'
});

// Use different models for different tasks
const complexReasoning = new Predict(ComplexSignature, gpt4);
const simpleTask = new Predict(SimpleSignature, gpt35);

🔧 Environment Variables

Create a .env file:

OPENAI_API_KEY=your_openai_api_key_here
OPENAI_ORG_ID=your_org_id_here  # Optional
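When reading the key in code, failing fast on a missing variable gives a clearer error than an authentication failure later. A minimal helper, not part of the package (if you use a .env file, load it first with a tool such as dotenv):

```typescript
// Read a required environment variable or fail fast with a clear message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const lm = new OpenAILM({ apiKey: requireEnv('OPENAI_API_KEY'), model: 'gpt-4' });
```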

💰 Cost Optimization

Tips for managing costs:

// Use cheaper models for simple tasks
const simpleLM = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-3.5-turbo', // Cheaper than GPT-4
  maxTokens: 100 // Limit response length
});

// Use higher temperature for creative tasks, lower for factual
const creativeLM = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4',
  temperature: 0.9 // More creative
});

const factualLM = new OpenAILM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4',
  temperature: 0.1 // More deterministic
});

🧪 Testing

import { MockOpenAILM } from '@ts-dspy/openai/testing';

// Use mock for testing
const mockLM = new MockOpenAILM();
mockLM.setResponse("Mocked response");

const predictor = new Predict("question -> answer", mockLM);
const result = await predictor.forward({ question: "Test?" });
console.log(result.answer); // "Mocked response"

📚 API Reference

OpenAILM

Main class for OpenAI integration.

Constructor Options:

  • apiKey: OpenAI API key (required)
  • model: Model name (required)
  • temperature: Sampling temperature (0-2)
  • maxTokens: Maximum tokens to generate
  • topP: Nucleus sampling parameter
  • frequencyPenalty: Frequency penalty (-2 to 2)
  • presencePenalty: Presence penalty (-2 to 2)
  • timeout: Request timeout in ms
  • maxRetries: Number of retry attempts

Methods:

  • generate(prompt: string): Promise<string> - Generate text
  • generateStream(prompt: string): AsyncIterable<string> - Stream generation
  • getUsage(): UsageStats - Get usage statistics

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

📄 License

MIT License - see the LICENSE file for details.

🔗 Related Packages

  • @ts-dspy/core - Core TS-DSPy library
  • More integrations coming soon!

📖 Learn More