
@robota-sdk/openai

v2.0.9

Published

OpenAI integration for Robota SDK - GPT-4, GPT-3.5, function calling, and tool integration with OpenAI's API

Readme

@robota-sdk/openai

OpenAI Provider for Robota SDK - Complete type-safe integration with OpenAI's GPT models, featuring function calling, streaming, and advanced AI capabilities.

🚀 Features

Core Capabilities

  • 🎯 Type-Safe Integration: Complete TypeScript support with zero any types
  • 🤖 GPT Model Support: GPT-4, GPT-3.5 Turbo, and all OpenAI models
  • ⚡ Real-Time Streaming: Asynchronous streaming responses with proper error handling
  • 🛠️ Function Calling: Native OpenAI function calling with type validation
  • 🔄 Provider-Agnostic Design: Seamless integration with other Robota providers
  • 📊 Payload Logging: Optional API request/response logging for debugging

Architecture Highlights

  • Generic Type Parameters: Full BaseAIProvider<TConfig, TMessage, TResponse> implementation
  • Facade Pattern: Modular design with separated concerns
  • Error Safety: Comprehensive error handling without any-type compromises
  • OpenAI SDK Compatibility: Direct integration with official OpenAI SDK types

📦 Installation

npm install @robota-sdk/openai @robota-sdk/agents openai

🔧 Basic Usage

Simple Chat Integration

import { Robota } from '@robota-sdk/agents';
import { OpenAIProvider } from '@robota-sdk/openai';

// Create type-safe OpenAI provider
const provider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});

// Create Robota agent with OpenAI provider
const agent = new Robota({
  name: 'MyAgent',
  aiProviders: [provider],
  defaultModel: {
    provider: 'openai',
    model: 'gpt-4',
    temperature: 0.7,
    systemMessage: 'You are a helpful AI assistant specialized in technical topics.'
  }
});

// Execute conversation
const response = await agent.run('Explain the benefits of TypeScript over JavaScript');
console.log(response);

// Clean up
await agent.destroy();

Streaming Responses

// Real-time streaming for immediate feedback
const stream = await agent.runStream('Write a detailed explanation of machine learning');

for await (const chunk of stream) {
  if (chunk.content) {
    process.stdout.write(chunk.content);
  }
  
  // Handle streaming metadata
  if (chunk.metadata?.isComplete) {
    console.log('\n✓ Stream completed');
  }
}

🛠️ Function Calling

The OpenAI provider supports type-safe function calling with automatic parameter validation:

import { FunctionTool } from '@robota-sdk/agents';
import { z } from 'zod';

// Define type-safe function tools
const weatherTool = new FunctionTool({
  name: 'getWeather',
  description: 'Get current weather information for a location',
  parameters: z.object({
    location: z.string().describe('City name'),
    unit: z.enum(['celsius', 'fahrenheit']).default('celsius')
  }),
  handler: async ({ location, unit }) => {
    // Type-safe handler implementation
    // (fetchWeatherAPI is a placeholder for your own weather API client)
    const weatherData = await fetchWeatherAPI(location, unit);
    return {
      temperature: weatherData.temp,
      condition: weatherData.condition,
      location,
      unit
    };
  }
});

const calculatorTool = new FunctionTool({
  name: 'calculate',
  description: 'Perform mathematical operations',
  parameters: z.object({
    operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
    a: z.number(),
    b: z.number()
  }),
  handler: async ({ operation, a, b }) => {
    const operations = {
      add: a + b,
      subtract: a - b,
      multiply: a * b,
      divide: a / b
    };
    return { result: operations[operation] };
  }
});

// Register tools with the agent
agent.registerTool(weatherTool);
agent.registerTool(calculatorTool);

// Execute with function calling
const result = await agent.run(
  'What\'s the weather in Tokyo and what\'s 25 * 4?'
);

🔄 Multi-Provider Architecture

Seamlessly integrate with other providers:

import { AnthropicProvider } from '@robota-sdk/anthropic';
import { GoogleProvider } from '@robota-sdk/google';

const openaiProvider = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY
});

const anthropicProvider = new AnthropicProvider({
  apiKey: process.env.ANTHROPIC_API_KEY
});

const googleProvider = new GoogleProvider({
  apiKey: process.env.GOOGLE_AI_API_KEY
});

const agent = new Robota({
  name: 'MultiProviderAgent',
  aiProviders: [openaiProvider, anthropicProvider, googleProvider],
  defaultModel: {
    provider: 'openai',
    model: 'gpt-4'
  }
});

// Dynamic provider switching
const openaiResponse = await agent.run('Respond using GPT-4');

agent.setModel({ provider: 'anthropic', model: 'claude-3-sonnet-20240229' });
const claudeResponse = await agent.run('Respond using Claude');

⚙️ Configuration Options

interface OpenAIProviderOptions {
  // Required
  client: OpenAI;                    // OpenAI SDK client instance
  
  // Model Configuration
  model?: string;                    // Default: 'gpt-4'
  temperature?: number;              // 0-1, default: 0.7
  maxTokens?: number;               // Maximum tokens to generate
  
  // API Configuration
  apiKey?: string;                  // API key (if not set in client)
  organization?: string;            // OpenAI organization ID
  timeout?: number;                 // Request timeout (ms)
  baseURL?: string;                // Custom API base URL
  
  // Response Configuration
  responseFormat?: 'text' | 'json_object' | 'json_schema';
  jsonSchema?: {                   // For structured outputs
    name: string;
    description?: string;
    schema?: Record<string, string | number | boolean | object>;
    strict?: boolean;
  };
  
  // Debugging & Logging
  payloadLogger?: PayloadLogger;   // Environment-specific payload logger
  
  // Interface-based logger implementations:
  // - FilePayloadLogger: Node.js file-based logging
  // - ConsolePayloadLogger: Browser console-based logging
  // - Custom: Implement PayloadLogger interface
}
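As a sketch of the responseFormat and jsonSchema options above, a provider configured for structured output might look like the following. The field names inside schema are illustrative, not prescribed by the package:

```typescript
// Hypothetical structured-output configuration, based on the
// OpenAIProviderOptions interface above. Schema contents are illustrative.
const structuredProvider = new OpenAIProvider({
  client: openaiClient,
  model: 'gpt-4',
  responseFormat: 'json_schema',
  jsonSchema: {
    name: 'weather_report',
    description: 'A structured weather summary',
    schema: {
      type: 'object',
      properties: {
        location: { type: 'string' },
        temperature: { type: 'number' }
      },
      required: ['location', 'temperature']
    },
    strict: true
  }
});
```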

📋 Supported Models

| Model | Description | Use Cases |
|-------|-------------|-----------|
| gpt-4 | Most capable model | Complex reasoning, analysis, creative tasks |
| gpt-4-turbo | Faster GPT-4 variant | Balanced performance and cost |
| gpt-3.5-turbo | Fast and efficient | Simple conversations, basic tasks |
| gpt-4-vision-preview | Vision capabilities | Image analysis and understanding |

🔍 API Reference

OpenAIProvider Class

class OpenAIProvider extends BaseAIProvider<
  OpenAIProviderOptions,
  UniversalMessage,
  UniversalMessage
> {
  // Core methods
  async chat(messages: UniversalMessage[], options?: ChatOptions): Promise<UniversalMessage>
  async chatStream(messages: UniversalMessage[], options?: ChatOptions): AsyncIterable<UniversalMessage>
  
  // Provider information
  readonly name: string = 'openai'
  readonly version: string = '1.0.0'
  
  // Utility methods
  supportsTools(): boolean
  validateConfig(): boolean
  async dispose(): Promise<void>
}

Type Definitions

// Chat Options
interface ChatOptions {
  tools?: ToolSchema[];
  maxTokens?: number;
  temperature?: number;
  model?: string;
}

// OpenAI-specific types
interface OpenAIToolCall {
  id: string;
  type: 'function';
  function: {
    name: string;
    arguments: string;
  };
}

interface OpenAILogData {
  model: string;
  messagesCount: number;
  hasTools: boolean;
  temperature?: number;
  maxTokens?: number;
  timestamp: string;
  requestId?: string;
}
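Note that the arguments field of OpenAIToolCall above is a JSON string, not an object, so a handler must parse it before use. A minimal sketch (parseToolCallArguments is a hypothetical helper, not part of the package):

```typescript
// Mirrors the OpenAIToolCall type defined above.
interface OpenAIToolCall {
  id: string;
  type: 'function';
  function: {
    name: string;
    arguments: string; // JSON-encoded argument object
  };
}

// Hypothetical helper: decode the JSON argument string into a typed object.
function parseToolCallArguments<T>(call: OpenAIToolCall): T {
  return JSON.parse(call.function.arguments) as T;
}

const call: OpenAIToolCall = {
  id: 'call_123',
  type: 'function',
  function: {
    name: 'getWeather',
    arguments: '{"location":"Tokyo","unit":"celsius"}'
  }
};

const args = parseToolCallArguments<{ location: string; unit: string }>(call);
console.log(args.location); // "Tokyo"
```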

🐛 Debugging & Logging

Environment-Specific Payload Logging

The OpenAI Provider supports environment-specific payload logging through interface-based dependency injection:

Node.js Environment (File-Based Logging)

import { OpenAIProvider } from '@robota-sdk/openai';
import { FilePayloadLogger } from '@robota-sdk/openai/loggers/file';

const provider = new OpenAIProvider({
  client: openaiClient,
  model: 'gpt-4',
  payloadLogger: new FilePayloadLogger({
    logDir: './logs/openai-api',
    enabled: true,
    includeTimestamp: true
  })
});

Browser Environment (Console-Based Logging)

import { OpenAIProvider } from '@robota-sdk/openai';
import { ConsolePayloadLogger } from '@robota-sdk/openai/loggers/console';

const provider = new OpenAIProvider({
  client: openaiClient,
  model: 'gpt-4',
  payloadLogger: new ConsolePayloadLogger({
    enabled: true,
    includeTimestamp: true
  })
});

No Logging (Both Environments)

const provider = new OpenAIProvider({
  client: openaiClient,
  model: 'gpt-4'
  // payloadLogger: undefined (default - no logging)
});

Custom Logger Implementation

You can create custom logger implementations by implementing the PayloadLogger interface:

import type { PayloadLogger, OpenAILogData } from '@robota-sdk/openai';

class CustomPayloadLogger implements PayloadLogger {
  isEnabled(): boolean {
    return true;
  }

  async logPayload(payload: OpenAILogData, type: 'chat' | 'stream'): Promise<void> {
    // Custom logging implementation
    console.log(`[Custom Logger] ${type}:`, payload);
  }
}

const provider = new OpenAIProvider({
  client: openaiClient,
  payloadLogger: new CustomPayloadLogger()
});

When a payload logger is enabled, the provider records every API request and response in detail for debugging.

🔒 Security Best Practices

API Key Management

// ✅ Good: Use environment variables
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// ❌ Bad: Hardcoded keys
const client = new OpenAI({
  apiKey: 'sk-...' // Never do this!
});

Error Handling

try {
  const response = await agent.run('Your query');
} catch (error) {
  if (error instanceof Error) {
    console.error('AI Error:', error.message);
  }
  // Handle specific OpenAI errors
}
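The catch branch above is a natural place to retry transient failures such as rate limits. A minimal, generic sketch of that pattern; withRetry is a hypothetical helper and not part of the SDK, and the backoff parameters are illustrative:

```typescript
// Hypothetical retry wrapper with exponential backoff.
// Retries any async operation, doubling the delay after each failure.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Wait baseDelayMs, then 2x, 4x, ... before the next attempt
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Demo with a flaky operation that succeeds on the third call.
let calls = 0;
const flaky = async (): Promise<string> => {
  calls++;
  if (calls < 3) throw new Error('rate limited');
  return 'ok';
};

withRetry(flaky, 3, 10).then((result) => console.log(result)); // prints "ok"
```

In practice you would wrap the agent call, e.g. `withRetry(() => agent.run('Your query'))`.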

📊 Performance Optimization

Token Management

const provider = new OpenAIProvider({
  client: openaiClient,
  model: 'gpt-4',
  maxTokens: 1000,        // Limit response length
  temperature: 0.3        // More deterministic responses
});

Model Selection Strategy

  • Use gpt-3.5-turbo for simple tasks
  • Use gpt-4 for complex reasoning
  • Use gpt-4-turbo for balanced performance
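The strategy above can be sketched as a small lookup. This is an illustrative helper, not an SDK API; selectModel and the complexity labels are assumptions:

```typescript
// Hypothetical helper mapping a rough task-complexity label to a model name,
// following the selection strategy described above.
type TaskComplexity = 'simple' | 'balanced' | 'complex';

function selectModel(complexity: TaskComplexity): string {
  const models: Record<TaskComplexity, string> = {
    simple: 'gpt-3.5-turbo',   // fast, low-cost conversations
    balanced: 'gpt-4-turbo',   // balanced performance and cost
    complex: 'gpt-4'           // complex reasoning and analysis
  };
  return models[complexity];
}

console.log(selectModel('complex')); // "gpt-4"
```

The returned name can then be passed to `agent.setModel({ provider: 'openai', model: ... })` as shown earlier.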

🤝 Contributing

This package follows strict type safety guidelines:

  • Zero any or unknown types allowed
  • Complete TypeScript coverage
  • Comprehensive error handling
  • Provider-agnostic design principles

📄 License

MIT License - see LICENSE file for details.

🔗 Related Packages


For complete documentation and examples, visit the Robota SDK Documentation.