# @varlabs/ai.openai

A comprehensive, type-safe OpenAI provider for the @varlabs/ai SDK.

## Features

- **Complete API Coverage** - Support for all OpenAI API endpoints, including:
  - Text generation (ChatGPT, GPT-4, etc.)
  - Image generation (DALL-E models)
  - Audio processing (speech synthesis and transcription)
  - Embeddings
  - Function calling
  - Structured output
- **Type Safety** - Fully typed interfaces for all API endpoints and models, with proper type inference.
- **Streaming Support** - First-class support for streaming responses from OpenAI.
- **Custom Tools Integration** - Easy integration with custom tools and function calling.
- **Advanced Features** - Support for file search, web search, reasoning, and more.

## Installation

```bash
npm install @varlabs/ai.openai
# or
yarn add @varlabs/ai.openai
# or
pnpm add @varlabs/ai.openai
```

## Usage

### Basic Usage

```ts
import { createAIClient } from '@varlabs/ai';
import openAiProvider from '@varlabs/ai.openai';

const client = createAIClient({
  providers: {
    openai: openAiProvider({
      config: {
        apiKey: 'your-api-key',
        baseUrl: 'https://api.openai.com/v1' // optional, defaults to this value
      }
    })
  }
});

// Text generation
const response = await client.openai.text.create_response({
  model: 'gpt-4o',
  input: 'Tell me a joke about programming.'
});

// Image generation
const image = await client.openai.images.create({
  model: 'dall-e-3',
  prompt: 'A robot writing code in a futuristic office'
});

// Audio transcription
const transcription = await client.openai.speech.transcribe_audio({
  model: 'whisper-1',
  file: audioFile // Blob or File object
});
```

### Streaming Responses
```ts
const stream = await client.openai.text.stream_response({
  model: 'gpt-4o',
  input: 'Write a short story about AI.',
});

// Handle the stream
for await (const chunk of stream) {
  // Process each chunk of the response
  console.log(chunk);
}
```

### Using Custom Tools
```ts
import { customTool } from '@varlabs/ai.openai';

const response = await client.openai.text.create_response({
  model: 'gpt-4o',
  input: 'What\'s the weather in New York?',
  custom_tools: {
    get_weather: customTool({
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' }
        },
        required: ['location']
      },
      execute: async (params) => {
        // Implementation to fetch weather data
        return { temperature: 72, conditions: 'sunny' };
      }
    })
  }
});
```

### Structured Output
```ts
const response = await client.openai.text.create_response({
  model: 'gpt-4o',
  input: 'Extract the name and age from: John Doe is 30 years old',
  structured_output: {
    name: 'PersonInfo',
    schema: {
      type: 'object',
      properties: {
        name: { type: 'string', description: 'Full name of the person' },
        age: { type: 'number', description: 'Age in years', required: false }
      },
    }
  }
});
```
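
### Advanced Features

File search, web search, and reasoning (listed under Features above) are configured on the same `create_response` call. The snippet below is a hedged sketch modeled on OpenAI's built-in `web_search` tool; the exact option name exposed by this SDK is an assumption, so check the package's typings before relying on it.

```ts
// Hypothetical sketch, not confirmed by this README: enabling OpenAI's
// built-in web search tool. The `tools` property is an assumed pass-through
// of OpenAI tool specifiers.
const response = await client.openai.text.create_response({
  model: 'gpt-4o',
  input: 'Summarize the latest TypeScript release notes.',
  tools: [{ type: 'web_search' }]
});
```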

## API Reference

The provider implements all OpenAI API endpoints through the following structure:

- `text` - Text generation and chat models
  - `create_response` - Generate text responses
  - `get_response` - Get a previously generated response
  - `delete_response` - Delete a response
  - `list_input_item_list` - List items for a response
- `images` - Image generation models
  - `create` - Create images
  - `edit` - Edit existing images
  - `generate_variations` - Create variations of images
- `speech` - Audio processing models
  - `generate_audio` - Generate speech from text
  - `transcribe_audio` - Transcribe audio to text
  - `translate_audio` - Translate audio to English text
- `embedding` - Embedding models
  - `embed` - Create embeddings for text
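
The `embedding` group is the only one not shown in the Usage examples above. Here is a minimal sketch, assuming `embed` takes `model` and `input` parameters that mirror OpenAI's embeddings endpoint; the exact parameter names are not confirmed by this README.

```ts
// Hypothetical sketch: creating embeddings for a piece of text. Parameter
// names are assumed to mirror OpenAI's embeddings API; check the package's
// types for the actual shape.
const embeddings = await client.openai.embedding.embed({
  model: 'text-embedding-3-small',
  input: 'The quick brown fox jumps over the lazy dog'
});
```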

## License

MIT © Hamza Varvani
