# @kb-labs/adapters-openai

v2.89.0
Part of the KB Labs ecosystem. Works exclusively within the KB Labs platform.
OpenAI language model adapter supporting GPT-4, GPT-3.5, and other OpenAI models with streaming and function calling.
## Overview
| Property | Value |
|----------|-------|
| Implements | ILLM |
| Type | core |
| Requires | None |
| Category | AI |
## Features
- Multiple Models - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Streaming Support - Real-time token streaming
- Function Calling - Native tool/function support
- Configurable - Temperature, max tokens, and more
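Whether streaming and function calling are available can be checked against the adapter's declared capabilities. A minimal sketch, assuming the manifest shape shown in the "Adapter Manifest" section of this README (the helper functions here are illustrative, not part of the package's API):

```typescript
// Illustrative manifest type mirroring the "Adapter Manifest" section.
interface AdapterManifest {
  id: string;
  name: string;
  version: string;
  implements: string;
  capabilities: {
    streaming: boolean;
    custom?: { functionCalling?: boolean };
  };
}

// Guard streaming use on the adapter's declared capability flag.
function supportsStreaming(m: AdapterManifest): boolean {
  return m.capabilities.streaming === true;
}

// Function calling is advertised under the custom capabilities bag.
function supportsFunctionCalling(m: AdapterManifest): boolean {
  return m.capabilities.custom?.functionCalling === true;
}
```

Checking capabilities before calling `stream` or `chatWithTools` keeps calling code portable across adapters that implement `ILLM` but differ in optional features.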
## Installation

```bash
pnpm add @kb-labs/adapters-openai
```

## Configuration

Add to your `kb.config.json`:
```json
{
  "platform": {
    "adapters": {
      "llm": "@kb-labs/adapters-openai"
    },
    "adapterOptions": {
      "llm": {
        "apiKey": "${OPENAI_API_KEY}",
        "model": "gpt-4-turbo",
        "temperature": 0.7
      }
    }
  }
}
```

## Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| apiKey | string | - | OpenAI API key |
| model | string | "gpt-4-turbo" | Model to use |
| temperature | number | 0.7 | Sampling temperature (0.0 to 2.0) |
| maxTokens | number | - | Maximum tokens to generate |
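The table above can be read as a config shape with defaults. A minimal sketch of that shape and its default resolution; the interface and helper below are illustrative, not the package's actual type definitions:

```typescript
// Illustrative options shape matching the table above; the real
// package may name or type these fields differently.
interface OpenAIAdapterOptions {
  apiKey: string;       // OpenAI API key (required, no default)
  model?: string;       // defaults to "gpt-4-turbo"
  temperature?: number; // sampling temperature, 0.0 to 2.0, default 0.7
  maxTokens?: number;   // cap on generated tokens (unset = provider default)
}

// Apply the defaults exactly as the table describes them.
function resolveOptions(opts: OpenAIAdapterOptions): OpenAIAdapterOptions {
  return {
    apiKey: opts.apiKey,
    model: opts.model ?? 'gpt-4-turbo',
    temperature: opts.temperature ?? 0.7,
    maxTokens: opts.maxTokens,
  };
}
```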
## Usage

### Via Platform (Recommended)
```typescript
import { usePlatform } from '@kb-labs/sdk';

const platform = usePlatform();

// Simple chat
const response = await platform.llm.chat([
  { role: 'user', content: 'Hello!' }
]);

// Streaming
for await (const chunk of platform.llm.stream([
  { role: 'user', content: 'Tell me a story' }
])) {
  process.stdout.write(chunk.content);
}

// With function calling
const result = await platform.llm.chatWithTools(
  [{ role: 'user', content: 'What is the weather?' }],
  [{ name: 'getWeather', parameters: { ... } }]
);
```

### Standalone (Testing/Development)
```typescript
import { createAdapter } from '@kb-labs/adapters-openai';

const llm = createAdapter({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4-turbo'
});

const response = await llm.chat([
  { role: 'user', content: 'Hello!' }
]);
```

## Adapter Manifest
```typescript
{
  id: 'openai-llm',
  name: 'OpenAI LLM',
  version: '1.0.0',
  implements: 'ILLM',
  capabilities: {
    streaming: true,
    custom: {
      functionCalling: true,
    },
  },
}
```

## FAQ
**How do I use a different model?**

Change the `model` option:

```json
{
  "adapterOptions": {
    "llm": {
      "model": "gpt-3.5-turbo"
    }
  }
}
```

**How are rate limits handled?**

The adapter includes automatic retry with exponential backoff. For high-volume usage, consider OpenAI's usage tiers or implement request queuing.
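Request queuing on the caller's side can be sketched as a retry wrapper with exponential backoff. This is an illustrative helper, not the adapter's built-in retry logic; the function name and parameters are assumptions:

```typescript
// Retry an async operation with exponential backoff: delays of
// baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 }: { retries?: number; baseDelayMs?: number } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

A wrapper like this could guard individual `llm.chat` calls, e.g. `withRetry(() => llm.chat(messages))`, on top of whatever retrying the adapter already does internally.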
## Related Adapters
| Adapter | Use Case |
|---------|----------|
| @kb-labs/adapters-vibeproxy | Local multi-provider proxy (Claude, GPT, etc.) |
## License

KB Public License v1.1 - KB Labs Team
