# BizRouter Provider for Vercel AI SDK

The [BizRouter](https://bizrouter.ai/) provider for the [Vercel AI SDK](https://sdk.vercel.ai/docs) provides access to multiple large language models (OpenAI, Anthropic, Google Gemini, xAI Grok) through the BizRouter API gateway, optimized for the Korean market.
## Setup for AI SDK v5

```bash
# For bun
bun add @bizrouter/ai-sdk-provider

# For pnpm
pnpm add @bizrouter/ai-sdk-provider

# For npm
npm install @bizrouter/ai-sdk-provider
```

## Provider Instance
You can import the default provider instance `bizrouter` from `@bizrouter/ai-sdk-provider`:

```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
```

## Example
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bizrouter('anthropic/claude-sonnet-4.5'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

## Configuration
Set your BizRouter API key:

```bash
export BIZROUTER_API_KEY="sk-br-v1-your-api-key"
```

Or pass it directly:
```typescript
import { createBizRouter } from '@bizrouter/ai-sdk-provider';

const bizrouter = createBizRouter({
  apiKey: 'sk-br-v1-your-api-key',
  // Optional: override the base URL
  baseURL: 'https://api.bizrouter.ai/ai-sdk',
});
```

## Authentication
The AI SDK provider uses standard Bearer token authentication:

- Endpoint: `https://api.bizrouter.ai/ai-sdk/*`
- Header: `Authorization: Bearer sk-br-v1-your-api-key`
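As an illustrative sketch (not part of the package; the provider builds this header internally), the Bearer header above amounts to:

```typescript
// Build the headers the provider attaches to each request.
// The key used below is a placeholder, not a real credential.
function buildAuthHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}

const headers = buildAuthHeaders('sk-br-v1-your-api-key');
console.log(headers.Authorization); // "Bearer sk-br-v1-your-api-key"
```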
## Features
- Streaming Support: Real-time text generation with SSE
- Function Calling: Tool use and parallel function calling
- Multimodal: Image analysis and generation
- Usage Tracking: Token usage and cost tracking (in KRW)
- Reasoning Support: Access to model reasoning process
- Provider Routing: Automatic failover and load balancing
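The usage-tracking feature reports token counts; as a sketch of how those counts could be turned into a KRW cost estimate (the per-1K-token rates below are placeholders for illustration, not BizRouter's actual pricing):

```typescript
// Estimate cost in KRW from token usage.
// Rates are hypothetical; real rates come from BizRouter's pricing page.
function estimateCostKRW(
  usage: { inputTokens: number; outputTokens: number },
  rates: { inputPer1K: number; outputPer1K: number },
): number {
  return (
    (usage.inputTokens / 1000) * rates.inputPer1K +
    (usage.outputTokens / 1000) * rates.outputPer1K
  );
}

const cost = estimateCostKRW(
  { inputTokens: 2000, outputTokens: 500 },
  { inputPer1K: 4, outputPer1K: 20 }, // placeholder rates in KRW
);
console.log(cost); // 18
```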
## Supported Models
BizRouter provides access to models from multiple providers. Visit BizRouter Models for the complete and up-to-date list.
### Popular Models
- OpenAI: `openai/gpt-5.1`, `openai/gpt-5`, `openai/gpt-5-mini`
- Anthropic: `anthropic/claude-sonnet-4.5`, `anthropic/claude-haiku-4.5`
- Google: `google/gemini-3-pro-preview`, `google/gemini-2.5-pro`
- xAI: `x-ai/grok-4-fast`, `x-ai/grok-4`
- Perplexity: `perplexity/sonar-pro`, `perplexity/sonar-deep-research`
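All of the identifiers above follow a `provider/model` naming convention. A small helper (illustrative only, not part of the package) to split an ID into its parts:

```typescript
// Split a model ID of the form 'provider/model' into its parts.
// Only the first slash separates provider from model name.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) throw new Error(`Invalid model id: ${id}`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseModelId('anthropic/claude-sonnet-4.5'));
// { provider: 'anthropic', model: 'claude-sonnet-4.5' }
```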
## Streaming with Usage Tracking
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { streamText } from 'ai';

const { textStream, usage } = streamText({
  model: bizrouter('anthropic/claude-sonnet-4.5'),
  prompt: 'Tell me about Seoul.',
});

for await (const chunk of textStream) {
  console.log(chunk);
}

// Get usage after streaming completes
const finalUsage = await usage;
console.log('Input tokens:', finalUsage.inputTokens);
console.log('Output tokens:', finalUsage.outputTokens);
console.log('Total tokens:', finalUsage.totalTokens);
```

## Advanced Streaming Features
Get detailed metadata and performance metrics from streaming responses:
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { streamText } from 'ai';

const { textStream, usage, finishReason, response } = streamText({
  model: bizrouter('openai/gpt-5.1'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What are the top 3 attractions in Seoul?' },
  ],
  maxOutputTokens: 1000,
});

// Track streaming progress
let chunkCount = 0;
let fullText = '';
const startTime = Date.now();

for await (const chunk of textStream) {
  process.stdout.write(chunk);
  fullText += chunk;
  chunkCount++;
}

const duration = (Date.now() - startTime) / 1000;

// Get response metadata
const metadata = await response;
console.log(`Request ID: ${metadata.id}`);
console.log(`Model used: ${metadata.modelId}`);
console.log(`Timestamp: ${metadata.timestamp}`);

// Get streaming statistics
console.log(`Total chunks: ${chunkCount}`);
console.log(`Streaming duration: ${duration.toFixed(2)}s`);
console.log(`Characters per second: ${(fullText.length / duration).toFixed(0)}`);

// Get finish reason
const reason = await finishReason;
console.log(`Finish reason: ${reason}`);
```

## Tool Calling
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { generateText, jsonSchema } from 'ai';

const result = await generateText({
  model: bizrouter('openai/gpt-4o'),
  prompt: '서울과 부산의 현재 날씨가 어떤지 알려줘.',
  tools: {
    getWeather: {
      description: 'Get the current weather in a location',
      inputSchema: jsonSchema({
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city name, e.g. Seoul',
          },
        },
        required: ['location'],
      }),
      execute: async ({ location }) => {
        console.log(`getWeather tool executed with location: ${location}`);
        // Your implementation here
        return {
          location,
          temperature: 20,
          condition: 'sunny',
          unit: 'celsius',
        };
      },
    },
  },
});

// Extract text response from the last step
const lastStep = result.steps[result.steps.length - 1];
const textContent = lastStep.content.find(c => c.type === 'text');
console.log(textContent?.text);

// Extract tool calls from steps
for (const step of result.steps) {
  const toolCalls = step.content.filter(c => c.type === 'tool-call');
  for (const call of toolCalls) {
    console.log(`Tool called: ${call.toolName}(${JSON.stringify(call.input)})`);
  }
}
```

## Image Generation
BizRouter supports image generation through multimodal models like Google Gemini's image generation models.
### Basic Image Generation
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { generateText } from 'ai';
import { extractImages } from '@bizrouter/ai-sdk-provider/image';

const result = await generateText({
  model: bizrouter('google/gemini-2.5-flash-image'),
  prompt: 'Draw a cute cat wearing a party hat',
});

// Extract images from the result
const { images, text } = extractImages(result);
console.log('Description:', text);

for (const image of images) {
  console.log(`Image type: ${image.mediaType}`);
  console.log(`Image size: ${image.uint8Array.length} bytes`);

  // Save to file (Node.js)
  const fs = await import('fs/promises');
  await fs.writeFile('cat.png', image.uint8Array);
}
```

### Supported Image Generation Models
| Model | Description | Max Resolution |
|-------|-------------|----------------|
| google/gemini-2.5-flash-image | Fast image generation | 1K |
| google/gemini-3.1-flash-image-preview | Fast image generation | 2K |
| google/gemini-3-pro-image-preview | High-quality image generation | 4K |
### Image Generation Options
Gemini image models support optional configuration for aspect ratio and resolution.

Aspect ratios (all models): `1:1`, `2:3`, `3:2`, `3:4`, `4:3`, `4:5`, `5:4`, `9:16`, `16:9`, `21:9`

Image sizes (Gemini 3 Pro only): `1K` (default), `2K`, `4K`

Note: use uppercase 'K' only (e.g., `2K`, not `2k`).
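The uppercase-'K' requirement can be guarded with a small normalizer (illustrative only; the provider does not expose this helper):

```typescript
// Normalize an image_size value to the uppercase form the API expects.
// Valid sizes are assumed to be exactly 1K, 2K, and 4K.
const VALID_IMAGE_SIZES = ['1K', '2K', '4K'];

function normalizeImageSize(size: string): string {
  const upper = size.toUpperCase();
  if (!VALID_IMAGE_SIZES.includes(upper)) {
    throw new Error(`Unsupported image size: ${size}`);
  }
  return upper;
}

console.log(normalizeImageSize('2k')); // "2K"
```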
Both options are passed via `providerOptions`:

```typescript
const result = await generateText({
  model: bizrouter('google/gemini-3-pro-image-preview'),
  prompt: 'A beautiful landscape',
  providerOptions: {
    bizrouter: {
      aspect_ratio: '16:9',
      image_size: '2K',
    },
  },
});
```

### Image Data Formats
The `extractImages` function returns images in multiple formats:

```typescript
interface GeneratedImage {
  // Base64-encoded image data (without data URI prefix)
  base64: string;

  // Binary data for direct file operations
  uint8Array: Uint8Array;

  // MIME type (e.g., 'image/png', 'image/jpeg')
  mediaType: string;
}
```

### Image-to-Image Generation
Use a reference image along with a text prompt to transform images or generate new styles.
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { generateText } from 'ai';
import { extractImages } from '@bizrouter/ai-sdk-provider/image';
import fs from 'fs/promises';

// Load reference image as a Base64 data URL
async function loadImageAsDataUrl(filePath: string): Promise<string> {
  const buffer = await fs.readFile(filePath);
  const base64 = buffer.toString('base64');
  return `data:image/jpeg;base64,${base64}`;
}

// Transform image to Van Gogh style
const refImage = await loadImageAsDataUrl('./my-photo.jpg');

const result = await generateText({
  model: bizrouter('google/gemini-2.5-flash-image'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: `Recreate this image in the style of Van Gogh's "Starry Night".
Apply swirling skies and bold brushstrokes,
emphasizing blue and yellow tones.`,
        },
        {
          type: 'image',
          image: refImage,
        },
      ],
    },
  ],
  providerOptions: {
    bizrouter: {
      aspect_ratio: '16:9',
    },
  },
});

const { images } = extractImages(result);
for (const image of images) {
  await fs.writeFile('styled-output.png', image.uint8Array);
  console.log(`Generated: styled-output.png (${image.uint8Array.length} bytes)`);
}
```

## Multimodal Support
### Basic Image Analysis
```typescript
import { bizrouter } from '@bizrouter/ai-sdk-provider';
import { generateText } from 'ai';

// OpenAI and Google models support image URLs directly
const { text } = await generateText({
  model: bizrouter('openai/gpt-5.1'),
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: '이 이미지에 무엇이 있나요?' },
      { type: 'image', image: 'https://example.com/image.jpg' },
    ],
  }],
});
```

### Anthropic Models (Base64 Required)
```typescript
// Helper that converts an image URL to base64
async function imageUrlToBase64(url: string): Promise<string> {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();
  const base64 = Buffer.from(buffer).toString('base64');
  const contentType = response.headers.get('content-type') || 'image/jpeg';
  return `data:${contentType};base64,${base64}`;
}

// Using an Anthropic model
const imageBase64 = await imageUrlToBase64('https://example.com/image.jpg');

const { text } = await generateText({
  model: bizrouter('anthropic/claude-haiku-4.5'),
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: '이 이미지를 분석해주세요.' },
      { type: 'image', image: imageBase64 },
    ],
  }],
  maxOutputTokens: 500,
});
```

### Multiple Images Comparison
Analyze and compare multiple images at once:
```typescript
const { text } = await generateText({
  model: bizrouter('anthropic/claude-haiku-4.5'),
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: '이 두 이미지의 차이점을 비교해주세요.' },
      { type: 'image', image: await imageUrlToBase64('https://example.com/image1.jpg') },
      { type: 'image', image: await imageUrlToBase64('https://example.com/image2.jpg') },
    ],
  }],
  maxOutputTokens: 1000,
});
```