# @55387.ai/uniapi-sdk

TypeScript SDK for One API - a unified AI client with an OpenAI-compatible interface.

## Installation

```bash
npm install @55387.ai/uniapi-sdk
# or
pnpm add @55387.ai/uniapi-sdk
# or
yarn add @55387.ai/uniapi-sdk
```

## Quick Start

```typescript
import { OneClient } from '@55387.ai/uniapi-sdk';

const client = new OneClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://your-one-api-server.com', // or http://localhost:3000 for local
});

// Simple chat completion
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```

## API Reference

### OneClient

The main client class for interacting with One API.

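For reference, a client configured with every option spelled out might look like this (values are illustrative; the defaults match those documented under Constructor Options):

```typescript
import { OneClient } from '@55387.ai/uniapi-sdk';

const client = new OneClient({
  apiKey: 'your-api-key',           // required
  baseUrl: 'http://localhost:3000', // optional; this is the default
  timeout: 60_000,                  // optional; 60 s default
  maxRetries: 3,                    // optional; 3 retries by default
});
```
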
#### Constructor Options

```typescript
interface OneClientOptions {
  apiKey: string;       // Your API key
  baseUrl?: string;     // API server URL (default: http://localhost:3000)
  timeout?: number;     // Request timeout in ms (default: 60000)
  maxRetries?: number;  // Max retry attempts (default: 3)
}
```

### Chat Completions

#### Non-streaming

```typescript
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ],
  temperature: 0.7,
  max_tokens: 1000,
});

console.log(response.choices[0].message.content);
```

#### Streaming

```typescript
const stream = await client.chat.completions.create({
  model: 'claude-3-sonnet',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

## Multimodal Support

### Text and Images

```typescript
import { imageUrl, imageBase64, text } from '@55387.ai/uniapi-sdk';

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: [
      text('What do you see in this image?'),
      imageUrl('https://example.com/image.jpg'),
    ],
  }],
});
```

### Base64 Images

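The example below passes a `base64Data` string to `imageBase64()`. One way to produce such a string in Node.js (a sketch: the bytes here are just the PNG file signature standing in for a real image, which you would normally load with `fs.readFileSync`):

```typescript
// Stand-in bytes (the PNG file signature). For a real image, replace the
// Buffer with e.g. fs.readFileSync('path/to/image.png').
const pngBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const base64Data = pngBytes.toString('base64'); // "iVBORw=="
```
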
```typescript
const response = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [{
    role: 'user',
    content: [
      text('Describe this image'),
      imageBase64(base64Data, 'image/png'),
    ],
  }],
});
```

## Advanced Features

### Thinking Mode (DeepSeek R1, Claude 3.7+)

```typescript
const response = await client.chat.completions.create({
  model: 'claude-3-7-sonnet',
  messages: [{ role: 'user', content: 'Solve this complex math problem...' }],
  thinking: {
    type: 'enabled',
    budget_tokens: 2048
  }
});

// Access the thinking process
console.log('Thinking:', response.choices[0].message.reasoning_content);
console.log('Answer:', response.choices[0].message.content);
```

### Function Calling (Tool Use)

```typescript
const tools = [{
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather information',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }
}];

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: "What's the weather in Beijing?" }],
  tools: tools,
});

// Check for tool calls
if (response.choices[0].message.tool_calls) {
  const toolCall = response.choices[0].message.tool_calls[0];
  console.log('Function:', toolCall.function.name);
  console.log('Args:', JSON.parse(toolCall.function.arguments));
}
```

## Streaming Chat Helper

For easier streaming chat management:
```typescript
const chat = client.createStreamingChat({
  model: 'gemini-1.5-pro',
  systemPrompt: 'You are a helpful assistant.',
});

await chat.send('Hello!', {
  onContent: (content) => process.stdout.write(content),
  onComplete: (fullContent) => console.log('\n--- Complete ---'),
  onError: (error) => console.error('Error:', error),
});
```

## Error Handling

```typescript
try {
  const response = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded');
  } else {
    console.error('API error:', error.message);
  }
}
```

## Supported Models

The SDK automatically routes to the appropriate provider based on model name:
| Provider | Model Prefixes | Example Models |
|----------|----------------|----------------|
| OpenAI | `gpt-*`, `o1*` | `gpt-4`, `gpt-3.5-turbo` |
| Anthropic | `claude-*` | `claude-3-sonnet`, `claude-3-haiku` |
| Google Gemini | `gemini-*` | `gemini-1.5-pro`, `gemini-2.0-flash` |
| DeepSeek | `deepseek-*` | `deepseek-chat`, `deepseek-reasoner` |
| Qwen (通义千问) | `qwen*` | `qwen-turbo`, `qwen-max` |
| Zhipu GLM (智谱) | `glm-*`, `codegeex-*` | `glm-4`, `codegeex-2` |
| Kimi | `kimi-*`, `moonshot-*` | `kimi-latest` |
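The prefix rules above can be summarized as a small helper. This `providerFor` function is purely illustrative (it is not part of the SDK's public API; the SDK performs the equivalent mapping internally):

```typescript
// Illustrative only: mirrors the routing table above.
function providerFor(model: string): string {
  if (/^(gpt-|o1)/.test(model)) return 'OpenAI';
  if (model.startsWith('claude-')) return 'Anthropic';
  if (model.startsWith('gemini-')) return 'Google Gemini';
  if (model.startsWith('deepseek-')) return 'DeepSeek';
  if (model.startsWith('qwen')) return 'Qwen';
  if (model.startsWith('glm-') || model.startsWith('codegeex-')) return 'Zhipu GLM';
  if (model.startsWith('kimi-') || model.startsWith('moonshot-')) return 'Kimi';
  return 'unknown';
}

console.log(providerFor('claude-3-haiku')); // "Anthropic"
```
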
## TypeScript Support

The SDK is written in TypeScript and provides full type safety:
```typescript
import type { ChatCompletionMessageParam, ChatCompletionCreateParams } from '@55387.ai/uniapi-sdk';

// All parameters are fully typed
const params: ChatCompletionCreateParams = {
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: 0.7,
};
```

## License

MIT
