# @55387.ai/uniapi-core

v0.1.8
Core library for One API: unified AI provider adapters with an OpenAI-compatible interface.
## Installation

```sh
npm install @55387.ai/uniapi-core
# or
pnpm add @55387.ai/uniapi-core
# or
yarn add @55387.ai/uniapi-core
```

## Overview
@55387.ai/uniapi-core provides the foundation for building AI API integrations. It includes:
- Provider Adapters: Standardized interfaces for different AI providers
- Model Routing: Automatic routing based on model names
- Configuration Management: Environment-based config loading
- Error Handling: Consistent error types and handling
- Validation: Request/response validation
- Logging: Structured logging utilities
## Quick Start

```ts
import { ModelRouter, ConfigLoader } from '@55387.ai/uniapi-core';

// Load configuration
const config = new ConfigLoader().load();

// Create router
const router = new ModelRouter(config);

// Get the provider for a model
const provider = router.getProvider('gpt-4');

// Make a request
const response = await provider.chatCompletion({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

## Architecture
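Providers also expose a streaming variant, `chatCompletionStream`, which returns an `AsyncIterable` of chunks. The sketch below shows the consumption pattern in a self-contained form — the `Chunk` shape and the mock generator are illustrative stand-ins, not the library's actual types:

```typescript
// Illustrative chunk shape, modeled on OpenAI-style streaming deltas.
interface Chunk {
  choices: { delta: { content?: string } }[];
}

// Mock stream standing in for provider.chatCompletionStream(request).
async function* mockStream(): AsyncIterable<Chunk> {
  for (const piece of ['Hel', 'lo', '!']) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}

// Concatenate streamed deltas into the full response text.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

With a real provider the same loop applies: `for await (const chunk of provider.chatCompletionStream(request)) { ... }`.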
### Provider Pattern

All AI providers implement the `BaseProvider` interface:

```ts
abstract class BaseProvider {
  abstract chatCompletion(request: ChatCompletionRequest): Promise<ChatCompletionResponse>;
  abstract chatCompletionStream(request: ChatCompletionRequest): AsyncIterable<ChatCompletionChunk>;
  abstract createEmbedding(request: EmbeddingRequest): Promise<EmbeddingResponse>;
}
```

### Built-in Providers
| Provider | Class | Supported Models |
|----------|-------|------------------|
| OpenAI | `OpenAIProvider` | `gpt-*`, `o1*` |
| Anthropic | `AnthropicProvider` | `claude-*` |
| Google Gemini | `GeminiProvider` | `gemini-*` |
| DeepSeek | `DeepSeekProvider` | `deepseek-*` |
| Qwen (Tongyi Qianwen) | `QwenProvider` | `qwen*` |
| Zhipu GLM | `GLMProvider` | `glm-*`, `codegeex-*` |
| Kimi (Moonshot) | `KimiProvider` | `kimi-*`, `moonshot-*` |
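The wildcard patterns above resolve by prefix: the router matches a model name against each pattern and hands the request to the corresponding provider. A self-contained sketch of how such matching might work — this is an illustration of the idea, not the library's actual routing code, and the provider names are hypothetical:

```typescript
// Hypothetical pattern table mirroring the built-in providers above.
const modelPatterns: [pattern: string, provider: string][] = [
  ['gpt-*', 'openai'],
  ['claude-*', 'anthropic'],
  ['gemini-*', 'gemini'],
  ['deepseek-*', 'deepseek'],
  ['qwen*', 'qwen'],
  ['glm-*', 'glm'],
  ['kimi-*', 'kimi'],
];

// Resolve a model name to a provider, treating a trailing '*' as "any suffix".
function resolveProvider(model: string): string | undefined {
  for (const [pattern, provider] of modelPatterns) {
    if (pattern.endsWith('*')) {
      if (model.startsWith(pattern.slice(0, -1))) return provider;
    } else if (model === pattern) {
      return provider;
    }
  }
  return undefined;
}
```

Under this scheme, `resolveProvider('claude-3-opus')` maps to the Anthropic provider, and unmatched names return `undefined` so the caller can fail fast.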
### Adding a New Provider

1. Create a provider class:

```ts
import {
  BaseProvider,
  ChatCompletionRequest,
  ChatCompletionResponse,
  ChatCompletionChunk,
  EmbeddingRequest,
  EmbeddingResponse,
} from '@55387.ai/uniapi-core';

export class MyProvider extends BaseProvider {
  async chatCompletion(request: ChatCompletionRequest): Promise<ChatCompletionResponse> {
    // Your implementation here
    const response = await this.callMyAPI(request);
    return this.convertToOpenAIFormat(response);
  }

  async *chatCompletionStream(request: ChatCompletionRequest): AsyncIterable<ChatCompletionChunk> {
    // Streaming implementation
  }

  async createEmbedding(request: EmbeddingRequest): Promise<EmbeddingResponse> {
    // Embedding implementation
  }
}
```

2. Register it with the router:

```ts
import { ModelRouter } from '@55387.ai/uniapi-core';

const router = new ModelRouter(config);
router.registerProvider('my-provider', new MyProvider(config));
```

3. Update the model routing:

```ts
// In your configuration or router setup
router.addModelMapping('my-model-*', 'my-provider');
```

## Configuration
### Environment Variables

```sh
# API keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENAI_API_KEY=...

# Other settings
ONE_API_KEYS=key1,key2,key3
LOG_LEVEL=info
```

### ConfigLoader
```ts
import { ConfigLoader } from '@55387.ai/uniapi-core';

const config = new ConfigLoader({
  // Custom config options
}).load();

console.log(config.apiKeys);
console.log(config.providers.openai.apiKey);
```

## Error Handling
The library provides consistent error types:
```ts
import { OneAPIError, ValidationError, ProviderError } from '@55387.ai/uniapi-core';

try {
  const response = await provider.chatCompletion(request);
} catch (error) {
  if (error instanceof ValidationError) {
    console.error('Invalid request:', error.message);
  } else if (error instanceof ProviderError) {
    console.error('Provider error:', error.details);
  } else if (error instanceof OneAPIError) {
    console.error('API error:', error.statusCode, error.message);
  }
}
```

## Logging
```ts
import { Logger } from '@55387.ai/uniapi-core';

const logger = new Logger('my-component');
logger.info('Processing request', { model: 'gpt-4' });
logger.error('Request failed', { error: error.message });
```

## Validation
Request validation using Zod schemas:
```ts
import { validateChatCompletionRequest } from '@55387.ai/uniapi-core';

const request = { model: 'gpt-4', messages: [...] };
const validated = validateChatCompletionRequest(request);
// validated is now type-safe
```

## Type Definitions
The library exports comprehensive TypeScript types:
```ts
import type {
  ChatCompletionRequest,
  ChatCompletionResponse,
  ChatMessage,
  ProviderConfig,
  ModelInfo,
} from '@55387.ai/uniapi-core';
```

## Utilities
### Content Conversion

```ts
import { convertContentToParts } from '@55387.ai/uniapi-core';

// Convert OpenAI-style content to a provider-specific format
const parts = convertContentToParts([
  { type: 'text', text: 'Hello' },
  { type: 'image_url', image_url: { url: '...' } },
]);
```

### Retry Logic
```ts
import { withRetry } from '@55387.ai/uniapi-core';

const result = await withRetry(
  () => provider.chatCompletion(request),
  { maxAttempts: 3, delay: 1000 }
);
```

## Development
### Building

```sh
pnpm build
```

### Testing

```sh
pnpm test
```

### Adding Tests
```ts
import { describe, it, expect } from 'vitest';
import { MyProvider } from './my-provider';

describe('MyProvider', () => {
  it('should handle chat completion', async () => {
    const provider = new MyProvider(config);
    const response = await provider.chatCompletion({
      model: 'my-model',
      messages: [{ role: 'user', content: 'Hello' }],
    });
    expect(response.choices[0].message.content).toBeDefined();
  });
});
```

## License
MIT
