@jutech-devs/universal-ai
v1.0.0
Universal AI SDK supporting multiple providers like OpenAI, Anthropic, Google, and more
Universal AI SDK
A unified SDK for interacting with all major AI providers, with React hooks support.
Supported Providers & Models
🤖 OpenAI
- Latest: `gpt-4o`, `gpt-4o-mini`, `o1-preview`, `o1-mini`
- GPT-4: `gpt-4-turbo`, `gpt-4`, `gpt-4-32k`
- GPT-3.5: `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`
- Legacy: `text-davinci-003`, `code-davinci-002`

🧠 Anthropic Claude
- Claude 3.5: `claude-3-5-sonnet-20241022`, `claude-3-5-haiku-20241022`
- Claude 3: `claude-3-opus-20240229`, `claude-3-sonnet-20240229`, `claude-3-haiku-20240307`
- Claude 2: `claude-2.1`, `claude-2.0`, `claude-instant-1.2`

🔍 Google Gemini
- Gemini 2.0: `gemini-2.0-flash-exp`
- Gemini 1.5: `gemini-1.5-pro-002`, `gemini-1.5-pro`, `gemini-1.5-flash-002`, `gemini-1.5-flash`, `gemini-1.5-flash-8b`
- Legacy: `gemini-pro`, `gemini-pro-vision`, `gemini-ultra`

🚀 Groq (Ultra-Fast Inference)
- Llama 3.3: `llama-3.3-70b-versatile`
- Llama 3.2: `llama-3.2-90b-text-preview`, `llama-3.2-11b-text-preview`
- Llama 3.1: `llama-3.1-70b-versatile`, `llama-3.1-8b-instant`
- Mixtral: `mixtral-8x7b-32768`
- Gemma: `gemma2-9b-it`, `gemma-7b-it`

🎯 Cohere
- Command R: `command-r-plus`, `command-r`
- Command: `command`, `command-nightly`, `command-light`

🌟 Mistral AI
- Large: `mistral-large-latest`, `mistral-large-2407`
- Medium/Small: `mistral-medium-latest`, `mistral-small-latest`
- Open Source: `open-mistral-7b`, `open-mixtral-8x7b`, `open-mixtral-8x22b`
- Code: `codestral-latest`, `codestral-2405`

🤖 xAI (Grok)
- Grok: `grok-beta`, `grok-vision-beta`

🧮 DeepSeek
- Models: `deepseek-chat`, `deepseek-coder`, `deepseek-reasoner`
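Because every provider is reached through the same `chat(messages)` call, the unified interface lends itself to fallback chains: if one provider is down or rate-limited, try the next. The sketch below is illustrative only — `ChatClient` and `chatWithFallback` are hypothetical helper names, not package exports — and uses stand-in clients so it runs on its own; real `UniversalAI` instances would take their place.

```typescript
// Minimal shape shared by the clients (hypothetical; assumes the real
// UniversalAI class exposes a compatible chat(messages) method).
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }
interface ChatClient { chat(messages: ChatMessage[]): Promise<{ content: string }>; }

// Try each client in order and return the first successful response.
async function chatWithFallback(clients: ChatClient[], messages: ChatMessage[]) {
  let lastError: unknown;
  for (const client of clients) {
    try {
      return await client.chat(messages);
    } catch (err) {
      lastError = err; // provider down or rate-limited: try the next one
    }
  }
  throw lastError;
}

// Demo with stand-in clients: the first always fails, the second succeeds.
const flaky: ChatClient = { chat: async () => { throw new Error('rate limited'); } };
const healthy: ChatClient = { chat: async () => ({ content: 'Hello from fallback!' }) };

chatWithFallback([flaky, healthy], [{ role: 'user', content: 'Hi' }])
  .then((res) => console.log(res.content)); // logs "Hello from fallback!"
```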
API Reference
UniversalAI Class
```typescript
const ai = new UniversalAI(config);

// Chat completion
const response = await ai.chat(messages);

// Streaming
for await (const chunk of ai.stream(messages)) {
  console.log(chunk.content);
}

// Get available models
const models = ai.getModels();

// Update configuration
ai.updateConfig({ temperature: 0.8 });
```

React Hooks
useAI
```typescript
const {
  messages,       // Array of chat messages
  isLoading,      // Loading state
  error,          // Error message
  response,       // Last AI response
  sendMessage,    // Function to send a message
  clearMessages,  // Function to clear chat history
  updateConfig,   // Function to update AI config
  models,         // Available models for current provider
} = useAI(config);
```

useAIStream
```typescript
const {
  messages,          // Array of chat messages
  streamingContent,  // Current streaming content
  isLoading,         // Loading state
  error,             // Error message
  sendMessage,       // Function to send a message
  clearMessages,     // Function to clear chat history
  models,            // Available models
} = useAIStream(config);
```

Configuration Options
```typescript
interface UniversalAIConfig {
  provider: 'openai' | 'anthropic' | 'google';
  apiKey: string;
  model: string;
  temperature?: number; // 0-1, default: 0.7
  maxTokens?: number;   // Max response tokens
  topP?: number;        // 0-1, nucleus sampling
  baseURL?: string;     // Custom API endpoint
}
```

Examples
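Another pattern the shared interface makes easy is racing several providers and keeping whichever answers first. The snippet below is a hedged sketch — `fastestChat` is a hypothetical helper, not a package export — written with stand-in clients so it runs on its own; in practice the `UniversalAI` instances from the multi-provider example below would take their place.

```typescript
interface Msg { role: 'system' | 'user' | 'assistant'; content: string; }
interface Client { chat(messages: Msg[]): Promise<{ content: string }>; }

// Settle with whichever client settles first. Note that Promise.race also
// propagates an early failure, so pair this with a fallback for robustness.
function fastestChat(clients: Client[], messages: Msg[]) {
  return Promise.race(clients.map((c) => c.chat(messages)));
}

// Stand-in clients with different simulated latencies.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));
const slow: Client = { chat: async () => { await delay(50); return { content: 'slow' }; } };
const fast: Client = { chat: async () => { await delay(5); return { content: 'fast' }; } };

fastestChat([slow, fast], [{ role: 'user', content: 'Hi' }])
  .then((res) => console.log(res.content)); // logs "fast"
```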
Multi-Provider Chat
```typescript
import { UniversalAI } from '@jutech-devs/universal-ai';

// OpenAI
const openai = new UniversalAI({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4',
});

// Anthropic
const claude = new UniversalAI({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-5-sonnet-20241022',
});

// Google
const gemini = new UniversalAI({
  provider: 'google',
  apiKey: process.env.GOOGLE_API_KEY!,
  model: 'gemini-1.5-pro',
});

const messages = [
  { role: 'user', content: 'Explain quantum computing' },
];

const [openaiResponse, claudeResponse, geminiResponse] = await Promise.all([
  openai.chat(messages),
  claude.chat(messages),
  gemini.chat(messages),
]);
```

Custom System Prompts
```typescript
import { createSystemMessage, createUserMessage } from '@jutech-devs/universal-ai';

const messages = [
  createSystemMessage('You are a helpful coding assistant.'),
  createUserMessage('How do I create a React component?'),
];

const response = await ai.chat(messages);
```

Token Usage Tracking
```typescript
const response = await ai.chat(messages);

if (response.usage) {
  console.log(`Tokens used: ${response.usage.totalTokens}`);
  console.log(`Prompt tokens: ${response.usage.promptTokens}`);
  console.log(`Completion tokens: ${response.usage.completionTokens}`);
}
```

License
MIT © JuTech Devs
