@divetocode/aibot-bridge
v0.9.0
Unified TypeScript bridge for multiple AI providers (OpenAI, Claude, Gemini, Groq, Hugging Face) with consistent API and Telegram integration
Multi-AI Text Bridge
A unified TypeScript library for interfacing with multiple AI providers, including OpenAI, Claude (Anthropic), Gemini, Groq, and Hugging Face. Simplify your AI integrations with a single, consistent API across all major language-model providers.
Features
- Multi-Provider Support: OpenAI, Claude/Anthropic, Gemini, Groq, and Hugging Face
- Consistent API: Same interface across all providers
- TypeScript First: Full type safety with intelligent model suggestions
- Flexible Configuration: Environment variables or direct API key configuration
- Telegram Integration: Built-in support for sending AI responses to Telegram
- Temperature Control: Fine-tune response creativity per request
- System Prompts: Configure AI behavior with system-level instructions
Installation
npm install @divetocode/aibot-bridge
# or
yarn add @divetocode/aibot-bridge
Quick Start
import { AIBotBridge } from '@divetocode/aibot-bridge';
// Initialize with OpenAI
const bridge = new AIBotBridge({
provider: 'openai',
model: 'gpt-4o',
openaiApiKey: 'your-api-key', // or use OPENAI_API_KEY env var
systemPrompt: 'You are a helpful assistant.',
temperature: 0.7
});
// Ask a question
const response = await bridge.ask('What is the capital of France?');
console.log(response); // "The capital of France is Paris."
Supported Providers
OpenAI
const bridge = new AIBotBridge({
provider: 'openai',
model: 'gpt-4o', // or 'gpt-4o-mini', 'gpt-4.1-mini'
openaiApiKey: process.env.OPENAI_API_KEY
});
Claude (Anthropic)
const bridge = new AIBotBridge({
provider: 'claudeai', // or 'anthropic' for legacy
model: 'claude-3-7-sonnet-20250219',
anthropicApiKey: process.env.ANTHROPIC_API_KEY
});
Gemini (Google)
const bridge = new AIBotBridge({
provider: 'gemini',
model: 'gemini-1.5-pro', // or 'gemini-1.5-flash'
geminiApiKey: process.env.GEMINI_API_KEY
});
Groq
const bridge = new AIBotBridge({
provider: 'groq',
model: 'llama-3.1-70b-versatile',
groqApiKey: process.env.GROQ_API_KEY
});
Hugging Face
const bridge = new AIBotBridge({
provider: 'huggingface',
model: 'mistralai/Mistral-7B-Instruct-v0.2',
hfApiKey: process.env.HF_API_KEY
});
API Reference
Constructor Options
type BridgeOptions = {
provider: 'openai' | 'claudeai' | 'anthropic' | 'gemini' | 'groq' | 'huggingface';
model?: string; // Provider-specific model name
systemPrompt?: string; // Global system prompt
temperature?: number; // Response creativity (0-1)
telegramToken?: string; // For askAndSend functionality
// Provider-specific API keys (optional if using environment variables)
openaiApiKey?: string;
anthropicApiKey?: string;
geminiApiKey?: string;
groqApiKey?: string;
hfApiKey?: string;
};
Methods
ask(userMessage: string, options?: AskOptions): Promise<string>
Send a message to the AI and get a text response.
const response = await bridge.ask('Explain quantum computing');
// With options
const response = await bridge.ask('Write a poem', {
systemPrompt: 'You are a creative poet.',
temperature: 0.9
});
askAndSend(chatId: number | string, userMessage: string, options?: AskOptions): Promise<string>
Send a message to AI and forward the response to a Telegram chat.
// Requires telegramToken in constructor
const bridge = new AIBotBridge({
provider: 'openai',
telegramToken: process.env.TELEGRAM_BOT_TOKEN
});
const response = await bridge.askAndSend(
123456789, // Telegram chat ID
'What\'s the weather like?'
);
Ask Options
interface AskOptions {
systemPrompt?: string; // Override global system prompt
temperature?: number; // Override global temperature
}
Environment Variables
Set these environment variables to avoid passing API keys directly:
# OpenAI
OPENAI_API_KEY=your_openai_api_key
# Anthropic/Claude
ANTHROPIC_API_KEY=your_anthropic_api_key
# Google Gemini
GEMINI_API_KEY=your_gemini_api_key
# Groq
GROQ_API_KEY=your_groq_api_key
# Hugging Face
HF_API_KEY=your_huggingface_api_key
# Telegram (optional)
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
Model Options
OpenAI Models
- gpt-4o (default)
- gpt-4o-mini
- gpt-4.1-mini
- Any other OpenAI model string
Claude Models
- claude-3-7-sonnet-20250219 (default)
- claude-3-5-haiku-20241022
- claude-3-opus-20240229
- Any other Claude model string
Gemini Models
- gemini-1.5-pro (default)
- gemini-1.5-flash
- Any other Gemini model string
Groq Models
- llama-3.1-70b-versatile (default)
- llama-3.1-8b-instant
- mixtral-8x7b-32768
- gemma2-9b-it
- Any other Groq model string
Hugging Face Models
- mistralai/Mistral-7B-Instruct-v0.2 (default)
- meta-llama/Meta-Llama-3-8B-Instruct
- Any other Hugging Face model string
Usage Examples
Basic Text Generation
import { AIBotBridge } from '@divetocode/aibot-bridge';
const bridge = new AIBotBridge({
provider: 'openai',
systemPrompt: 'You are a helpful coding assistant.'
});
const code = await bridge.ask('Write a Python function to calculate fibonacci numbers');
console.log(code);
Provider Comparison
const providers = ['openai', 'claudeai', 'gemini'] as const;
for (const provider of providers) {
const bridge = new AIBotBridge({ provider });
const response = await bridge.ask('What is artificial intelligence?');
console.log(`${provider}: ${response.slice(0, 100)}...`);
}
Dynamic Temperature Control
const bridge = new AIBotBridge({
provider: 'claudeai',
temperature: 0.3 // Conservative default
});
// Creative writing with high temperature
const story = await bridge.ask('Write a short story about robots', {
temperature: 0.9
});
// Factual response with low temperature
const facts = await bridge.ask('List 5 facts about Mars', {
temperature: 0.1
});
Telegram Bot Integration
import TelegramBot from 'node-telegram-bot-api';
import { AIBotBridge } from '@divetocode/aibot-bridge';
const bot = new TelegramBot(process.env.TELEGRAM_BOT_TOKEN!, { polling: true });
const bridge = new AIBotBridge({
provider: 'gemini',
telegramToken: process.env.TELEGRAM_BOT_TOKEN
});
bot.on('message', async (msg) => {
if (msg.text && msg.chat.id) {
// AI responds and sends to chat automatically
await bridge.askAndSend(msg.chat.id, msg.text);
}
});
Multi-Model Ensemble
async function getMultipleResponses(question: string) {
const bridges = [
new AIBotBridge({ provider: 'openai' }),
new AIBotBridge({ provider: 'claudeai' }),
new AIBotBridge({ provider: 'gemini' })
];
const responses = await Promise.all(
bridges.map(async (bridge, i) => ({
provider: ['OpenAI', 'Claude', 'Gemini'][i],
response: await bridge.ask(question)
}))
);
return responses;
}
const results = await getMultipleResponses('Explain machine learning');
results.forEach(r => console.log(`${r.provider}: ${r.response}`));
Error Handling
import { BridgeConfigError } from '@divetocode/aibot-bridge';
try {
const bridge = new AIBotBridge({
provider: 'openai'
// Missing API key
});
} catch (error) {
if (error instanceof BridgeConfigError) {
console.error('Configuration error:', error.message);
}
}
// Runtime errors from a successfully created bridge
try {
const response = await bridge.ask('Hello');
} catch (error) {
console.error('API error:', error.message);
}
Custom System Prompts per Request
const bridge = new AIBotBridge({
provider: 'claudeai'
});
// Different personalities
const responses = await Promise.all([
bridge.ask('Tell me about JavaScript', {
systemPrompt: 'You are an enthusiastic teacher who loves to explain things clearly.'
}),
bridge.ask('Tell me about JavaScript', {
systemPrompt: 'You are a senior developer who gives concise, practical advice.'
}),
bridge.ask('Tell me about JavaScript', {
systemPrompt: 'You are a poet who explains technical concepts through metaphors.'
})
]);
Advanced Configuration
Custom Model Names
// Use cutting-edge models as they become available
const bridge = new AIBotBridge({
provider: 'openai',
model: 'gpt-5-turbo' // Hypothetical future model name
});
Configuration Factory
class AIBridgeFactory {
static create(provider: string, config?: Partial<BridgeOptions>) {
const defaults = {
temperature: 0.7,
systemPrompt: 'You are a helpful assistant.'
};
return new AIBotBridge({
provider: provider as any,
...defaults,
...config
});
}
}
const bridge = AIBridgeFactory.create('gemini', {
temperature: 0.9,
model: 'gemini-1.5-flash'
});
Performance Tips
- Reuse Bridge Instances: Create one instance per provider to avoid re-initialization overhead
- Batch Requests: Use Promise.all for concurrent requests when possible
- Choose Appropriate Models: Smaller models (like mini variants) are faster for simple tasks
- Set Reasonable Timeouts: Some providers may have longer response times
- Cache Responses: Implement caching for frequently asked questions
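The reuse and caching tips above can be sketched as a small wrapper. This is not part of the library, just a minimal caching helper written against the `ask(message)` signature shown earlier; the `AskFn` type and `withCache` name are illustrative:

```typescript
// Stand-in for bridge.ask — any async (prompt) => string works.
type AskFn = (prompt: string) => Promise<string>;

// Wraps an ask function with an in-memory cache keyed by prompt.
// Caching the promise (not the result) also deduplicates concurrent
// requests for the same prompt.
function withCache(ask: AskFn): AskFn {
  const cache = new Map<string, Promise<string>>();
  return (prompt) => {
    const hit = cache.get(prompt);
    if (hit) return hit;
    const pending = ask(prompt).catch((err) => {
      cache.delete(prompt); // do not cache failures
      throw err;
    });
    cache.set(prompt, pending);
    return pending;
  };
}
```

Usage with a bridge instance might look like `const cachedAsk = withCache((q) => bridge.ask(q));` — repeated identical questions then hit the cache instead of the provider.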
Error Types
BridgeConfigError
Thrown when there are configuration issues:
- Missing API keys
- Invalid provider names
- Missing required parameters
try {
const bridge = new AIBotBridge({
provider: 'invalid-provider' as any
});
} catch (error) {
console.log(error instanceof BridgeConfigError); // true
}
Provider-Specific Notes
OpenAI
- Uses the Chat Completions API
- Supports all GPT models
- Best general-purpose performance
Claude (Anthropic)
- Uses the Messages API
- Excellent for creative and analytical tasks
- Max tokens limit of 1024 in current implementation
Gemini (Google)
- Uses the Generative AI SDK
- Strong performance on factual queries
- Good multilingual support
Groq
- Extremely fast inference
- Limited model selection
- Great for real-time applications
Hugging Face
- Access to open-source models
- Variable response quality
- Custom response parsing implementation
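Because every provider exposes the same `ask` interface, you can fall back from one to another when a request fails (e.g. from fast-but-limited Groq to a larger model). This helper is a sketch, not part of the library; `AskClient` and `askWithFallback` are illustrative names, and any object with the `ask(message)` method shown above fits:

```typescript
// Minimal shape shared by all bridges in this library.
interface AskClient {
  ask(message: string): Promise<string>;
}

// Tries each client in order and returns the first successful answer.
// If every client fails, the last error is rethrown.
async function askWithFallback(clients: AskClient[], message: string): Promise<string> {
  let lastError: unknown = new Error('no clients provided');
  for (const client of clients) {
    try {
      return await client.ask(message);
    } catch (err) {
      lastError = err; // remember and try the next provider
    }
  }
  throw lastError;
}
```

For example: `askWithFallback([new AIBotBridge({ provider: 'groq' }), new AIBotBridge({ provider: 'openai' })], question)`.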
TypeScript Support
Full TypeScript support with intelligent autocompletion:
// Provider and model suggestions
const bridge = new AIBotBridge({
provider: 'openai', // Autocompletes: openai, claudeai, gemini, etc.
model: 'gpt-4o' // Autocompletes based on provider
});
// Type-safe options
const options: AskOptions = {
temperature: 0.8, // number
systemPrompt: 'Hello' // string
};
Contributing
We welcome contributions! Please follow these steps:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
License
MIT License - see LICENSE file for details.
Support
- GitHub Issues: Report bugs or request features
Simplify your AI integrations with Multi-AI Text Bridge
