algochat-ai
v1.0.0
Unofficial Node.js client for algochat.app — get AI responses (Gemini models) without needing an account. Works with Baileys WhatsApp bots and any Node.js app.
✨ Features
- 🆓 Free — No API key, no account needed
- 🤖 Gemini models — Access Google Gemini 3 Flash Preview and more
- 🖼️ Multimodal — Send images (URLs or base64) alongside text
- 🌊 Streaming — Real-time token-by-token responses
- 📱 Baileys-ready — Drop-in integration for WhatsApp bots
- 🔄 OpenAI-compatible — Same message format as the OpenAI SDK
- 🔁 Auto session management — Sessions cached, refreshed on expiry, auto-retry on 401/403
- 📦 TypeScript — Full `.d.ts` type declarations included
📋 Table of Contents
- Installation
- Quick Start
- API Reference
- Available Models
- Baileys WhatsApp Integration
- OpenAI SDK Compatibility
- Conversation History
- Multimodal Images
- Error Handling
- TypeScript Usage
- Examples Index
- How It Works
- Limitations
- License
📦 Installation
```bash
npm install algochat-ai
```
or with yarn:
```bash
yarn add algochat-ai
```
Requirements: Node.js ≥ 16.0.0
🚀 Quick Start
```js
const algochat = require('algochat-ai');

// Simple one-liner
const reply = await algochat.chat('What is the capital of France?');
console.log(reply); // "The capital of France is Paris."
```
Or using ES module syntax:
```js
import { chat } from 'algochat-ai';

const reply = await chat('Explain quantum entanglement in simple terms.');
console.log(reply);
```
📖 API Reference
chat(input, options?)
Send a message and receive the full response as a string.
```js
const algochat = require('algochat-ai');

// String shorthand
const reply = await algochat.chat('Hello!');

// Full messages array (OpenAI style)
const reply2 = await algochat.chat([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is 2+2?' },
]);

// With options
const reply3 = await algochat.chat('Tell me a joke.', {
  model: 'gemini-3-flash-preview',
  systemPrompt: 'You are a stand-up comedian.',
  timeout: 30000,
  debug: true,
});
```
Parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| input | string \| Message[] | A plain text message or an OpenAI-style messages array |
| options.model | string | Model ID (see Available Models). Default: gemini-3-flash-preview |
| options.systemPrompt | string | Custom system prompt |
| options.timeout | number | Request timeout in ms. Default: 90000 |
| options.debug | boolean | Print verbose logs. Default: false |
Returns: Promise<string> — the AI's response text
chatStream(input, onChunk, options?)
Stream the response token by token as it's generated. Great for terminal UIs or real-time UX.
```js
const algochat = require('algochat-ai');

await algochat.chatStream(
  'Write a short story about a robot.',
  (chunk) => {
    process.stdout.write(chunk); // Print each token as it arrives
  }
);
```
With async/await and the full text:
```js
const fullText = await algochat.chatStream(
  'Explain recursion.',
  (chunk) => process.stdout.write(chunk),
  { model: 'gemini-3-flash-preview' }
);
console.log('\nTotal length:', fullText.length);
```
Parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| input | string \| Message[] | Same as chat() |
| onChunk | (text: string) => void | Called with each text delta |
| options | ChatOptions | Same as chat() |
Returns: Promise<string> — the full accumulated response
chatWithImage(text, imageUrl, options?)
Send a text question alongside an image (multimodal).
```js
const algochat = require('algochat-ai');

// With a public image URL
const desc = await algochat.chatWithImage(
  'What do you see in this image?',
  'https://example.com/photo.jpg'
);

// With a base64 data URI
const fs = require('fs');
const imageData = fs.readFileSync('photo.png');
const base64Uri = `data:image/png;base64,${imageData.toString('base64')}`;
const analysis = await algochat.chatWithImage(
  'Describe this image in detail.',
  base64Uri
);
```
Parameters:
| Parameter | Type | Description |
|-----------|------|-------------|
| text | string | Your question about the image |
| imageUrl | string | https:// URL or data:image/...;base64,... URI |
| options | ChatOptions | Same as chat() |
Returns: Promise<string>
createCompletion(messages, options?)
Returns a full OpenAI-compatible chat completion object. Useful when you want the same shape as openai.chat.completions.create().
```js
const algochat = require('algochat-ai');

const completion = await algochat.createCompletion([
  { role: 'user', content: 'Say hello!' }
]);

console.log(completion.choices[0].message.content);
// → "Hello! How can I help you today?"
console.log(completion.usage);
// → { prompt_tokens: ..., completion_tokens: ..., total_tokens: ... }
```
Returns: Promise<ChatCompletion> — an OpenAI-compatible object
AlgoChatClient (class)
For advanced use cases, create a custom client instance with its own session cache.
```js
const { AlgoChatClient } = require('algochat-ai');

const client = new AlgoChatClient({
  debug: true,    // Verbose logging
  timeout: 60000, // 60s timeout
});

// Get the raw SSE stream
const { stream, chatId } = await client.createChatStream({
  model: 'gemini-3-flash-preview',
  messages: [{ role: 'user', content: 'Hello!' }],
  systemPrompt: 'You are a helpful assistant.',
});

// Option A: consume as text
const text = await client.streamToText(stream);
console.log(text);

// Option B: process the stream manually
stream.on('data', (chunk) => {
  // raw SSE chunks from algochat.app
});
stream.on('end', () => console.log('Done'));
```
Constructor options
| Option | Type | Description |
|--------|------|-------------|
| debug | boolean | Enable verbose logging |
| timeout | number | Request timeout in ms |
Methods
| Method | Returns | Description |
|--------|---------|-------------|
| createChatStream(params) | Promise<{stream, chatId}> | Start a chat and get the raw SSE stream |
| streamToText(stream) | Promise<string> | Collect full text from an SSE stream |
| uploadImage(imageSource, cookies, filename?) | Promise<UploadedFile> | Upload an image to AlgoChat |
Model Utilities
```js
const { resolveModel, getModelList, ALGOCHAT_MODELS, DEFAULT_MODEL } = require('algochat-ai');

// Resolve any model name → AlgoChat model ID
resolveModel('gpt-4o');                 // → 'gemini-3-flash-preview'
resolveModel('gemini-3-flash-preview'); // → 'gemini-3-flash-preview'
resolveModel('unknown');                // → 'gemini-3-flash-preview' (fallback)

// Get all models in OpenAI format
const models = getModelList();

// Raw model definitions
console.log(ALGOCHAT_MODELS);

// Default model
console.log(DEFAULT_MODEL); // 'gemini-3-flash-preview'
```
🤖 Available Models
| Model ID | Name | Context | Free? |
|----------|------|---------|-------|
| gemini-3-flash-preview | Gemini 3 Flash Preview | 128K tokens | ✅ Yes |
| gemini-2.5-pro | Gemini 2.5 Pro | 200K tokens | ❌ Needs AlgoChat login |
| gemini-2.0-flash | Gemini 2.0 Flash | 128K tokens | ❌ Needs AlgoChat login |
OpenAI Aliases (all map to gemini-3-flash-preview):
gpt-4o, gpt-4, gpt-3.5-turbo, gpt-4-turbo, gpt-4o-mini
Note: Only `gemini-3-flash-preview` works without an AlgoChat account. The other models require a valid logged-in session, which this package doesn't support yet.
📱 Baileys WhatsApp Integration
This is the primary use case — building a WhatsApp bot using @whiskeysockets/baileys.
Prerequisites
```bash
npm install @whiskeysockets/baileys algochat-ai qrcode-terminal pino
```
Complete Working Bot
```js
const {
  default: makeWASocket,
  DisconnectReason,
  useMultiFileAuthState,
  fetchLatestBaileysVersion,
} = require('@whiskeysockets/baileys');
const algochat = require('algochat-ai');
const qrcode = require('qrcode-terminal');
const P = require('pino');

// Per-user conversation history (keeps last 10 exchanges)
const history = new Map();
const MAX_HISTORY = 10;

async function connectToWhatsApp() {
  const { state, saveCreds } = await useMultiFileAuthState('./auth_info');
  const { version } = await fetchLatestBaileysVersion();

  const sock = makeWASocket({
    version,
    auth: state,
    logger: P({ level: 'silent' }),
    printQRInTerminal: false,
  });

  // Show QR code
  sock.ev.on('connection.update', ({ connection, lastDisconnect, qr }) => {
    if (qr) {
      console.log('Scan this QR code:');
      qrcode.generate(qr, { small: true });
    }
    if (connection === 'open') console.log('✅ WhatsApp connected!');
    if (connection === 'close') {
      const shouldReconnect =
        lastDisconnect?.error?.output?.statusCode !== DisconnectReason.loggedOut;
      if (shouldReconnect) connectToWhatsApp();
    }
  });

  sock.ev.on('creds.update', saveCreds);

  // Handle incoming messages
  sock.ev.on('messages.upsert', async ({ messages, type }) => {
    if (type !== 'notify') return;

    for (const msg of messages) {
      if (msg.key.fromMe) continue;

      const jid = msg.key.remoteJid;
      const userMsg = msg.message?.conversation
        || msg.message?.extendedTextMessage?.text;
      if (!userMsg?.trim()) continue;

      // /clear command to reset conversation
      if (userMsg.trim() === '/clear') {
        history.delete(jid);
        await sock.sendMessage(jid, { text: '🗑️ Chat history cleared!' }, { quoted: msg });
        continue;
      }

      console.log(`💬 ${jid}: ${userMsg}`);
      await sock.sendPresenceUpdate('composing', jid); // "typing..." indicator

      try {
        // Get or create the user's conversation history
        const userHistory = history.get(jid) || [];

        // Build the full messages array with history
        const messagesToSend = [
          {
            role: 'system',
            content: 'You are a helpful WhatsApp assistant. Keep responses concise and friendly.',
          },
          ...userHistory,
          { role: 'user', content: userMsg },
        ];

        const reply = await algochat.chat(messagesToSend);

        // Save to history
        userHistory.push({ role: 'user', content: userMsg });
        userHistory.push({ role: 'assistant', content: reply });
        if (userHistory.length > MAX_HISTORY * 2) userHistory.splice(0, 2);
        history.set(jid, userHistory);

        await sock.sendPresenceUpdate('paused', jid);
        await sock.sendMessage(jid, { text: reply }, { quoted: msg });
        console.log(`🤖 Replied: ${reply.slice(0, 80)}...`);
      } catch (err) {
        console.error('Error:', err.message);
        await sock.sendPresenceUpdate('paused', jid);
        await sock.sendMessage(
          jid,
          { text: '⚠️ Sorry, something went wrong. Please try again.' },
          { quoted: msg }
        );
      }
    }
  });
}

connectToWhatsApp();
```
Group Chat Support
To respond in group chats only when the message starts with `!`:
```js
// Inside the message handler:
const isGroup = jid.endsWith('@g.us');
if (isGroup && !userMsg.startsWith('!')) continue; // Skip unprefixed group messages
const cleanMsg = userMsg.startsWith('!') ? userMsg.slice(1).trim() : userMsg;
```
Handling Image Messages
```js
// Requires: const { downloadMediaMessage } = require('@whiskeysockets/baileys');
const caption = msg.message?.imageMessage?.caption;
const imageMsg = msg.message?.imageMessage;

if (imageMsg) {
  // Download the image as a Buffer and convert to a base64 data URI
  const buffer = await downloadMediaMessage(msg, 'buffer', {});
  const base64 = `data:${imageMsg.mimetype};base64,${buffer.toString('base64')}`;

  const reply = await algochat.chatWithImage(
    caption || 'What is in this image?',
    base64
  );
  await sock.sendMessage(jid, { text: reply }, { quoted: msg });
}
```
🔌 OpenAI SDK Compatibility
Since algochat-ai uses the same message format as OpenAI, you can use it as a drop-in replacement:
Replace the OpenAI SDK
```js
// BEFORE (using the openai SDK — needs an API key)
const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
const text = completion.choices[0].message.content;
```
```js
// AFTER (using algochat-ai — no API key needed!)
const algochat = require('algochat-ai');

const text = await algochat.chat('Hello!');

// or for the same response shape:
const completion = await algochat.createCompletion([
  { role: 'user', content: 'Hello!' }
]);
const text2 = completion.choices[0].message.content;
```
With LangChain (custom LLM wrapper)
```js
const algochat = require('algochat-ai');

// Minimal wrapper that uses algochat-ai as the LLM backend.
// For a full LangChain integration, extend BaseChatModel from
// '@langchain/core/language_models/chat_models' instead.
class AlgoChatLLM {
  async call(prompt) {
    return algochat.chat(prompt);
  }
}
```
💬 Conversation History
Pass a full messages array to maintain context across turns:
```js
const algochat = require('algochat-ai');

// Build history manually
const conversation = [
  { role: 'system', content: 'You are a concise math tutor.' },
  { role: 'user', content: 'What is 5 + 3?' },
  { role: 'assistant', content: '5 + 3 = 8.' },
  { role: 'user', content: 'Now double that.' },
];

const reply = await algochat.chat(conversation);
console.log(reply); // "8 doubled is 16."
```
Tip: Keep history to the last 10–20 exchanges to avoid exceeding context limits.
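The tip above can be sketched as a small helper that caps history before each request. This is a hypothetical `trimHistory` function (not part of the package) that preserves any system message and keeps only the most recent exchanges:

```js
// Hypothetical helper: keep system messages plus the last N user/assistant
// exchanges, dropping the oldest turns first.
function trimHistory(messages, maxExchanges = 10) {
  const system = messages.filter((m) => m.role === 'system');
  const turns = messages.filter((m) => m.role !== 'system');
  // Each exchange is one user message plus one assistant message.
  const keep = turns.slice(-maxExchanges * 2);
  return [...system, ...keep];
}

// Example: a long conversation trimmed to the last 2 exchanges.
const long = [
  { role: 'system', content: 'You are a concise math tutor.' },
  { role: 'user', content: 'Q1' }, { role: 'assistant', content: 'A1' },
  { role: 'user', content: 'Q2' }, { role: 'assistant', content: 'A2' },
  { role: 'user', content: 'Q3' }, { role: 'assistant', content: 'A3' },
];
const trimmed = trimHistory(long, 2);
console.log(trimmed.length); // 5: the system message + the last 2 exchanges
```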
🖼️ Multimodal Images
Three ways to send images:
1. Convenience helper
```js
const reply = await algochat.chatWithImage(
  'What objects are in this photo?',
  'https://example.com/photo.jpg'
);
```
2. Inline content array (OpenAI style)
```js
const reply = await algochat.chat([
  {
    role: 'user',
    content: [
      { type: 'text', text: 'Describe this image.' },
      { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
    ],
  },
]);
```
3. Base64 data URI
```js
const fs = require('fs');

const image = fs.readFileSync('./photo.png');
const uri = `data:image/png;base64,${image.toString('base64')}`;
const reply = await algochat.chatWithImage('What is in this image?', uri);
```
🛡️ Error Handling
```js
const algochat = require('algochat-ai');

try {
  const reply = await algochat.chat('Hello!');
  console.log(reply);
} catch (err) {
  if (err.response) {
    // HTTP error from algochat.app
    console.error('HTTP Error:', err.response.status, err.response.data);
  } else if (err.code === 'ECONNABORTED') {
    console.error('Request timed out. Try again.');
  } else {
    console.error('Error:', err.message);
  }
}
```
Common errors:
| Error | Cause | Fix |
|-------|-------|-----|
| ECONNABORTED | Request timed out | Increase timeout option |
| 403 | AlgoChat blocked the request | The library auto-retries once; if it persists, try again later |
| 500 | AlgoChat server error | Usually temporary; retry |
| Invalid base64 data URI | Malformed image data | Ensure format: data:image/jpeg;base64,... |
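Since 403s and 500s are often transient, a thin retry wrapper can help. This is a generic sketch (the `withRetry` helper is hypothetical, not exported by the package); pass it any request function, e.g. `() => algochat.chat('Hello!')`:

```js
// Hypothetical retry helper: re-run a request a few times with a fixed delay.
// Only retries failures that look transient (timeouts, 403, 5xx).
async function withRetry(fn, { retries = 2, delayMs = 1000 } = {}) {
  let lastErr;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const status = err.response?.status;
      const transient =
        err.code === 'ECONNABORTED' || status === 403 || status >= 500;
      if (!transient || attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr;
}

// Usage (assuming algochat-ai is installed):
// const reply = await withRetry(() => algochat.chat('Hello!'));
```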
🔷 TypeScript Usage
Full type support is included:
```ts
import { chat, chatStream, chatWithImage, createCompletion, AlgoChatClient } from 'algochat-ai';
import type { Message, ChatOptions, ChatCompletion } from 'algochat-ai';

// Typed messages
const messages: Message[] = [
  { role: 'system', content: 'You are helpful.' },
  { role: 'user', content: 'Hello!' },
];
const reply: string = await chat(messages);

// Typed completion
const completion: ChatCompletion = await createCompletion(messages);

// Custom client
const client = new AlgoChatClient({ debug: false, timeout: 60000 });
const { stream } = await client.createChatStream({ model: 'gemini-3-flash-preview', messages });
const text = await client.streamToText(stream);
```
📁 Examples Index
| File | Description |
|------|-------------|
| examples/basic.js | Simple chat() calls |
| examples/streaming.js | Token-by-token streaming |
| examples/multimodal.js | Image analysis |
| examples/advanced.js | Custom client, completions, error handling |
| examples/whatsapp-baileys.js | Full Baileys WhatsApp bot |
Run any example:
```bash
node examples/basic.js
node examples/streaming.js
node examples/whatsapp-baileys.js
```
⚙️ How It Works
This package reverse-engineers the algochat.app API flow:
1. `GET https://algochat.app/api/csrf` → receives a `csrf_token` cookie
2. `POST https://algochat.app/api/session` → receives a `zola_sid` (anonymous user session)
3. `POST https://algochat.app/api/create-chat` with `{ title: "New Chat", model: "gemini-3-flash-preview" }` → returns a `chatId`
4. [Optional] `POST https://algochat.app/api/files` → uploads an image, returns a `fileId`
5. `POST https://algochat.app/api/chat` with `{ chatId, userId, model, messages, ... }` → SSE stream of `text-delta` events
6. Parse the SSE stream:

```
data: {"type":"text-delta","delta":"Hello"}
data: {"type":"finish"}
data: [DONE]
```

Sessions are cached for 25 minutes and automatically refreshed. On 401/403 errors, the session is invalidated and re-fetched before retrying once.
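The text-delta events in step 6 can be handled with a few lines of string parsing. A minimal sketch, assuming each SSE line arrives as `data: <json>` (this approximates what `streamToText` does internally; the package's actual implementation may differ):

```js
// Minimal SSE text-delta parser: feed it raw "data: ..." lines,
// collect the deltas, and stop at [DONE].
// Returns false when the stream is finished, true otherwise.
function parseSSELine(line, out) {
  if (!line.startsWith('data: ')) return true; // ignore non-data lines
  const payload = line.slice(6).trim();
  if (payload === '[DONE]') return false; // stream finished
  try {
    const event = JSON.parse(payload);
    if (event.type === 'text-delta') out.push(event.delta);
  } catch {
    // skip malformed chunks
  }
  return true;
}

const chunks = [];
const lines = [
  'data: {"type":"text-delta","delta":"Hel"}',
  'data: {"type":"text-delta","delta":"lo"}',
  'data: {"type":"finish"}',
  'data: [DONE]',
];
for (const line of lines) {
  if (!parseSSELine(line, chunks)) break;
}
console.log(chunks.join('')); // "Hello"
```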
⚠️ Limitations
- 🔓 Anonymous only — Only the `gemini-3-flash-preview` model works without authentication
- 🚫 No file uploads — The `/api/files` endpoint returns 403 for anonymous sessions; use public image URLs or base64 data URIs inline in messages instead
- 📡 Upstream dependent — This package depends on algochat.app's undocumented API, which may change without notice
- 🔁 Rate limiting — AlgoChat may rate-limit aggressive usage; add delays if needed
- 📏 Context — Each chat is a new session (no server-side memory); pass history yourself in the messages array
🤝 Contributing
This is an unofficial client. Contributions, bug reports, and PRs are welcome!
📄 License
MIT — see LICENSE
Disclaimer: This is an unofficial package and is not affiliated with or endorsed by algochat.app. Use responsibly.
