@equus-ai/sdk
Official JavaScript/TypeScript SDK for the EQUUS AI Infrastructure Platform.
EQUUS provides enterprise-grade AI services:
- 🔍 EmbedCore - Document processing, OCR, and multilingual embeddings
- 📊 RAGGRAPH - Knowledge graph building and semantic querying
- 💬 LMS - RAG-augmented chat with multiple LLM providers
- 🎭 Avatar - AI video avatar presentations and streaming
Installation
npm install @equus-ai/sdk
# or
yarn add @equus-ai/sdk
# or
pnpm add @equus-ai/sdk
Quick Start
Using EQUUS Gateway (Full Platform)
import { Equus } from '@equus-ai/sdk';
const equus = new Equus({
  apiKey: 'eq_live_your_api_key'
});
// Process a document
const doc = await equus.embedcore.process({
  file: myFile,
  eventId: 'event-123'
});
// Search documents
const results = await equus.embedcore.search({
  query: 'keynote speaker schedule',
  eventId: 'event-123'
});
// Chat with RAG
const response = await equus.lms.chat({
  message: 'What are the main topics?',
  eventId: 'event-123'
});
// Start avatar session
const session = await equus.avatar.startSession({
  faceId: 'default',
  voiceId: 'rachel'
});
Using RunPod Direct (GPU-Accelerated Embeddings)
For direct access to GPU-accelerated embedding generation and OCR, bypass the Gateway:
import { EquusRunPod } from '@equus-ai/sdk';
const runpod = new EquusRunPod({
  apiKey: 'your-runpod-api-key',
  embedcoreEndpoint: '7nnw0dqwt1o920' // Default EQUUS endpoint
});
// Warmup models (recommended on first call)
await runpod.warmup();
// Generate embedding - auto-routes based on language
const english = await runpod.embed('Hello world'); // MiniLM (384d)
const hindi = await runpod.embed('नमस्ते दुनिया'); // MuRIL (768d)
// Batch embeddings (up to 100)
const batch = await runpod.embedBatch(['First text', 'Second text']);
// OCR - extract text from image
const ocr = await runpod.ocr(imageBase64);
// OCR - extract product data
const product = await runpod.ocrProduct(productImageBase64);
// OCR - extract menu items
const menu = await runpod.ocrMenu(menuImageBase64);
Services
EmbedCore - Document Processing
// Process document
await equus.embedcore.process({
  file: pdfFile,
  eventId: 'event-123',
  chunkSize: 512
});
// Semantic search
const results = await equus.embedcore.search({
  query: 'What time does registration open?',
  eventId: 'event-123',
  topK: 5
});
// List documents
const docs = await equus.embedcore.listDocuments('event-123');
// Delete document
await equus.embedcore.deleteDocument('doc-id');
RAGGRAPH - Knowledge Graphs
// Build knowledge graph
const graph = await equus.raggraph.build({
  eventId: 'event-123'
});
// Query graph
const answer = await equus.raggraph.query({
  query: 'What are the relationships between speakers?',
  eventId: 'event-123',
  includeContext: true
});
// Get graph stats
const stats = await equus.raggraph.getStats('event-123');
LMS - Chat & RAG
// Simple chat
const response = await equus.lms.chat({
  message: 'Summarize the event',
  eventId: 'event-123'
});
// Streaming chat
for await (const chunk of equus.lms.chatStream({
  message: 'Tell me about the keynote',
  eventId: 'event-123'
})) {
  process.stdout.write(chunk.content || '');
}
// List models
const models = await equus.lms.listModels();
Avatar - AI Video
// Start session
const session = await equus.avatar.startSession({
  faceId: 'face-001',
  voiceId: 'voice-rachel',
  eventId: 'event-123'
});
// Query with video response
const response = await equus.avatar.query({
  sessionId: session.sessionId,
  query: 'What time is lunch?',
  includeVideo: true
});
// Text-to-speech
const speech = await equus.avatar.speak({
  sessionId: session.sessionId,
  text: 'Welcome to the conference!'
});
// Start streaming to Zoom
await equus.avatar.startStream({
  sessionId: session.sessionId,
  platform: 'zoom',
  layout: 'pip'
});
// List faces and voices
const faces = await equus.avatar.listFaces();
const voices = await equus.avatar.listVoices();
// Stop session
await equus.avatar.stopSession(session.sessionId);
Configuration
EQUUS Gateway (Full Platform)
const equus = new Equus({
  apiKey: 'eq_live_your_api_key', // Required - EQUUS API Key
  baseUrl: 'https://equus-gateway-production.up.railway.app', // Optional
  timeout: 30000, // Request timeout in ms (default: 30000)
  retries: 3, // Retry attempts (default: 3)
  debug: true // Enable debug logging
});
RunPod Direct (GPU Embeddings)
const runpod = new EquusRunPod({
  apiKey: 'rpa_xxx', // Required - RunPod API Key
  embedcoreEndpoint: '7nnw0dqwt1o920', // Optional - default endpoint
  timeout: 60000, // Request timeout in ms (default: 60000)
  debug: true // Enable debug logging
});
Embedding Model Selection
EQUUS EmbedCore uses a hybrid embedding strategy that automatically routes to the best model based on language detection:
| Model | Dimensions | Use Case | Languages |
|-------|------------|----------|-----------|
| paraphrase-multilingual-MiniLM-L12-v2 | 384 | General/Default | 50+ languages |
| l3cube-pune/indic-sentence-bert-nli (MuRIL) | 768 | Indian Languages | Hindi, Nepali, Tamil, +14 more |
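Because the two models produce vectors of different sizes (384 vs. 768 dimensions), embeddings are only comparable when they come from the same model. A minimal cosine-similarity helper for comparing same-model vectors (plain math, not part of the SDK):

```typescript
// Cosine similarity between two same-model embedding vectors.
// Throws if the vectors have mismatched dimensions (i.e. came from
// different models, such as MiniLM 384d vs. MuRIL 768d).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error(`Dimension mismatch: ${a.length} vs ${b.length}`);
  }
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Check the exported `EmbeddingResult` type for the exact field that holds the vector returned by `runpod.embed()`.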
// Auto-detection in action:
const english = await runpod.embed('Hello world');
console.log(english.model_used); // 'default'
console.log(english.dimension); // 384
const hindi = await runpod.embed('नमस्ते दुनिया');
console.log(hindi.model_used); // 'indian'
console.log(hindi.dimension); // 768
OCR Capabilities
Extract text and structured data from images:
// Basic OCR
const text = await runpod.ocr(imageBase64, ['en', 'hi']);
// Product extraction - returns name, price, SKU, brand
const product = await runpod.ocrProduct(productImage);
console.log(product.name, product.price);
// Menu extraction - returns structured menu items
const menu = await runpod.ocrMenu(menuImage);
menu.items.forEach(item => console.log(item.name, item.price));
Health Checks
// Check gateway health
const health = await equus.health();
console.log(health.status); // 'healthy'
// Check all services
const services = await equus.servicesHealth();
console.log(services.embedcore.status);
console.log(services.raggraph.status);
console.log(services.lms.status);
console.log(services.avatar.status);
// RunPod health
const runpodHealth = await runpod.health();
console.log(runpodHealth.gpu_name); // 'NVIDIA GeForce RTX 4090'
console.log(runpodHealth.gpu_memory); // '25.4GB'
Error Handling
try {
  await equus.embedcore.process({ file, eventId });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded');
  } else {
    console.error('Error:', error.message);
  }
}
TypeScript Support
Full TypeScript support with exported types:
import {
  Equus,
  EquusConfig,
  EquusRunPod,
  RunPodConfig,
  ProcessDocumentRequest,
  ChatRequest,
  ChatResponse,
  SearchResult,
  EmbeddingResult,
  OCRResult,
  // ... more types
} from '@equus-ai/sdk';
Platform URLs
| Service | URL |
|---------|-----|
| API Gateway | https://equus-gateway-production.up.railway.app |
| EmbedCore (RunPod) | https://api.runpod.ai/v2/7nnw0dqwt1o920 |
| Documentation | https://equus-gateway-production.up.railway.app/docs |
Requirements
- Node.js >= 16.0.0
- Browser usage: any modern browser with Fetch API support
License
MIT
Support
- GitHub Issues: https://github.com/whatsupdoc-in/equus-platform/issues
- Documentation: https://equus-gateway-production.up.railway.app/docs
