orbitalsai
v1.3.1
Orbitals AI SDK
Official JavaScript/TypeScript SDK for the Orbitals AI API. Process audio, manage billing, and integrate AI-powered features into your applications with ease.
Features
- 🎯 Simple & Intuitive API - Easy-to-use methods for all Orbitals AI features
- 🎙️ Real-Time Streaming - Live audio transcription with WebSocket support
- 🔒 Type-Safe - Full TypeScript support with comprehensive type definitions
- 🔄 Automatic Retries - Built-in retry logic with exponential backoff
- ⚡ Promise-based - Modern async/await syntax
- 🌐 Universal - Works in Node.js and browser environments
- 📦 Minimal Dependencies - Lightweight with axios and ws
- 🛡️ Error Handling - Detailed error classes for better debugging
- 🌍 Multi-Language - Support for English, Hausa, Igbo, and Yoruba streaming
Installation
npm install orbitalsai
or with yarn:
yarn add orbitalsai
or with pnpm:
pnpm add orbitalsai
Quick Start
import { OrbitalsClient } from "orbitalsai";
// Initialize the client with your API key
const client = new OrbitalsClient({
apiKey: "your-api-key-here",
});
// Upload and transcribe audio
const file = /* your audio file */;
const upload = await client.audio.upload(file);
// Wait for transcription to complete
const result = await client.audio.waitForCompletion(upload.task_id);
console.log("Transcription:", result.result_text);
Real-Time Streaming Transcription
import { StreamingClient } from "orbitalsai";
const streaming = new StreamingClient("your-api-key-here");
// Handle real-time transcription events
streaming.on("partial", (msg) => console.log("Partial:", msg.text));
streaming.on("final", (msg) => console.log("Final:", msg.text));
// Connect and start streaming
await streaming.connect({ language: "english" });
// Send audio data (Int16 PCM, 16kHz, mono)
streaming.sendAudio(audioBuffer);
// Disconnect when done
streaming.disconnect();
Getting Your API Key
- Sign up at Orbitals AI
- Navigate to your dashboard
- Go to API Keys section
- Click Create New API Key
- Copy your API key and use it to initialize the SDK
Usage
Initialization
import { OrbitalsClient } from "orbitalsai";
const client = new OrbitalsClient({
apiKey: "your-api-key-here",
timeout: 30000, // optional - request timeout in ms
maxRetries: 3, // optional - number of retry attempts
debug: false, // optional - enable debug logging
});
Streaming Transcription (Real-Time ASR)
The SDK supports real-time audio transcription via WebSocket connection. This is ideal for live audio streams, voice interfaces, and real-time captioning.
Supported Languages: English, Hausa, Igbo, Yoruba
Basic Streaming Setup
import { StreamingClient, STREAMING_LANGUAGES } from "orbitalsai";
// Create streaming client with your API key
const streaming = new StreamingClient("your-api-key-here");
// Register event handlers
streaming.on("ready", (msg) => {
console.log("Connected! Session:", msg.session_id);
});
streaming.on("partial", (msg) => {
// Real-time partial results (updates as speech is recognized)
process.stdout.write(`\r${msg.text}`);
});
streaming.on("final", (msg) => {
// Final result for a speech segment (msg.timestamps when returnTimestamps: true)
console.log(`\nFinal: ${msg.text}`);
if (msg.timestamps?.length) {
msg.timestamps.forEach((w) =>
console.log(` ${w.start.toFixed(2)}s–${w.end.toFixed(2)}s: ${w.text}`)
);
}
console.log(
`Cost: $${msg.cost.toFixed(6)} | Duration: ${msg.audio_seconds}s`
);
});
streaming.on("error", (msg) => {
console.error("Error:", msg.message);
});
// Connect to streaming server
await streaming.connect({
language: "english", // "english" | "hausa" | "igbo" | "yoruba"
sampleRate: 16000, // Audio sample rate (default: 16000)
returnTimestamps: true, // Optional: get word-level timestamps in final events
debug: false, // Enable debug logging
});
// Check if connected
console.log("Connected:", streaming.isConnected);
Sending Audio Data
Audio must be sent as raw PCM data: Int16, mono, 16kHz.
// From a Buffer (Node.js)
streaming.sendAudio(audioBuffer);
// From an ArrayBuffer (Browser)
streaming.sendAudio(audioArrayBuffer);
// From Uint8Array
streaming.sendAudio(uint8Array);
Browser Example with MediaRecorder
const streaming = new StreamingClient("your-api-key");
// Connect first
await streaming.connect({ language: "english" });
// Request microphone access
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
// Create AudioContext for resampling to 16kHz
const audioContext = new AudioContext({ sampleRate: 16000 });
const source = audioContext.createMediaStreamSource(stream);
// Note: ScriptProcessorNode is deprecated in favor of AudioWorklet,
// but it keeps this example short
const processor = audioContext.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (e) => {
const inputData = e.inputBuffer.getChannelData(0);
// Convert Float32 to Int16
const int16Data = new Int16Array(inputData.length);
for (let i = 0; i < inputData.length; i++) {
int16Data[i] = Math.max(-32768, Math.min(32767, inputData[i] * 32768));
}
streaming.sendAudio(int16Data.buffer);
};
source.connect(processor);
processor.connect(audioContext.destination);
Node.js Example with Microphone
import { StreamingClient } from "orbitalsai";
import mic from "mic"; // npm install mic (CommonJS module, use a default import)
const streaming = new StreamingClient(process.env.ORBITALS_API_KEY!);
streaming.on("partial", (msg) => process.stdout.write(`\r${msg.text}`));
streaming.on("final", (msg) => console.log(`\n✓ ${msg.text}`));
await streaming.connect({ language: "english" });
// Start microphone capture (requires sox or arecord installed)
const micInstance = mic({
rate: "16000",
channels: "1",
bitwidth: "16",
encoding: "signed-integer",
});
const micStream = micInstance.getAudioStream();
micStream.on("data", (data: Buffer) => streaming.sendAudio(data));
micInstance.start();
// Stop after 30 seconds
setTimeout(() => {
micInstance.stop();
streaming.disconnect();
}, 30000);
Changing Language
// Switch language during streaming
streaming.setLanguage("hausa");
streaming.setLanguage("yoruba");
streaming.setLanguage("igbo");
// Get all supported languages
import { STREAMING_LANGUAGES } from "orbitalsai";
console.log(STREAMING_LANGUAGES); // ["english", "hausa", "igbo", "yoruba"]
Flushing and Session Stats
// Force finalize current speech segment
streaming.flush();
// Get session statistics
const stats = streaming.getStats();
console.log({
sessionId: stats.sessionId,
totalAudioSeconds: stats.totalAudioSeconds,
totalCost: stats.totalCost,
isConnected: stats.isConnected,
currentLanguage: stats.currentLanguage,
});
// Reset statistics
streaming.resetStats();
All Streaming Events
streaming.on("ready", (msg) => {
// Connection established
// msg.session_id, msg.supported_languages
});
streaming.on("partial", (msg) => {
// Real-time partial transcription
// msg.text
});
streaming.on("final", (msg) => {
// Final transcription for segment
// msg.text, msg.cost, msg.audio_seconds, msg.remaining_percent
});
streaming.on("speechStart", () => {
// Voice activity detected
});
streaming.on("speechEnd", () => {
// Voice activity ended
});
streaming.on("languageSet", (msg) => {
// Language change confirmed
// msg.language
});
streaming.on("creditsWarning", (msg) => {
// Low credits warning
// msg.remaining_percent
});
streaming.on("creditsCritical", (msg) => {
// Critical credits warning
// msg.remaining_percent
});
streaming.on("creditsExhausted", () => {
// Credits exhausted, connection will close
});
streaming.on("flushed", () => {
// Flush completed
});
streaming.on("error", (msg) => {
// Error occurred
// msg.message
});
streaming.on("disconnect", (code, reason) => {
// Disconnected from server
});
streaming.on("connect", () => {
// Connected to server (before ready)
});
Audio Processing (File Upload)
Upload Audio for Transcription
// Browser environment
const fileInput = document.querySelector('input[type="file"]');
const file = fileInput.files[0];
const upload = await client.audio.upload(file);
console.log("Task ID:", upload.task_id);
// Node.js environment with options
import fs from "fs";
const audioBuffer = fs.readFileSync("./audio.mp3");
const upload = await client.audio.upload(audioBuffer, "audio.mp3", {
generate_srt: true, // Optional: generate subtitle file
language: "english", // Optional: specify language (default: "english")
model_name: "Perigee-1", // Optional: AI model to use (default: "Perigee-1")
});
Check Transcription Status
const status = await client.audio.getStatus(taskId);
console.log("Status:", status.status);
if (status.status === "completed") {
console.log("Transcription:", status.result_text);
if (status.srt_content) {
console.log("SRT subtitles:", status.srt_content);
}
}
Wait for Completion
// Automatically polls until completion or timeout
const result = await client.audio.waitForCompletion(
taskId,
2000, // poll interval in ms (optional, default: 2000)
300000 // max wait time in ms (optional, default: 300000 - 5 minutes)
);
console.log("Transcription:", result.result_text);
Get All Tasks
// Get tasks with pagination
const response = await client.audio.getTasks({ page: 1, page_size: 20 });
response.items.forEach((task) => {
console.log(`${task.original_filename}: ${task.status}`);
});
console.log(
`Page ${response.pagination.page} of ${response.pagination.total_pages}`
);
console.log(`Total items: ${response.pagination.total_items}`);
Get Available Models
// Fetch all available transcription models
const models = await client.audio.getModels();
models.forEach((model) => {
console.log(`${model.model_name}`);
console.log(` Rate: $${model.transcription_rate_per_hour}/hour`);
console.log(` Active: ${model.is_active}`);
});
Billing
Get Account Balance
const balance = await client.billing.getBalance();
console.log(`Balance: $${balance.balance}`);
console.log(`Last updated: ${balance.last_updated}`);
Get Daily Usage History
// Get usage with date range
const usage = await client.billing.getUsageHistory({
start_date: "2024-01-01",
end_date: "2024-01-31",
page: 1,
page_size: 30,
});
console.log(`Total records: ${usage.total_records}`);
console.log("Period summary:", usage.period_summary);
usage.records.forEach((record) => {
console.log(`${record.date}:`);
console.log(
` Transcription: ${record.transcription_usage} ($${record.transcription_cost})`
);
console.log(` Total: $${record.total_cost}`);
});
Get Payment History
const payments = await client.billing.getPaymentHistory({
page: 1,
page_size: 20,
});
console.log("Payment history:", payments);
Webhooks
Subscribe to webhook notifications for async task updates:
// Subscribe to webhooks
await client.webhooks.subscribe(
"https://your-app.com/webhooks/orbitals",
"your-webhook-secret"
);
// Get subscriptions
const subscriptions = await client.webhooks.getSubscriptions();
// Unsubscribe
await client.webhooks.unsubscribe("https://your-app.com/webhooks/orbitals");
User Management
// Get current user info
const user = await client.user.get();
console.log(`${user.first_name} ${user.last_name}`);
console.log(`Verified: ${user.is_verified}`);
// Update user info
const updated = await client.user.update({
first_name: "John",
last_name: "Doe",
});
Error Handling
The SDK provides detailed error classes for different scenarios:
import {
OrbitalsClient,
AuthenticationError,
ValidationError,
RateLimitError,
NetworkError,
} from "orbitalsai";
const client = new OrbitalsClient({ apiKey: "your-api-key" });
try {
const result = await client.audio.upload(file);
} catch (error) {
if (error instanceof AuthenticationError) {
console.error("Invalid API key:", error.message);
} else if (error instanceof ValidationError) {
console.error("Invalid request:", error.message, error.details);
} else if (error instanceof RateLimitError) {
console.error("Rate limited. Retry after:", error.retryAfter);
} else if (error instanceof NetworkError) {
console.error("Network error:", error.message);
} else {
console.error("Unknown error:", error);
}
}
Error Types
- AuthenticationError - Invalid or missing API key (401)
- AuthorizationError - Insufficient permissions (403)
- NotFoundError - Resource not found (404)
- ValidationError - Invalid request parameters (422)
- RateLimitError - Too many requests (429)
- NetworkError - Connection or timeout issues
- ServerError - Internal server error (500+)
- OrbitalsError - Base error class
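The SDK already retries transient failures automatically, but if you wrap a larger job in your own retry loop, RateLimitError's retryAfter field can drive the delay. A minimal sketch — the withRetry helper is illustrative, not part of the SDK, and the duck-typed name check stands in for `error instanceof RateLimitError` so the snippet is self-contained:

```typescript
// Illustrative helper: retry a call, honoring a RateLimitError-style
// `retryAfter` (seconds) when the server provides one, otherwise falling
// back to exponential backoff (2^attempt seconds).
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      // In real code: `error instanceof RateLimitError`
      if (error?.name !== "RateLimitError" || attempt >= maxAttempts) {
        throw error; // non-retryable, or out of attempts
      }
      const delayMs = (error.retryAfter ?? 2 ** attempt) * 1000;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

In application code, import RateLimitError from "orbitalsai" as shown above and use an instanceof check instead of inspecting error.name.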
Advanced Examples
Complete Audio Processing Workflow
import { OrbitalsClient } from "orbitalsai";
import fs from "fs";
async function processAudio(filePath: string) {
const client = new OrbitalsClient({
apiKey: process.env.ORBITALS_API_KEY!,
});
try {
// 1. Upload file
console.log("Uploading audio...");
const audioBuffer = fs.readFileSync(filePath);
const prediction = await client.audio.upload(audioBuffer, "audio.mp3");
console.log("Task created:", prediction.task_id);
// 2. Wait for completion
console.log("Processing...");
const result = await client.audio.waitForCompletion(prediction.task_id);
console.log("Processing complete!");
// 3. Check balance
const balance = await client.billing.getBalance();
console.log(`Remaining balance: ${balance.balance}`);
return result;
} catch (error) {
console.error("Error processing audio:", error);
throw error;
}
}
processAudio("./my-audio.mp3")
.then((result) => console.log("Result:", result))
.catch((error) => console.error("Failed:", error));
Using with Express.js
import express from "express";
import { OrbitalsClient } from "orbitalsai";
import multer from "multer";
const app = express();
const upload = multer({ storage: multer.memoryStorage() });
const client = new OrbitalsClient({
apiKey: process.env.ORBITALS_API_KEY!,
});
app.post("/api/transcribe", upload.single("audio"), async (req, res) => {
try {
if (!req.file) {
return res.status(400).json({ error: "No file uploaded" });
}
// Upload to Orbitals AI
const prediction = await client.audio.upload(
req.file.buffer,
req.file.originalname
);
// Return task ID immediately
res.json({ task_id: prediction.task_id });
} catch (error) {
console.error("Error:", error);
res.status(500).json({ error: "Failed to process audio" });
}
});
app.get("/api/status/:taskId", async (req, res) => {
try {
const status = await client.audio.getStatus(req.params.taskId);
res.json(status);
} catch (error) {
console.error("Error:", error);
res.status(500).json({ error: "Failed to get status" });
}
});
app.listen(3000, () => {
console.log("Server running on port 3000");
});
Using with React
import { useState } from "react";
import { OrbitalsClient, PredictionData } from "orbitalsai";
const client = new OrbitalsClient({
apiKey: import.meta.env.VITE_ORBITALS_API_KEY,
});
function AudioUploader() {
const [result, setResult] = useState<PredictionData | null>(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const handleUpload = async (file: File) => {
try {
setLoading(true);
setError(null);
// Upload and wait for completion
const prediction = await client.audio.upload(file);
const completed = await client.audio.waitForCompletion(
prediction.task_id
);
setResult(completed);
} catch (err) {
setError(err instanceof Error ? err.message : "Unknown error");
} finally {
setLoading(false);
}
};
return (
<div>
<input
type="file"
accept="audio/*"
onChange={(e) => e.target.files?.[0] && handleUpload(e.target.files[0])}
disabled={loading}
/>
{loading && <p>Processing...</p>}
{error && <p>Error: {error}</p>}
{result && <pre>{JSON.stringify(result, null, 2)}</pre>}
</div>
);
}
Batch Processing
async function processBatch(files: File[]) {
const client = new OrbitalsClient({
apiKey: process.env.ORBITALS_API_KEY!,
});
// Upload all files
const predictions = await Promise.all(
files.map((file) => client.audio.upload(file))
);
console.log(`Uploaded ${predictions.length} files`);
// Wait for all to complete
const results = await Promise.all(
predictions.map((pred) => client.audio.waitForCompletion(pred.task_id))
);
return results;
}
TypeScript Support
The SDK is written in TypeScript and provides comprehensive type definitions:
import type {
OrbitalsConfig,
PredictionResponse,
PredictionData,
BillingBalance,
AudioRecordsResponse,
} from "orbitalsai";
// All types are exported for your convenience
API Reference
OrbitalsClient
Constructor
new OrbitalsClient(config: OrbitalsConfig)
Audio Methods
- audio.upload(file, filename?, options?) - Upload audio for transcription
- audio.getStatus(taskId) - Get transcription status
- audio.getTasks(options?) - Get all transcription tasks (paginated)
- audio.getModels() - Get available transcription models
- audio.waitForCompletion(taskId, pollInterval?, maxWaitTime?) - Wait for completion
Billing Methods
- billing.getBalance() - Get account balance
- billing.getUsageHistory(options?) - Get daily usage history
- billing.getPaymentHistory(options?) - Get payment history (view only)
Webhook Methods
- webhooks.subscribe(url, secret) - Subscribe to webhook notifications
- webhooks.getSubscriptions() - Get webhook subscriptions
- webhooks.unsubscribe(url) - Unsubscribe from webhooks
User Methods
- user.get() - Get current user information
- user.update(data) - Update user information
StreamingClient
Constructor
new StreamingClient(apiKey: string, baseURL?: string)
Methods
- connect(config?) - Connect to streaming server
- disconnect() - Disconnect from server
- sendAudio(data) - Send audio data (Int16 PCM, 16kHz, mono)
- setLanguage(language) - Change transcription language
- flush() - Force finalize current segment
- getStats() - Get session statistics
- resetStats() - Reset session statistics
- on(event, handler) - Register event handler
- off(event) - Remove event handler
Static Methods
- StreamingClient.getSupportedLanguages() - Get supported languages
Properties
- isConnected - Check if connected to server
Environment Variables
For security, use environment variables to store your API key:
# .env
ORBITALS_API_KEY=your-api-key-here
Then in your code:
const client = new OrbitalsClient({
apiKey: process.env.ORBITALS_API_KEY!,
});
Best Practices
- Store API Keys Securely - Never commit API keys to version control
- Use Environment Variables - Store keys in .env files
- Handle Errors Properly - Always wrap API calls in try-catch blocks
- Implement Retries - The SDK handles retries automatically, but you can adjust maxRetries
- Monitor Usage - Regularly check your balance and usage history
- Use TypeScript - Take advantage of type safety for a better development experience
Browser Support
The SDK works in all modern browsers that support ES2020 features:
- Chrome 80+
- Firefox 75+
- Safari 13.1+
- Edge 80+
For older browsers, you may need to use polyfills.
Node.js Support
Requires Node.js 16.0.0 or higher.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT © Orbitals AI
Support
- 📧 Email: [email protected]
- 📖 Documentation: https://www.orbitalsai.com/docs
- 🐛 Issues: https://github.com/ProjectAfri/orbitalsai-JS-SDK/issues
Changelog
See CHANGELOG.md for detailed version history.
Latest: 1.3.1 (2026-02-04)
- 📄 README: version history corrected (1.2.0 entry added)
1.3.0 (2026-02-04)
- ⏱️ Streaming word-level timestamps - Optional returnTimestamps: true for word start/end times in final events
1.2.0 (2026-02-04)
- 🎙️ Real-Time Streaming Transcription - Live audio transcription via WebSocket
- 🌍 Streaming supports English, Hausa, Igbo, and Yoruba languages
- 📊 Session statistics tracking (cost, duration)
- 🔄 Auto-reconnection with configurable retries
- 🎯 Event-driven API with partial and final transcription results
1.1.0 (2025-11-25)
- ✨ AI Model Selection Support with Perigee-1 model
- 🤖 Get Available Models endpoint
- 📋 Enhanced pagination for task listing
- 🌍 Support for multiple languages (English, Hausa, Igbo, Yoruba, Pidgin, Swahili, Kinyarwanda)
1.0.0 (2024-01-01)
- Initial release
- Audio processing support
- Billing management
- YouTube processing
- User management
- Comprehensive TypeScript support
