# onyxai-sdk

v0.1.2
Browser SDK for Onyx voice AI. Connect to your Onyx backend over WebSocket for HD voice calls with a VAPI-like API: `call-start`, `call-end`, `speech-start`, `speech-end`, `interruption`, and `transcript` events.
- **No phone numbers**: browser-only; uses L16 PCM at 16 kHz (wideband HD), not telephone codecs.
- **Framework-agnostic client**: use `BurkiClient` in any app.
- **React hook**: optional `useBurkiCall` for React/Next.js.
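The L16 format mentioned above is plain 16-bit PCM. As an illustration only (this is not SDK code, and whether the SDK sends big- or little-endian samples on the wire is an assumption here), converting Web Audio's Float32 samples to L16 might look like:

```typescript
// Convert Web Audio Float32 samples (range [-1, 1]) to 16-bit PCM (L16).
// Big-endian follows the audio/L16 RTP convention (RFC 3551); the SDK's
// actual byte order is not documented here, so treat this as a sketch.
function floatToL16(samples: Float32Array): ArrayBuffer {
  const buf = new ArrayBuffer(samples.length * 2);
  const view = new DataView(buf);
  for (let i = 0; i < samples.length; i++) {
    // Clamp, then scale asymmetrically so both -1 and +1 are representable.
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, false); // false = big-endian
  }
  return buf;
}

const pcm = floatToL16(new Float32Array([0, 1, -1]));
console.log(new DataView(pcm).getInt16(0, false)); // 0
```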
## Install

```sh
npm install onyxai-sdk
```

For React projects, ensure `react` is installed (peer dependency).
## Usage

### Core client (any framework)
```ts
import { BurkiClient } from "onyxai-sdk";

const client = new BurkiClient({
  baseUrl: "https://api.your-onyx.com", // or process.env.NEXT_PUBLIC_API_URL
  debug: true,
});

// Register handlers before starting so no early events are missed.
client.on("call-start", (data) => console.log("Connected", data));
client.on("speech-start", () => console.log("AI speaking"));
client.on("speech-end", () => console.log("AI done"));
client.on("transcript", (t) => console.log(t.speaker, t.content));
client.on("call-end", (data) => console.log("Ended", data.duration));
client.on("error", (e) => console.error(e.message));

await client.start("123", {
  variableValues: {
    candidate_name: "John Smith",
    job_title: "Software Engineer",
  },
});

// Later:
client.stop();
```

### React hook
```tsx
import { useBurkiCall } from "onyxai-sdk";

function InterviewCall({ assistantId }: { assistantId: string }) {
  const { callStatus, isSpeaking, transcript, error, start, stop } = useBurkiCall({
    baseUrl: process.env.NEXT_PUBLIC_API_URL,
    onCallStart: () => console.log("Call started"),
    onTranscript: (t) => console.log(t.speaker, t.content),
  });

  return (
    <div>
      <p>Status: {callStatus}</p>
      {callStatus === "idle" && (
        <button onClick={() => start(assistantId, { variableValues: { name: "Jane" } })}>
          Start call
        </button>
      )}
      {callStatus === "connected" && <button onClick={stop}>End call</button>}
      {isSpeaking && <p>AI is speaking…</p>}
      {error && <p>Error: {error}</p>}
    </div>
  );
}
```

## API
### BurkiClient

| Method / getter | Description |
|-----------------|-------------|
| `start(assistantId, options?)` | Start a call (requests mic access, opens the WebSocket). |
| `stop()` | End the call and clean up. |
| `setMuted(muted)` / `toggleMute()` | Mute/unmute the microphone. |
| `on(event, callback)` / `off(event, callback)` | Subscribe to / unsubscribe from events. |
| `status` | `"idle" \| "connecting" \| "connected" \| "ended"` |
| `isMuted` | Current mute state. |
### Events

- `call-start`: Call connected; payload: `{ assistantId?, assistantName? }`.
- `call-end`: Call ended; payload: `{ duration?, endedReason? }`.
- `speech-start` / `speech-end`: AI started/stopped speaking.
- `interruption`: User interrupted the AI.
- `transcript`: New transcript; payload: `{ speaker, content, isFinal, timestamp }`.
- `error`: Error; payload: `{ message, code? }`.
- `status-change`: Status changed; payload: `{ status }`.
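As an illustration of consuming these events, a caller might keep a running transcript that records only final segments. This is a sketch, not SDK code; `TranscriptEvent` simply mirrors the `transcript` payload shape listed above:

```typescript
// Mirrors the `transcript` event payload: { speaker, content, isFinal, timestamp }.
interface TranscriptEvent {
  speaker: string;
  content: string;
  isFinal: boolean;
  timestamp: number;
}

class TranscriptLog {
  private lines: TranscriptEvent[] = [];

  // Feed this from client.on("transcript", ...); interim (non-final)
  // segments are dropped, so each utterance appears once.
  push(t: TranscriptEvent): void {
    if (t.isFinal) this.lines.push(t);
  }

  render(): string {
    return this.lines.map((t) => `${t.speaker}: ${t.content}`).join("\n");
  }
}

const log = new TranscriptLog();
log.push({ speaker: "assistant", content: "Hello!", isFinal: true, timestamp: 0 });
log.push({ speaker: "user", content: "Hi th", isFinal: false, timestamp: 1 });
log.push({ speaker: "user", content: "Hi there", isFinal: true, timestamp: 2 });
console.log(log.render()); // assistant: Hello!\nuser: Hi there
```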
### useBurkiCall (React)

Returns: `callStatus`, `isMuted`, `isSpeaking`, `transcript`, `error`, `start`, `stop`, `toggleMute`, `setMuted`. Options accept the same `baseUrl` / `debug` as the client, plus optional callbacks: `onCallStart`, `onCallEnd`, `onSpeechStart`, `onSpeechEnd`, `onInterruption`, `onTranscript`, `onError`.
## Backend

This SDK expects an Onyx backend that exposes a WebSocket at `{baseUrl}/streams` and sends the events above. It uses the same protocol as the Onyx frontend (start with `provider: "browser"`, `mediaFormat: { encoding: "audio/x-l16", sampleRate: 16000 }`).
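For orientation, the opening "start" frame could be built as below. Only `provider` and `mediaFormat` are confirmed by the protocol note above; the surrounding envelope (the `event` field name and where `assistantId` is carried) is an assumption for illustration:

```typescript
// Sketch of the opening frame sent over {baseUrl}/streams.
// Confirmed by the README: provider and mediaFormat values.
// Assumed for illustration: the "event" envelope field and assistantId placement.
interface StartFrame {
  event: "start";                 // hypothetical envelope field
  provider: "browser";
  mediaFormat: { encoding: string; sampleRate: number };
  assistantId?: string;           // hypothetical placement
}

function buildStartFrame(assistantId?: string): StartFrame {
  return {
    event: "start",
    provider: "browser",
    mediaFormat: { encoding: "audio/x-l16", sampleRate: 16000 },
    assistantId,
  };
}

// A client would then do: ws.send(JSON.stringify(buildStartFrame("123")));
console.log(JSON.stringify(buildStartFrame("123")));
```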
## Publish

From the package directory:

```sh
cd packages/onyxai-sdk
npm run build
npm publish --access public
```

Use a scoped name (`@your-org/onyxai-sdk`) if you prefer; update `name` in `package.json` and publish with `npm publish --access public`.
