# @northmodellabs/atlas-react

v0.2.2

React hook for Atlas Realtime avatars. Handles the LiveKit room, video/audio tracks, mic, transcriptions, and cleanup in a single `useAtlasSession()` call.
Supports two modes:

- **Conversation** — Atlas handles STT → LLM → TTS. You just connect and talk.
- **Passthrough** — You bring your own LLM + TTS. Use `publishAudio()` to send TTS audio to the avatar for lip-sync.
## Install

```bash
npm install @northmodellabs/atlas-react livekit-client
```

## Conversation mode (Atlas handles everything)
```tsx
import { useAtlasSession } from "@northmodellabs/atlas-react";

function AvatarPage() {
  const session = useAtlasSession({
    createSession: async (face) => {
      const form = new FormData();
      if (face) form.append("face", face);
      form.append("mode", "conversation");
      const res = await fetch("/api/session", { method: "POST", body: form });
      return res.json(); // { sessionId, livekitUrl, token }
    },
    deleteSession: async (id) => {
      await fetch(`/api/session/${id}`, { method: "DELETE" });
    },
  });

  return (
    <div>
      <div ref={session.videoRef} style={{ width: 512, height: 512 }} />
      {session.status === "idle" && (
        <button onClick={() => session.connect(myFaceFile)}>Start</button>
      )}
      {session.status === "connected" && (
        <>
          <button onClick={() => session.setMicEnabled(session.muted)}>
            {session.muted ? "Unmute" : "Mute"}
          </button>
          <button onClick={session.disconnect}>End</button>
        </>
      )}
      {session.messages.filter((m) => m.final).map((msg) => (
        <p key={msg.id}><b>{msg.role}:</b> {msg.text}</p>
      ))}
    </div>
  );
}
```

## Passthrough mode (bring your own LLM + TTS)
```tsx
import { useAtlasSession } from "@northmodellabs/atlas-react";

function PassthroughPage() {
  const session = useAtlasSession({
    createSession: async (face) => {
      const res = await fetch("/api/session", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ face_url: "https://...", mode: "passthrough" }),
      });
      return res.json();
    },
    deleteSession: async (id) => {
      await fetch(`/api/session/${id}`, { method: "DELETE" });
    },
    autoEnableMic: false, // we'll manage audio ourselves
  });

  const handleChat = async (text: string) => {
    // Call your own LLM + TTS backend
    const res = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    });
    const { audio } = await res.json(); // base64 MP3/WAV

    // Publish TTS audio to the room — avatar lip-syncs to it.
    // Mic is auto-muted while playing, restored when done.
    await session.publishAudio(audio);
  };

  return (
    <div>
      <div ref={session.videoRef} style={{ width: 512, height: 512 }} />
      {session.status === "connected" && (
        <button onClick={() => handleChat("Hello, tell me a joke")}>
          Ask a question
        </button>
      )}
    </div>
  );
}
```

## How publishAudio works
1. Decodes your audio (base64 string, Blob, or ArrayBuffer)
2. Mutes the user's mic so the avatar only lip-syncs to the AI voice
3. Publishes the audio as a LiveKit track — the avatar renders lip-sync in real time
4. Plays the audio locally so the user hears the response
5. When playback ends, unpublishes the track and restores the mic
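Because each call runs this full mute, publish, and restore cycle, two overlapping calls can fight over the mic. One common pattern is barge-in: stop the previous playback (via the handle that `publishAudio` returns, described below) before publishing the next clip. A minimal sketch, not part of the library — `publishAudio` here is a stand-in with the same shape as `session.publishAudio`:

```typescript
// Barge-in sketch: serialize TTS playback so a new reply cancels the old one.
// AudioPlaybackHandle mirrors the handle shape documented below.
type AudioPlaybackHandle = { stop: () => void };

function createSpeechQueue(
  publishAudio: (audio: string) => Promise<AudioPlaybackHandle>
) {
  let current: AudioPlaybackHandle | null = null;
  return {
    async speak(audio: string): Promise<AudioPlaybackHandle> {
      current?.stop(); // cancel whatever is still playing (restores the mic)
      current = await publishAudio(audio);
      return current;
    },
  };
}
```

In a real app you would pass `session.publishAudio` in directly; the queue guarantees at most one clip is lip-syncing at a time.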
Returns a handle with `stop()` to cancel playback early:

```ts
const handle = await session.publishAudio(ttsAudio);
// later...
handle.stop(); // immediately stops playback + restores mic
```

## What the hook handles
- Creates a LiveKit `Room` with `adaptiveStream` and `dynacast`
- Subscribes to video and audio tracks, attaches them to the DOM
- Enables microphone after connecting (configurable)
- Listens for `TranscriptionReceived` events and surfaces them as `messages`
- Measures round-trip latency
- Disconnects and cleans up on unmount and `beforeunload`
- Calls your `deleteSession` callback to tear down the server-side session
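The last two steps imply a teardown order: leave the room first, then release the server-side session. A conceptual sketch of that ordering (not the library's internals; the `RoomLike` shape and `teardown` helper are illustrative assumptions):

```typescript
// Conceptual teardown order: disconnect the LiveKit room so tracks detach
// cleanly, then release server-side resources via the user's deleteSession
// callback. RoomLike is a minimal stand-in, not the real Room type.
type RoomLike = { disconnect: () => Promise<void> };

async function teardown(
  room: RoomLike | null,
  sessionId: string | null,
  deleteSession?: (id: string) => Promise<void>
): Promise<void> {
  await room?.disconnect(); // stop media first
  if (sessionId && deleteSession) {
    await deleteSession(sessionId); // then tear down the server-side session
  }
}
```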
## API

### useAtlasSession(options)
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| createSession | (face, faceUrl) => Promise<SessionInfo> | required | Creates a session on your backend |
| deleteSession | (sessionId) => Promise<void> | — | Tears down the session on your backend |
| autoEnableMic | boolean | true | Auto-enable mic after connecting |
| autoCleanup | boolean | true | Auto-disconnect on unmount / tab close |
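Spelled out, an options object using every field might look like this — a sketch only: `createSession` is stubbed, and the returned `SessionInfo` shape follows the `{ sessionId, livekitUrl, token }` comment in the conversation example above:

```typescript
// Illustrative options object; field names come from the table above.
// The stubbed values are placeholders, not real endpoints or tokens.
const options = {
  createSession: async (_face?: File, _faceUrl?: string) => {
    // In a real app this calls your backend proxy (see below); stubbed here.
    return {
      sessionId: "abc123",
      livekitUrl: "wss://example.livekit.cloud",
      token: "jwt",
    };
  },
  deleteSession: async (_sessionId: string) => {
    // Tear down the server-side session; stubbed here.
  },
  autoEnableMic: false, // e.g. push-to-talk UIs enable the mic manually
  autoCleanup: true,    // keep automatic disconnect on unmount / tab close
};
```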
### Returned `AtlasSession`
| Field | Type | Description |
|-------|------|-------------|
| status | "idle" \| "connecting" \| "connected" \| "disconnected" \| "error" | Connection state |
| error | string \| null | Error message if status is "error" |
| sessionId | string \| null | Active session ID |
| messages | TranscriptMessage[] | Transcript messages (check .final for completed) |
| muted | boolean | Whether mic is muted |
| volume | number | Playback volume (0–100) |
| latency | number | Round-trip time in ms |
| videoRef | RefObject<HTMLDivElement> | Attach to a <div> — video renders inside |
| room | Room \| null | LiveKit Room instance (null until connected) |
| connect(face?, faceUrl?) | (face?, faceUrl?) => Promise<void> | Start a session |
| disconnect() | () => Promise<void> | End the session |
| setMicEnabled(enabled) | (boolean) => void | Mute/unmute mic |
| setVolume(v) | (number) => void | Set playback volume 0–100 |
| sendChat(text) | (string) => void | Send text to server-side agent (conversation mode) |
| publishAudio(audio) | (string \| Blob \| ArrayBuffer) => Promise<AudioPlaybackHandle> | Publish TTS audio for avatar lip-sync (passthrough mode) |
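As a small worked example of combining these fields, a push-to-talk control needs only `setMicEnabled`; `SessionLike` below is a deliberately minimal stand-in for the full `AtlasSession`, not a type exported by the package:

```typescript
// Push-to-talk helpers built on the setMicEnabled field from the table above.
type SessionLike = {
  muted: boolean;
  setMicEnabled: (enabled: boolean) => void;
};

function pushToTalkHandlers(session: SessionLike) {
  return {
    onPointerDown: () => session.setMicEnabled(true),  // open mic while held
    onPointerUp: () => session.setMicEnabled(false),   // mute on release
  };
}
```

Spread the returned handlers onto a button (`<button {...pushToTalkHandlers(session)}>Talk</button>`) to get hold-to-speak behavior.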
## Backend proxy (required)

Your API key must stay server-side. Create a small API route that proxies to Atlas:

```ts
// Next.js example: app/api/session/route.ts
const API_KEY = process.env.ATLAS_API_KEY;
const API_URL = process.env.ATLAS_API_URL;

export async function POST(req: Request) {
  const form = await req.formData();
  const res = await fetch(`${API_URL}/v1/realtime/session`, {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}` },
    body: form,
  });
  return Response.json(await res.json(), { status: res.status });
}
```

## License
Apache-2.0
