@drytis/meeting-sdk
v1.0.13
React SDK for embedding Drytis real-time meeting sessions into your application. Handles Planning (FDE / AI agent), Lifeguard (human engineer), and Direct Meeting flows. Backed by LiveKit.
Table of Contents
- What is this SDK?
- How it fits into the stack
- Requirements
- Installation
- API Keys
- Core Concept — Pre-fetch, then Connect
- Quick Start
- [DrytisCall Component](#drytiscall-component)
- Props Reference
- Feature Flags
- The metadata Prop
- Ref / Imperative API
- Call Phase
- Events / Callbacks
- [useDrytisCall Hook](#usedrytiscall-hook)
- ControlBar Component
- Low-level API Utilities
- TypeScript Types
- Full Examples
What is this SDK?
The Drytis Meeting SDK embeds real-time audio/video calls inside your React application. You fetch a session from your backend, pass it to the SDK, and the SDK handles everything from there:
- Connecting to a LiveKit room
- Enabling the local microphone and camera
- Playing remote audio from agents and participants
- Streaming and deduplicating real-time transcripts
- Providing mute, camera, and end-call controls
The SDK makes no API calls. All session management (fetching tokens, managing queues, subscribing to SSE) is done by your application code. The SDK receives a MeetingSession object and connects.
How it fits into the stack
Your Frontend (React)
│
├── Your code fetches session ──→ Your Backend (Next.js API route)
│ (fetch, axios, etc.) └── calls Drytis Meeting Server
│ └── returns LiveKit token
│
├── Pass session to DrytisCall
│ session={{ roomName, token, serverUrl }}
│
└── DrytisCall connects to LiveKit ──→ LiveKit Cloud
(room audio / video / transcripts)
Your backend is the gatekeeper — it holds the secret key and creates sessions. The SDK only receives the result and connects.
Requirements
- React 18 or later
- A running Drytis meeting server reachable from the browser
- An API key pair from the Drytis admin panel
Installation
npm install @drytis/meeting-sdk
# or
yarn add @drytis/meeting-sdk
# or
pnpm add @drytis/meeting-sdk
API Keys
| Key | Prefix | Where it lives | Purpose |
|-----|--------|----------------|---------|
| Public key | drytis_pk_live_… | Client-side — safe to ship in browser code | Sent as X-Drytis-Public-Key on SDK requests. Identifies your project. |
| Secret key | drytis_sk_live_… | Server-side only | Used by your backend to authenticate with the Drytis Meeting Server. |
# .env.local
NEXT_PUBLIC_DRYTIS_PUBLIC_KEY=drytis_pk_live_xxxxx
DRYTIS_SECRET_KEY=drytis_sk_live_xxxxx
NEXT_PUBLIC_DRYTIS_MEETING_SERVER=https://meet.drytis.ai
Never pass DRYTIS_SECRET_KEY to DrytisCall or any client-side code.
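A small guard where you read the env var can catch the most common mistake: shipping the secret key to the browser. This is an illustrative helper, not an SDK export:

```typescript
// Illustrative helper (not part of the SDK): fail fast if the wrong
// key ends up in client-side code.
function assertPublicKey(key: string | undefined): string {
  if (!key) throw new Error('Missing NEXT_PUBLIC_DRYTIS_PUBLIC_KEY');
  if (key.startsWith('drytis_sk_')) {
    // Secret keys must never reach the browser.
    throw new Error('Secret key used where a public key is expected');
  }
  if (!key.startsWith('drytis_pk_')) {
    throw new Error('Not a Drytis public key (expected drytis_pk_ prefix)');
  }
  return key;
}
```

Call it once at module scope, e.g. `const PUBLIC_KEY = assertPublicKey(process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY);`.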
Core Concept — Pre-fetch, then Connect
The SDK's only input is a MeetingSession — a plain object with three fields:
interface MeetingSession {
roomName: string; // LiveKit room name
token: string; // LiveKit participant JWT
serverUrl: string; // wss://your-livekit-instance.livekit.cloud
}
You fetch this from your backend. Then you pass it to the SDK.
1. Your code → fetch('/api/fde/planning-session')
2. Your code → get { roomName, token, serverUrl } back
3. Your code → <DrytisCall session={...} />
4. SDK → connects to LiveKit, handles audio/transcripts/controls
This separation means the SDK never needs to know your server URL, your auth tokens, or your queue logic. Your app controls those; the SDK controls LiveKit.
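Because the SDK trusts whatever MeetingSession it receives, a runtime guard on the backend response is worth the few lines. isMeetingSession below is a sketch; the interface is restated locally so the snippet is self-contained:

```typescript
// Shape from the SDK docs, restated locally for a self-contained sketch.
interface MeetingSession {
  roomName: string;
  token: string;
  serverUrl: string;
}

// Hypothetical guard (not an SDK export): validate a backend response
// before handing it to DrytisCall.
function isMeetingSession(value: unknown): value is MeetingSession {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.roomName === 'string' && v.roomName.length > 0 &&
    typeof v.token === 'string' && v.token.length > 0 &&
    typeof v.serverUrl === 'string' && v.serverUrl.length > 0
  );
}
```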
Quick Start
Step 1 — Your backend creates a session
// app/api/fde/planning-session/route.ts
export async function POST(req: Request) {
const body = await req.json();
const res = await fetch(`${process.env.DRYTIS_MEETING_SERVER}/api/fde/planning-session`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${process.env.DRYTIS_SECRET_KEY}`,
},
body: JSON.stringify(body),
});
return Response.json(await res.json(), { status: res.status });
}
Step 2 — Your frontend fetches the session and passes it to the SDK
import { useEffect, useRef, useState } from 'react';
import { DrytisCall } from '@drytis/meeting-sdk';
import type { DrytisCallHandle, MeetingSession } from '@drytis/meeting-sdk';
export function PlanningPage({ userToken }: { userToken: string }) {
const ref = useRef<DrytisCallHandle>(null);
const [session, setSession] = useState<MeetingSession | null>(null);
useEffect(() => {
fetch('/api/fde/planning-session', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${userToken}` },
body: JSON.stringify({ participantName: 'Alice' }),
})
.then(r => r.json())
.then(data => setSession({
roomName: data.roomName,
token: data.livekit_token,
serverUrl: data.serverUrl,
}));
}, [userToken]);
useEffect(() => {
if (session) ref.current?.requestSupport();
}, [session]);
return (
<DrytisCall
ref={ref}
session={session}
publicKey={process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY}
metadata={{ participantName: 'Alice' }}
features={{ renderControls: true, mute: true }}
onCallStarted={({ roomName }) => console.log('started in', roomName)}
onCallEnded={({ duration }) => console.log(`ended after ${duration}s`)}
/>
);
}
DrytisCall Component
The single component for all meeting types. It:
- Accepts a session prop with the LiveKit connection details
- Connects to the LiveKit room when requestSupport() is called (or when autoConnect is true)
- Manages audio, transcripts, and participant state internally
- Optionally renders a built-in ControlBar when features.renderControls = true
- Fires callbacks on every lifecycle event
- Exposes an imperative ref handle for external control
import { DrytisCall } from '@drytis/meeting-sdk';
Props Reference
Connection
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| session | MeetingSession \| null | — | The LiveKit session to connect to. Fetch this from your backend. Nothing connects until this is provided and requestSupport() is called. |
| publicKey | string | — | Your public API key (drytis_pk_live_…). Identifies your project. |
Behaviour
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| autoConnect | boolean | false | Automatically connect when session becomes non-null. Equivalent to calling requestSupport() in a useEffect([session]). |
| mode | 'embedded' \| 'redirect' | 'embedded' | 'embedded' renders inline. 'redirect' navigates the browser to {session.serverUrl}/meeting/{session.roomName}. |
| projectId | number \| null | — | Optional project reference — passed through to callbacks for context. |
| metadata | Record<string, unknown> | — | Extra data — see The metadata prop. |
| features | CallFeatures | {} | Controls the built-in UI. See Feature Flags. |
Callbacks
| Prop | Payload | When it fires |
|------|---------|---------------|
| onCallStarted | CallStartedData | Room connected, audio active. |
| onCallEnded | CallEndedData | Room disconnected. |
| onParticipantJoined | ParticipantInfo | Remote participant connects. |
| onParticipantLeft | ParticipantInfo | Remote participant disconnects. |
| onUserSpeaking | UserSpeakingData | New user transcript segment arrives. |
| onAIResponse | AIResponseData | AI agent sends a message. |
| onTranscriptUpdate | TranscriptMessage[] | Full transcript after any change. |
| onSessionResponse | SessionResponse | Raw session response (for logging/context). |
| onError | CallError | Connection failure or missing session. |
| onClose | — | User clicked the × close button in the ControlBar. |
| onChat | — | User clicked the chat icon in the ControlBar. No panel is rendered — handle this to show your own chat UI. |
Styling
| Prop | Type | Description |
|------|------|-------------|
| className | string | CSS class on the wrapper <div>. |
| style | React.CSSProperties | Inline styles on the wrapper <div>. |
Feature Flags
interface CallFeatures {
renderControls?: boolean; // render built-in UI (default: false)
mute?: boolean; // show mute button (default: true)
screenShare?: boolean; // show screen-share button (default: true)
chat?: boolean; // show chat icon button — fires onChat (default: true)
raiseHand?: boolean; // reserved
noiseCancellation?: boolean; // reserved
}
When renderControls: true the SDK renders:
| Phase | Built-in UI |
|-------|------------|
| idle | "Start Session" button |
| connecting | "Connecting…" banner |
| in_call | ControlBar (timer, participants, mute, screen share, chat icon, more menu, close) |
| error | Error message + "Try Again" button |
Set renderControls: false (the default) to build your own UI using the ref or hook.
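The defaults above combine with caller-supplied flags roughly like this. resolveFeatures is an illustrative name, not an SDK export, and the reserved flags are omitted from the sketch:

```typescript
// CallFeatures flags from the docs, restated locally; the reserved
// flags (raiseHand, noiseCancellation) are omitted from this sketch.
interface CallFeatures {
  renderControls?: boolean;
  mute?: boolean;
  screenShare?: boolean;
  chat?: boolean;
}

// Illustrative helper: how the documented defaults combine with
// caller-supplied flags.
function resolveFeatures(features: CallFeatures = {}): Required<CallFeatures> {
  return {
    renderControls: features.renderControls ?? false, // built-in UI is opt-in
    mute: features.mute ?? true,
    screenShare: features.screenShare ?? true,
    chat: features.chat ?? true,
  };
}
```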
The metadata Prop
A free-form Record<string, unknown> for extra context. The SDK uses one field internally:
| Field | Used by |
|-------|---------|
| participantName | ControlBar participants popup — shown as your display name |
Everything else is available to you via callbacks (e.g. onSessionResponse) for your own logic.
metadata={{ participantName: 'Alice', userId: 42 }}
Ref / Imperative API
DrytisCall forwards a ref of type DrytisCallHandle:
import { useRef } from 'react';
import { DrytisCall } from '@drytis/meeting-sdk';
import type { DrytisCallHandle } from '@drytis/meeting-sdk';
const ref = useRef<DrytisCallHandle>(null);
<DrytisCall ref={ref} session={session} ... />
// call from anywhere
ref.current?.requestSupport(); // connect to the session
ref.current?.mute();
ref.current?.endCall();
| Method | Description |
|--------|-------------|
| requestSupport | Connects to the session. Call this once session is ready. |
| cancelRequest | Resets phase to idle. Use this if you want to abort before connecting. |
| mute | Disables local microphone. |
| unmute | Re-enables local microphone. |
| endCall | Disconnects from the room, removes audio elements, resets to idle. |
| sendMessage | Publishes a chat message over the LiveKit data channel. No-op if not connected. |
| getTranscript | Returns a synchronous snapshot of the full transcript. |
Call Phase
idle ──→ connecting ──→ in_call ──→ idle
idle ──→ error
| Phase | Meaning |
|-------|---------|
| idle | No connection. Waiting for requestSupport() to be called. |
| connecting | LiveKit room is being joined. |
| in_call | Room connected. Audio is active. |
| error | Connection failed. errorMessage has the reason. |
Queue management (waiting for an engineer, SSE, polling) is handled in your application code — not inside the SDK. See the Lifeguard example.
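One way to model the phase machine in your own UI code, based on the diagram and error cases above. Note this is a mental model, not SDK internals; the error-to-connecting retry edge is an assumption drawn from the built-in "Try Again" button:

```typescript
type CallPhase = 'idle' | 'connecting' | 'in_call' | 'error';

// Allowed transitions, per the diagram above.
const transitions: Record<CallPhase, CallPhase[]> = {
  idle: ['connecting', 'error'],    // requestSupport(); CONFIG_ERROR if session is null
  connecting: ['in_call', 'error'], // room joined, or CONNECTION_FAILED
  in_call: ['idle'],                // endCall() or disconnect
  error: ['idle', 'connecting'],    // cancelRequest() or a retry (assumed)
};

function canTransition(from: CallPhase, to: CallPhase): boolean {
  return transitions[from].includes(to);
}
```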
Events / Callbacks
All callbacks are optional. Pass them as props to <DrytisCall> or as options to useDrytisCall.
<DrytisCall
session={session}
onCallStarted={...}
onCallEnded={...}
onParticipantJoined={...}
onParticipantLeft={...}
onUserSpeaking={...}
onAIResponse={...}
onTranscriptUpdate={...}
onSessionResponse={...}
onError={...}
/>
onCallStarted
onCallStarted?: (data: CallStartedData) => void
interface CallStartedData {
roomName: string; // the LiveKit room name that connected
}
When: LiveKit room is connected and the local microphone is active. Audio from remote participants will begin playing immediately after this fires.
Use for: starting a call timer, logging analytics, showing an "in call" indicator.
onCallStarted={({ roomName }) => {
console.log('Call started in room:', roomName);
analytics.track('call_started', { roomName });
}}
onCallEnded
onCallEnded?: (data: CallEndedData) => void
interface CallEndedData {
duration: number; // call length in seconds (0 if ended before connecting)
}
When: Room disconnects — either because endCall() was called, the remote side disconnected, or the network dropped.
Use for: showing a post-call summary, logging duration, resetting UI state.
onCallEnded={({ duration }) => {
console.log(`Call lasted ${duration} seconds`);
setCallActive(false);
}}
onParticipantJoined
onParticipantJoined?: (participant: ParticipantInfo) => void
interface ParticipantInfo {
identity: string; // LiveKit participant identity (unique, stable)
name: string; // display name (falls back to identity if not set)
}
When: Any remote participant connects to the room — including AI agents and human engineers. Fires once per participant.
Use for: showing a "participant joined" notification, updating a participant list, detecting when an agent joins.
onParticipantJoined={(p) => {
console.log(`${p.name} (${p.identity}) joined`);
setParticipants(prev => [...prev, p]);
}}
onParticipantLeft
onParticipantLeft?: (participant: ParticipantInfo) => void
// ParticipantInfo: same as above
When: A remote participant disconnects from the room.
Use for: removing someone from a participant list, ending the session if the agent leaves.
onParticipantLeft={(p) => {
console.log(`${p.name} left`);
setParticipants(prev => prev.filter(x => x.identity !== p.identity));
}}
onUserSpeaking
onUserSpeaking?: (data: UserSpeakingData) => void
interface UserSpeakingData {
identity: string; // LiveKit identity of the speaker
name: string; // display name of the speaker
transcript: string; // the text they just spoke (may be interim)
}
When: A new user transcript segment arrives for the first time. Fires once per new spoken utterance, not on subsequent interim updates to the same segment.
Use for: "someone is speaking" indicators, real-time activity feeds, keyword detection.
onUserSpeaking={({ name, transcript }) => {
console.log(`${name} said: "${transcript}"`);
setSpeakerIndicator(name);
}}
onAIResponse
onAIResponse?: (data: AIResponseData) => void
interface AIResponseData {
text: string; // the agent's message text (may be interim)
}
When: The AI agent publishes a transcript segment. Fires on every update — including interim (still being generated) messages.
Use for: streaming AI responses into a chat UI, logging AI output, triggering side effects based on agent speech.
onAIResponse={({ text }) => {
setAgentMessage(text); // update in real time as agent speaks
}}
onTranscriptUpdate
onTranscriptUpdate?: (messages: TranscriptMessage[]) => void
interface TranscriptMessage {
id: string; // stable segment ID — same ID can arrive multiple times with updated text
sender: 'agent' | 'user' | 'system' | 'engineer'; // who sent this message
text: string; // current text (may change on interim updates)
timestamp: number; // ms since epoch
isFinal: boolean; // false = still being spoken; true = complete
identity?: string; // LiveKit participant identity of the sender
}
When: Any change to the transcript — new message, interim text update, or final segment. Fires with the complete, deduplicated list every time.
Use for: rendering a live transcript UI, persisting the full conversation, displaying captions.
onTranscriptUpdate={(messages) => {
// messages is always the full list — render all of them
setTranscript(messages);
}}
// Rendering example
transcript.map(m => (
<div key={m.id} style={{ opacity: m.isFinal ? 1 : 0.5 }}>
<strong>{m.sender === 'agent' ? 'AI' : 'You'}:</strong> {m.text}
</div>
))
Sender values:
| Value | Who |
|-------|-----|
| 'user' | The local user (you) |
| 'agent' | The AI agent in the room |
| 'engineer' | A human Lifeguard engineer |
| 'system' | System/platform messages |
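The dedup-by-id behaviour described above (the same id arriving again replaces the earlier text, and the list always stays complete) can be sketched as an upsert. upsertSegment is illustrative, not the SDK's internal implementation:

```typescript
// TranscriptMessage shape from the docs, restated locally.
interface TranscriptMessage {
  id: string;
  sender: 'agent' | 'user' | 'system' | 'engineer';
  text: string;
  timestamp: number;
  isFinal: boolean;
}

// Illustrative upsert: a new id appends, a known id replaces in place.
function upsertSegment(list: TranscriptMessage[], msg: TranscriptMessage): TranscriptMessage[] {
  const i = list.findIndex(m => m.id === msg.id);
  if (i === -1) return [...list, msg]; // new segment — append
  const next = list.slice();
  next[i] = msg;                       // interim update — replace in place
  return next;
}
```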
onSessionResponse
onSessionResponse?: (response: SessionResponse) => void
interface SessionResponse {
roomName: string;
participantName?: string;
participantIdentity?: string;
projectId?: string | number;
userId?: string | number;
agentName?: string;
[key: string]: any; // any extra fields your backend returns
}
When: The session has been resolved. Carries the raw server response for logging or display.
Use for: debugging, logging session context, showing room info in a developer panel.
onSessionResponse={(res) => {
console.log('Session:', res.roomName, 'Agent:', res.agentName);
}}
onError
onError?: (error: CallError) => void
interface CallError {
code: string; // machine-readable error code
message: string; // human-readable description
}
When: Something goes wrong. Phase moves to 'error' at the same time.
| code | Cause |
|--------|-------|
| CONNECTION_FAILED | room.connect() threw — bad token, server unreachable, network error. |
| CONFIG_ERROR | requestSupport() was called but session prop is null or missing. |
Use for: showing error toasts, logging to Sentry/Datadog, retrying with exponential backoff.
onError={({ code, message }) => {
console.error(`[${code}] ${message}`);
Sentry.captureException(new Error(message), { extra: { code } });
toast.error('Could not connect to session. Please try again.');
}}
All events at a glance
| Event | Payload type | Fires when |
|-------|-------------|------------|
| onCallStarted | CallStartedData | Room connected, mic active |
| onCallEnded | CallEndedData | Room disconnected |
| onParticipantJoined | ParticipantInfo | Remote participant connects |
| onParticipantLeft | ParticipantInfo | Remote participant disconnects |
| onUserSpeaking | UserSpeakingData | New user speech segment arrives |
| onAIResponse | AIResponseData | AI agent sends any message |
| onTranscriptUpdate | TranscriptMessage[] | Any transcript change (full list) |
| onSessionResponse | SessionResponse | Session resolved |
| onError | CallError | Connection or config failure |
| onClose | — | × button clicked in ControlBar |
| onChat | — | Chat icon clicked in ControlBar |
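The "retry with exponential backoff" suggestion under onError can be sketched like this. backoffDelays is an illustrative helper, and the wiring in the comment assumes the ref API from above:

```typescript
// Delay schedule for retrying CONNECTION_FAILED: 1s, 2s, 4s, ... capped at 30s.
function backoffDelays(attempts: number, baseMs = 1000, capMs = 30000): number[] {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

// Possible wiring inside a component (sketch):
// const delays = backoffDelays(4);
// let attempt = 0;
// onError={({ code }) => {
//   if (code === 'CONNECTION_FAILED' && attempt < delays.length) {
//     setTimeout(() => ref.current?.requestSupport(), delays[attempt++]);
//   }
// }}
```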
useDrytisCall Hook
For complete control over rendering, use the hook directly:
import { useEffect } from 'react';
import { useDrytisCall } from '@drytis/meeting-sdk';
import type { MeetingSession } from '@drytis/meeting-sdk';
function MyMeeting({ session }: { session: MeetingSession | null }) {
const {
phase,
isMuted,
isCameraOn,
transcript,
participants,
errorMessage,
requestSupport,
mute,
unmute,
toggleCamera,
endCall,
} = useDrytisCall({
session,
publicKey: process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY,
metadata: { participantName: 'Alice' },
onCallStarted: ({ roomName }) => console.log('started', roomName),
onTranscriptUpdate: (msgs) => console.log(msgs),
});
// Connect once session is ready
useEffect(() => {
if (session && phase === 'idle') requestSupport();
}, [session]);
if (phase === 'idle' || phase === 'connecting') return <p>{phase}…</p>;
if (phase === 'error') return <p>Error: {errorMessage}</p>;
return (
<div>
<ul>
{transcript.map(m => (
<li key={m.id} style={{ opacity: m.isFinal ? 1 : 0.5 }}>
<strong>{m.sender}:</strong> {m.text}
</li>
))}
</ul>
<button onClick={isMuted ? unmute : mute}>{isMuted ? 'Unmute' : 'Mute'}</button>
<button onClick={toggleCamera}>{isCameraOn ? 'Camera Off' : 'Camera On'}</button>
<button onClick={endCall}>End</button>
</div>
);
}
Options (UseDrytisCallOptions)
Same fields as DrytisCallProps minus features, className, style, and autoConnect.
Return (UseDrytisCallReturn)
| Field | Type | Description |
|-------|------|-------------|
| phase | CallPhase | Current state. |
| isMuted | boolean | Local mic muted. |
| isCameraOn | boolean | Local camera on. |
| transcript | TranscriptMessage[] | Live transcript. |
| participants | ParticipantInfo[] | Current remote participants. |
| errorMessage | string \| null | Set when phase === 'error'. |
| session | MeetingSession \| null | Active session. |
| requestSupport / startCall | () => void | Connect to the session. |
| cancelRequest | () => void | Reset to idle. |
| mute / unmute | () => void | Mic controls. |
| toggleCamera | () => void | Camera toggle. |
| isScreenSharing | boolean | Whether local screen share is active. |
| toggleScreenShare | () => void | Start / stop screen share. |
| endCall | () => void | Disconnect and reset. |
| sendMessage | (text: string) => void | Send a message over the LiveKit data channel. |
| getTranscript | () => TranscriptMessage[] | Sync transcript snapshot. |
ControlBar Component
The built-in control bar. You can use it standalone with useDrytisCall:
import { ControlBar } from '@drytis/meeting-sdk';
<ControlBar
isMuted={isMuted}
isCameraOn={isCameraOn}
isScreenSharing={isScreenSharing}
onMute={mute}
onUnmute={unmute}
onToggleCamera={toggleCamera}
onToggleScreenShare={toggleScreenShare}
onEndCall={endCall}
onClose={() => setVisible(false)}
onChat={() => setChatOpen(true)}
features={{ mute: true, screenShare: true, chat: true }}
participantCount={participants.length + 1}
participants={participants}
localParticipantName="Alice"
/>
| Prop | Type | Description |
|------|------|-------------|
| isMuted | boolean | Mute button state. |
| isCameraOn | boolean | Camera button state. |
| isScreenSharing | boolean | Screen share button active state. |
| onMute / onUnmute | () => void | Called on mute button press. |
| onToggleCamera | () => void | Called on camera button press. |
| onToggleScreenShare | () => void | Called on screen share button press. |
| onEndCall | () => void | Called on end-call button press. |
| onClose | () => void | Optional. Renders a × button that calls this when clicked. |
| onChat | () => void | Optional. Renders a chat icon that fires this event — no panel is shown by the SDK. |
| features | CallFeatures | Set individual flags to false to hide buttons. |
| participantCount | number | Badge count (include yourself). |
| participants | ParticipantInfo[] | Remote participants in the popup. |
| localParticipantName | string | Your name in the participants popup. |
The bar renders: mute · screen share · timer · participants · chat icon · more menu · close ×
Low-level API Utilities
Exported for app-level use — session fetching, queue management, and SSE event parsing. The SDK itself does not call these internally.
import {
requestSession,
cancelQueueRequest,
pollQueueStatus,
resolveSessionFromEvent,
} from '@drytis/meeting-sdk';
import type { ApiCredentials, SessionResult } from '@drytis/meeting-sdk';
requestSession
requestSession(
serverUrl: string,
sessionEndpoint: string,
credentials: ApiCredentials, // { publicKey: string }
body?: Record<string, unknown>,
metadata?: Record<string, unknown>,
): Promise<SessionResult>
Calls the session endpoint and returns:
interface SessionResult {
session: MeetingSession | null; // set if a LiveKit token was returned immediately
queueId: string | null; // set if the request was queued
subscriptionUrl: string | null; // SSE URL for queue events
subscriptionToken: string | null; // JWT for SSE auth
rawResponse: SessionResponse | null; // full raw server response
}
cancelQueueRequest
cancelQueueRequest(serverUrl: string, credentials: ApiCredentials, queueId: string, userId?: string | number): Promise<void>
pollQueueStatus
pollQueueStatus(serverUrl: string, queueId: string): Promise<Record<string, any>>
resolveSessionFromEvent
resolveSessionFromEvent(eventData: any, fallbackServerUrl: string): MeetingSession | null
Extracts a MeetingSession from a Mercure/SSE acceptance event payload.
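For intuition, the kind of extraction resolveSessionFromEvent performs might look like the sketch below. The field names mirror the responses shown in the Lifeguard example (room_name, livekit_token, serverUrl) but are assumptions — rely on the real export, not this re-implementation:

```typescript
interface MeetingSession { roomName: string; token: string; serverUrl: string; }

// Sketch only; not the SDK's actual logic.
function sessionFromEvent(eventData: any, fallbackServerUrl: string): MeetingSession | null {
  const roomName = eventData?.roomName ?? eventData?.room_name;
  const token = eventData?.livekit_token ?? eventData?.token;
  const serverUrl = eventData?.serverUrl ?? fallbackServerUrl;
  return roomName && token ? { roomName, token, serverUrl } : null;
}
```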
TypeScript Types
import type {
DrytisCallProps,
DrytisCallHandle,
CallHandle,
UseDrytisCallOptions,
UseDrytisCallReturn,
CallPhase, // 'idle' | 'connecting' | 'in_call' | 'error'
EmbedMode, // 'embedded' | 'redirect'
CallFeatures,
MeetingSession,
SessionResponse,
TranscriptMessage,
ParticipantInfo,
CallStartedData,
CallEndedData,
UserSpeakingData,
AIResponseData,
CallError,
ApiCredentials,
SessionResult,
} from '@drytis/meeting-sdk';
Full Examples
Planning — AI agent session
'use client';
import { useEffect, useRef, useState } from 'react';
import { DrytisCall } from '@drytis/meeting-sdk';
import type { DrytisCallHandle, MeetingSession } from '@drytis/meeting-sdk';
const PUBLIC_KEY = process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY!;
export function PlanningPanel({ userToken, userId, userName, projectId }: {
userToken: string; userId: string; userName: string; projectId?: number;
}) {
const ref = useRef<DrytisCallHandle>(null);
const [session, setSession] = useState<MeetingSession | null>(null);
const hasJoined = useRef(false);
// Auto-connect once session is ready
useEffect(() => {
if (!session || hasJoined.current) return;
hasJoined.current = true;
setTimeout(() => ref.current?.requestSupport(), 0);
}, [session]);
// Fetch session from your backend
useEffect(() => {
const controller = new AbortController();
fetch('/api/fde/planning-session', {
method: 'POST',
signal: controller.signal,
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${userToken}` },
body: JSON.stringify({ participantName: userName, userId, projectId }),
})
.then(r => r.json())
.then(data => {
const lkToken = data.livekit_token ?? data.token;
const roomName = data.roomName ?? data.room_name;
const serverUrl = data.serverUrl ?? data.livekit_server_url ?? '';
if (lkToken && roomName) setSession({ roomName, token: lkToken, serverUrl });
})
.catch(() => {});
return () => controller.abort();
}, [userToken, userId, userName, projectId]);
if (!session) return <p>Fetching session…</p>;
return (
<DrytisCall
ref={ref}
session={session}
publicKey={PUBLIC_KEY}
metadata={{ participantName: userName }}
features={{ renderControls: true, mute: true }}
onCallStarted={(d) => console.log('started', d.roomName)}
onCallEnded={(d) => console.log('ended after', d.duration, 's')}
onTranscriptUpdate={(msgs) => console.log(msgs.length, 'messages')}
/>
);
}
Lifeguard — human engineer queue
Your app manages the queue flow. The SDK only connects once a session is available.
'use client';
import { useEffect, useRef, useState } from 'react';
import { DrytisCall, pollQueueStatus, resolveSessionFromEvent } from '@drytis/meeting-sdk';
import type { DrytisCallHandle, MeetingSession } from '@drytis/meeting-sdk';
const SERVER = process.env.NEXT_PUBLIC_DRYTIS_MEETING_SERVER!;
const PUBLIC_KEY = process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY!;
export function LifeguardPanel({ userToken, userId, userName, projectId }: {
userToken: string; userId: string; userName: string; projectId?: number;
}) {
const ref = useRef<DrytisCallHandle>(null);
const [session, setSession] = useState<MeetingSession | null>(null);
const [waiting, setWaiting] = useState(false);
const hasJoined = useRef(false);
const pollRef = useRef<ReturnType<typeof setInterval> | null>(null);
const esRef = useRef<EventSource | null>(null);
const sessionId = useRef(crypto.randomUUID()).current;
// Auto-connect once accepted
useEffect(() => {
if (!session || hasJoined.current) return;
hasJoined.current = true;
setTimeout(() => ref.current?.requestSupport(), 0);
}, [session]);
const stopListeners = () => {
esRef.current?.close(); esRef.current = null;
if (pollRef.current) { clearInterval(pollRef.current); pollRef.current = null; }
};
const handleAccepted = (eventData: unknown) => {
stopListeners();
const resolved = resolveSessionFromEvent(eventData, SERVER);
if (resolved) { setWaiting(false); setSession(resolved); }
};
const requestHelp = async () => {
setWaiting(true);
try {
const res = await fetch('/api/help-request', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${userToken}` },
body: JSON.stringify({ userId, projectId, sessionId, participantName: userName }),
});
const data = await res.json();
// Queue was accepted immediately
const lkToken = data.queue?.livekit_token ?? data.livekit_token;
const roomName = data.room_name ?? data.roomName;
if (lkToken && roomName) {
setWaiting(false);
setSession({ roomName, token: lkToken, serverUrl: data.serverUrl ?? SERVER });
return;
}
const queueId: string | undefined = data.queue?.id?.toString();
// SSE subscription (primary)
const subUrl = data.subscription?.subscription_url;
const subToken = data.subscription?.jwt_token;
if (subUrl && subToken) {
const url = new URL(subUrl);
url.searchParams.set('authorization', subToken);
const es = new EventSource(url.toString());
esRef.current = es;
es.onmessage = (e) => {
try {
const ev = JSON.parse(e.data);
const type: string = ev.event ?? ev.type ?? '';
if (type === 'lifeguard.accepted' || type === 'fde.accepted') handleAccepted(ev);
else if (type === 'lifeguard.cancelled' || type === 'fde.cancelled') { stopListeners(); setWaiting(false); }
} catch { /* ignore */ }
};
es.onerror = () => { esRef.current?.close(); esRef.current = null; };
}
// Polling fallback every 5 s
if (queueId) {
pollRef.current = setInterval(async () => {
try {
const status = await pollQueueStatus(SERVER, queueId);
if (status.status === 'accepted') handleAccepted(status);
else if (status.status === 'cancelled') { stopListeners(); setWaiting(false); }
} catch { /* ignore */ }
}, 5000);
}
} catch (err) {
console.error('help request failed', err);
setWaiting(false);
}
};
const cancelHelp = () => {
stopListeners();
setWaiting(false);
hasJoined.current = false;
setSession(null);
};
return (
<div>
{!session && !waiting && (
<button onClick={() => void requestHelp()}>Get Help</button>
)}
{waiting && (
<>
<p>Waiting for an engineer…</p>
<button onClick={cancelHelp}>Cancel</button>
</>
)}
<DrytisCall
ref={ref}
session={session}
publicKey={PUBLIC_KEY}
metadata={{ participantName: userName }}
features={{ renderControls: true, mute: true }}
onCallStarted={() => console.log('engineer joined')}
onCallEnded={() => { hasJoined.current = false; setSession(null); }}
/>
</div>
);
}
Redirect mode
<DrytisCall
session={session}
publicKey={PUBLIC_KEY}
mode="redirect"
autoConnect
/>
The SDK navigates the browser to {session.serverUrl}/meeting/{session.roomName} once connected.
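The redirect target is composed from the session fields. A sketch of that composition — trailing-slash stripping and URL-encoding are defensive additions here, not documented SDK behaviour:

```typescript
// Illustrative helper: compose the redirect-mode target URL.
function meetingUrl(session: { serverUrl: string; roomName: string }): string {
  const base = session.serverUrl.replace(/\/+$/, ''); // drop any trailing slash
  return `${base}/meeting/${encodeURIComponent(session.roomName)}`;
}
```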
Full custom UI with the hook
import { useEffect } from 'react';
import { useDrytisCall, ControlBar } from '@drytis/meeting-sdk';
import type { MeetingSession } from '@drytis/meeting-sdk';
function CustomUI({ session }: { session: MeetingSession | null }) {
const {
phase, isMuted, isCameraOn, transcript, participants,
requestSupport, mute, unmute, toggleCamera, endCall,
} = useDrytisCall({ session, publicKey: process.env.NEXT_PUBLIC_DRYTIS_PUBLIC_KEY });
useEffect(() => {
if (session && phase === 'idle') requestSupport();
}, [session]);
if (phase !== 'in_call') return <p>{phase}</p>;
return (
<div>
<ul>
{transcript.map(m => <li key={m.id}><b>{m.sender}:</b> {m.text}</li>)}
</ul>
<ControlBar
isMuted={isMuted} isCameraOn={isCameraOn}
onMute={mute} onUnmute={unmute}
onToggleCamera={toggleCamera} onEndCall={endCall}
features={{ mute: true }}
participantCount={participants.length + 1}
participants={participants}
/>
</div>
);
}
How It Works Internally
Connect — When requestSupport() is called with a valid session, the SDK creates a LiveKit Room and calls room.connect(session.serverUrl, session.token).
Remote audio — On every TrackSubscribed event for an audio track, the SDK calls track.attach() to create an <audio> element, appends it to document.body with display: none, and plays it. This is what makes the other side audible without a video element.
Transcript — Two sources are merged into one array:
- TranscriptionReceived — LiveKit's native real-time transcription (interim + final segments, deduplicated by segment ID).
- DataReceived — JSON packets published over the reliable data channel with type: "transcript" or type: "chat".
Cleanup — On unmount or endCall(), room.disconnect() is called and all injected <audio> elements are removed from the DOM.
License
MIT
