# geoiq-react-lk-vision-bot-client
SDK for Vision Bot
## Installation

Yarn:
yarn add geoiq-react-lk-vision-bot-client

NPM:
npm install geoiq-react-lk-vision-bot-client --save

## Core Components
### Authentication
POST https://beapis-in.staging.geoiq.ai/vision/user/v1.0/login

Input:
{
"u_email" : "",
"u_password" : ""
}

Output:
{
"message": "User verified successfully.",
"user": {
"access_token": "",
"jwt_access_expires_in_milli_secs": 86400000,
"u_ai_id": 442,
"u_created_on": "2025-05-13 12:46:19.233187",
"u_email": "",
"u_id": 1,
"u_name": "",
"u_status": 0,
"u_updated_on": "2025-05-13 12:46:19.233187"
}
}

POST https://beapis-in.staging.geoiq.ai/vision/user/v1.0/getsdkaccesstoken

Headers:
- Authorization: "Bearer ${token}"

Output:
{
"accessToken": "...",
"identity": "tQuUTvhF08",
"room_name": "5HBS2HPTOO"
}
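For reference, a minimal sketch of this two-step flow using fetch. The credential values are placeholders, the helper name is hypothetical, and the assumption is that the returned accessToken is what you pass to LKRoom as its token prop:

// Hypothetical helper illustrating the authentication flow described above
async function fetchVisionBotAccess(email: string, password: string) {
  // Step 1: log in with user credentials to obtain a JWT access token
  const loginRes = await fetch(
    "https://beapis-in.staging.geoiq.ai/vision/user/v1.0/login",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ u_email: email, u_password: password }),
    }
  );
  const { user } = await loginRes.json();

  // Step 2: exchange the JWT for an SDK access token and room assignment
  const sdkRes = await fetch(
    "https://beapis-in.staging.geoiq.ai/vision/user/v1.0/getsdkaccesstoken",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${user.access_token}` },
    }
  );
  // accessToken is what you would pass to <LKRoom token={...}>;
  // identity and room_name identify the assigned room
  const { accessToken, identity, room_name } = await sdkRes.json();
  return { accessToken, identity, room_name };
}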
### LKRoom

The main container component that establishes a connection to a room and provides context to its children.
Props:
- serverUrl (string): The server URL
- token (string): Authentication token
- options (object, optional): Additional connection options
- connect (boolean, optional): Whether to connect automatically (default: true)
- onConnected (function, optional): Callback when connection is established
- onDisconnected (function, optional): Callback when disconnected
- onError (function, optional): Callback invoked when a connection error occurs (used in the example below)
Example:
import {
  LKRoom,
  RoomAudioRenderer,
  StartAudio,
} from "geoiq-react-lk-vision-bot-client";
function App() {
  // wsUrl, token, shouldConnect and setToastMessage come from your application state
return (
<LKRoom
className="flex flex-col h-full w-full"
serverUrl={wsUrl}
token={token}
connect={shouldConnect}
onError={(e) => {
setToastMessage({ message: e.message, type: "error" });
console.error(e);
}}
>
{children}
<RoomAudioRenderer />
<StartAudio label="Click to enable audio playback" />
</LKRoom>
);
}

### RoomAudioRenderer
Automatically renders audio for all participants in the room.
Example:
import { LKRoom, RoomAudioRenderer } from "geoiq-react-lk-vision-bot-client";
function AudioChat() {
return (
<LKRoom serverUrl="wss://your-...-server.com" token="your-token">
<RoomAudioRenderer />
{/* Audio is now automatically played for all participants */}
</LKRoom>
);
}

### StartAudio
Button component to explicitly start audio playback, useful for browsers that require user interaction before playing audio.
Props:
- label (string, optional): Custom button text
- className (string, optional): CSS class for styling
Example:
import { LKRoom, StartAudio } from "geoiq-react-lk-vision-bot-client";
function AudioApp() {
return (
<LKRoom serverUrl="wss://your-...-server.com" token="your-token">
<StartAudio label="Click to Enable Audio" className="audio-button" />
</LKRoom>
);
}

## Hooks
### useConnectionState
Returns the current connection state of the room.
Returns: ConnectionState enum value

export enum ConnectionState {
  Disconnected = 'disconnected',
  Connecting = 'connecting',
  Connected = 'connected',
  Reconnecting = 'reconnecting',
  SignalReconnecting = 'signalReconnecting',
}

Example:
import {
LKRoom,
useConnectionState,
ConnectionState,
} from "geoiq-react-lk-vision-bot-client";
function ConnectionIndicator() {
const connectionState = useConnectionState();
let statusMessage = "Disconnected";
let statusColor = "red";
switch (connectionState) {
case ConnectionState.Connected:
statusMessage = "Connected";
statusColor = "green";
break;
case ConnectionState.Connecting:
statusMessage = "Connecting...";
statusColor = "orange";
break;
// Handle other states
}
return <div style={{ color: statusColor }}>Status: {statusMessage}</div>;
}

### useLocalParticipant
Provides access to the local participant object and convenience methods.
Returns:
- participant: LocalParticipant object
- isMuted: Boolean indicating if audio is muted
- isCameraEnabled: Boolean indicating if camera is on
- isScreenShareEnabled: Boolean indicating if screen sharing is active
- cameraPublication: Local camera track publication
- microphonePublication: Local microphone track publication
- screenSharePublication: Local screen share track publication
- setCameraEnabled: Function to enable or disable the camera (requests camera access)
- setMicrophoneEnabled: Function to enable or disable the microphone (requests microphone access)
Example:
import { useEffect } from "react";
import {
  LKRoom,
  useLocalParticipant,
  useConnectionState,
  ConnectionState,
} from "geoiq-react-lk-vision-bot-client";
function LocalParticipantControls() {
const {
participant,
isMuted,
isCameraEnabled,
setCameraEnabled,
setMicrophoneEnabled,
} = useLocalParticipant();
const connectionState = useConnectionState();
useEffect(() => {
  // Enable camera and microphone once the room is connected
  if (connectionState === ConnectionState.Connected) {
    setCameraEnabled(true);
    setMicrophoneEnabled(true);
  }
}, [connectionState, setCameraEnabled, setMicrophoneEnabled]);
return (
<div>
<h3>Local User: {participant.identity}</h3>
<div>Microphone: {isMuted ? "Muted" : "Active"}</div>
<div>Camera: {isCameraEnabled ? "On" : "Off"}</div>
<button onClick={() => participant.setMicrophoneEnabled(!isMuted)}>
Toggle Mic
</button>
</div>
);
}

### useTracks
Returns track publications based on provided filters.
Parameters:
- options (object):
- sources (array, optional): Array of Track.Source values to filter
- onlySubscribed (boolean, optional): Only return subscribed tracks
- participantIdentity (string, optional): Only return tracks from this participant
Returns: Array of track publications matching the filter
Example:
import { useMemo } from "react";
import {
  useTracks,
  Track,
  LocalParticipant,
  VideoTrack,
} from "geoiq-react-lk-vision-bot-client";
function VideoGrid() {
  // Fetch camera track references for participants in the room
  const tracks = useTracks({ sources: [Track.Source.Camera] });
  // Pick out the local participant's camera track
  const localVideoTrack = useMemo(() => {
    return tracks.find(
      (trackRef) =>
        trackRef.participant instanceof LocalParticipant &&
        trackRef.source === Track.Source.Camera
    );
  }, [tracks]);
  if (!localVideoTrack) return null;
  return (
    <VideoTrack
      className="rounded-xl object-contain w-full h-64"
      trackRef={localVideoTrack}
    />
  );
}

### useTrackToggle
Provides functions to toggle local tracks (audio/video/screenshare).
Parameters:
- publication: The TrackPublication to toggle
- initialState (optional): Initial enabled state
Returns:
- enabled: Boolean indicating if track is enabled
- toggle: Function to toggle track state
- enable: Function to enable track
- disable: Function to disable track
Example:
import {
LKRoom,
useLocalParticipant,
useTrackToggle,
} from "geoiq-react-lk-vision-bot-client";
function MediaControls() {
const { microphonePublication, cameraPublication } = useLocalParticipant();
const { enabled: micEnabled, toggle: toggleMic } = useTrackToggle(
microphonePublication
);
const { enabled: cameraEnabled, toggle: toggleCamera } =
useTrackToggle(cameraPublication);
return (
<div className="controls">
<button onClick={toggleMic}>{micEnabled ? "Mute" : "Unmute"}</button>
<button onClick={toggleCamera}>
{cameraEnabled ? "Turn Off Camera" : "Turn On Camera"}
</button>
</div>
);
}

### useVoiceAssistant
Manages voice assistant features and state.
Returns:
- speaking: Boolean indicating if the assistant is currently speaking
- muted: Boolean indicating if the assistant is muted
- toggleMute: Function to toggle mute state
- setMuted: Function to set mute state
Example:
import { LKRoom, useVoiceAssistant } from "geoiq-react-lk-vision-bot-client";
function AssistantControls() {
const { speaking, muted, toggleMute } = useVoiceAssistant();
return (
<div className="assistant-panel">
<div className="status">
{speaking && (
<div className="speaking-indicator">Assistant is speaking...</div>
)}
</div>
<button onClick={toggleMute} className={muted ? "muted" : "active"}>
{muted ? "Enable Assistant" : "Disable Assistant"}
</button>
{/* Visual indicator for speaking state */}
<div className={`indicator ${speaking ? "active" : ""}`}></div>
</div>
);
}

### useRoomContext
Access the room context for advanced usage.
Returns: The room context object
Example:
import { LKRoom, useRoomContext } from "geoiq-react-lk-vision-bot-client";
function RoomInfo() {
const context = useRoomContext();
const room = context.room;
return (
<div>
<h3>Room Information</h3>
<div>Room Name: {room?.name}</div>
<div>Participant Count: {room?.participants.size}</div>
<button onClick={() => room?.disconnect()}>Exit Room</button>
</div>
);
}

### useIsSpeaking
Parameters:
- participant (Participant, optional): The participant to monitor
Returns: boolean indicating whether the participant is currently speaking
Example:
const isSpeaking = useIsSpeaking(participant);
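As a small illustrative sketch (not part of the package's bundled examples), a speaking indicator for the local participant might look like this, using the participant object returned by useLocalParticipant:

import { useIsSpeaking, useLocalParticipant } from "geoiq-react-lk-vision-bot-client";
function SpeakingIndicator() {
  const { participant } = useLocalParticipant();
  // Re-renders whenever the participant's speaking state changes
  const isSpeaking = useIsSpeaking(participant);
  return <div>{isSpeaking ? "Speaking..." : "Silent"}</div>;
}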
### useConnectionQualityIndicator

Returns the connection quality for a participant as a ConnectionQuality enum value:

export enum ConnectionQuality {
Excellent = 'excellent',
Good = 'good',
Poor = 'poor',
Lost = 'lost',
Unknown = 'unknown',
}
Example:
const { localParticipant } = useLocalParticipant();
const { quality } = useConnectionQualityIndicator({
participant: localParticipant,
});
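A minimal sketch of a quality badge built on this hook; it assumes ConnectionQuality is exported by the package and reuses the localParticipant value from the snippet above:

import {
  useConnectionQualityIndicator,
  useLocalParticipant,
  ConnectionQuality,
} from "geoiq-react-lk-vision-bot-client";
function ConnectionQualityBadge() {
  const { localParticipant } = useLocalParticipant();
  const { quality } = useConnectionQualityIndicator({
    participant: localParticipant,
  });
  // quality is one of the ConnectionQuality string values listed above
  const label = quality === ConnectionQuality.Unknown ? "Checking..." : quality;
  return <span className={`quality-${quality}`}>Connection: {label}</span>;
}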
### useDataChannel

You can listen for incoming text messages on a specific "topic".
Example:
import { useDataChannel } from "geoiq-react-lk-vision-bot-client";
import { useCallback } from "react";
// transcripts / setTranscripts come from a useState hook in your component
const onDataReceived = useCallback(
(msg: any) => {
console.log(msg, "msg");
if (msg.topic === "transcription") {
const decoded = JSON.parse(
new TextDecoder("utf-8").decode(msg.payload)
);
let timestamp = new Date().getTime();
if ("timestamp" in decoded && decoded.timestamp > 0) {
timestamp = decoded.timestamp;
}
setTranscripts([
...transcripts,
{
name: "You",
message: decoded.text,
timestamp: timestamp,
isSelf: true,
},
]);
} else {
const decoded = JSON.parse(
new TextDecoder("utf-8").decode(msg.payload)
);
console.log(decoded, "decoded");
}
},
[transcripts]
);
useDataChannel(onDataReceived);

### Publishing Data to Agent
You can also send data to a specific topic.
Example:
import { useDataChannel } from "geoiq-react-lk-vision-bot-client";
import { useCallback } from "react";
function MyComponent() {
// Specify the topic when calling useDataChannel for sending
const { send } = useDataChannel("topic-from-client-to-agent");
// Function to send data to the agent
const sendDataToAgent = useCallback(
(data: Record<string, unknown>) => {
if (send) {
try {
const message = JSON.stringify(data);
const payload = new TextEncoder().encode(message);
// Send data on the pre-defined topic
// The topic can also be specified here if not done in the hook, or to override it
send(payload, { topic: "topic-from-client-to-agent" });
console.log("Sent data to agent (topic-from-client-to-agent):", data);
} catch (error) {
console.error("Error encoding or sending data to agent:", error);
}
} else {
console.warn(
"useDataChannel send function is not available. Cannot send data to agent."
);
}
},
[send] // Dependency: send function from useDataChannel
);
  // Example usage: call sendDataToAgent from an event handler or effect.
  // queryString and recommendedProduct_ids come from your application state.
  const acknowledgeRecommendations = () => {
    sendDataToAgent({
      type: "acknowledgement",
      status: "recommended_products_processed",
      details: {
        query: queryString,
        count: recommendedProduct_ids.length,
        clientTimestamp: new Date().toISOString(),
      },
    });
  };
  return <button onClick={acknowledgeRecommendations}>Acknowledge</button>;
}

## Types Reference
### ConnectionState
Enum representing possible connection states for a room:
- ConnectionState.Disconnected: Not connected to the room
- ConnectionState.Connecting: Connection is in progress
- ConnectionState.Connected: Successfully connected to the room
- ConnectionState.Reconnecting: Attempting to reconnect after a disconnection
- ConnectionState.SignalReconnecting: Re-establishing the signalling connection after an interruption
### LocalParticipant
Extends the Participant type with additional methods for managing local media:
- enableCameraAndMicrophone(): Enable both camera and microphone
- disableCameraAndMicrophone(): Disable both camera and microphone
- setMicrophoneEnabled(enabled: boolean): Toggle microphone
- setCameraEnabled(enabled: boolean): Toggle camera
- setScreenShareEnabled(enabled: boolean): Toggle screen sharing
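For example, a rough sketch of a screen-share toggle built on these methods, using the participant and isScreenShareEnabled values returned by useLocalParticipant:

import { useLocalParticipant } from "geoiq-react-lk-vision-bot-client";
function ScreenShareButton() {
  const { participant, isScreenShareEnabled } = useLocalParticipant();
  const toggleScreenShare = async () => {
    // Enabling screen share triggers the browser's screen picker
    await participant.setScreenShareEnabled(!isScreenShareEnabled);
  };
  return (
    <button onClick={toggleScreenShare}>
      {isScreenShareEnabled ? "Stop Sharing" : "Share Screen"}
    </button>
  );
}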
### Track
Constants and types related to media tracks:
Source Types:
- Track.Source.Camera: Video from a camera
- Track.Source.Microphone: Audio from a microphone
- Track.Source.ScreenShare: Screen sharing video
- Track.Source.ScreenShareAudio: Audio from screen sharing
- Track.Source.Unknown: Unknown track source
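As a sketch, these source constants are what you pass to useTracks to select a particular kind of track. For instance, rendering the first available screen-share track (VideoTrack is the same component used in the useTracks example above):

import { useTracks, Track, VideoTrack } from "geoiq-react-lk-vision-bot-client";
function ScreenShareView() {
  // Only return screen-share video tracks that are currently subscribed
  const tracks = useTracks({
    sources: [Track.Source.ScreenShare],
    onlySubscribed: true,
  });
  if (tracks.length === 0) return <div>No one is sharing their screen</div>;
  return (
    <VideoTrack
      className="w-full h-96 object-contain"
      trackRef={tracks[0]}
    />
  );
}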
### TranscriptionSegment
Object representing a segment of transcription:
- participant: The participant who spoke
- text: The transcribed text
- timestamp: When the segment was created
- isFinal: Whether this is the final version of this segment
## Advanced Examples

### Complete Room with Controls and Transcription
import React, { useState } from "react";
import {
LKRoom,
RoomAudioRenderer,
StartAudio,
useConnectionState,
useLocalParticipant,
useTracks,
useTrackToggle,
useTrackTranscription,
Track,
ConnectionState,
} from "geoiq-react-lk-vision-bot-client";
function VisionBotApp() {
  const [token, setToken] = useState("your-...-token");
  const [url, setUrl] = useState("wss://your-...-server.com");
  // shouldConnect can be driven by your UI; handleError is passed to LKRoom below
  const [shouldConnect, setShouldConnect] = useState(true);
  const handleError = (e) => console.error(e);
return (
<div className="app-container">
<h1>Vision Bot Client</h1>
<LKRoom
serverUrl={url}
token={token}
connect={shouldConnect}
onError={handleError}
className="flex flex-col h-full w-full"
>
<RoomAudioRenderer />
<StartAudio />
        {/* Render your UI here, e.g. <ConnectionStatus />, <ParticipantControls />, <LiveTranscriptionPanel /> */}
</LKRoom>
</div>
);
}
function ConnectionStatus() {
const connectionState = useConnectionState();
let statusMessage = "";
let statusClass = "";
switch (connectionState) {
case ConnectionState.Connected:
statusMessage = "Connected";
statusClass = "status-connected";
break;
case ConnectionState.Connecting:
statusMessage = "Connecting...";
statusClass = "status-connecting";
break;
case ConnectionState.Reconnecting:
statusMessage = "Reconnecting...";
statusClass = "status-reconnecting";
break;
default:
statusMessage = "Disconnected";
statusClass = "status-disconnected";
}
return (
<div className={`connection-status ${statusClass}`}>
Status: {statusMessage}
</div>
);
}
function ParticipantControls() {
const { participant, microphonePublication, cameraPublication } =
useLocalParticipant();
const { enabled: micEnabled, toggle: toggleMic } = useTrackToggle(
microphonePublication
);
const { enabled: cameraEnabled, toggle: toggleCamera } =
useTrackToggle(cameraPublication);
return (
<div className="controls-panel">
<h3>You: {participant.identity}</h3>
<div className="controls">
<button
onClick={toggleMic}
className={`control-button ${micEnabled ? "active" : "inactive"}`}
>
{micEnabled ? "Mute" : "Unmute"}
</button>
<button
onClick={toggleCamera}
className={`control-button ${cameraEnabled ? "active" : "inactive"}`}
>
{cameraEnabled ? "Turn Off Camera" : "Turn On Camera"}
</button>
</div>
</div>
);
}
function LiveTranscriptionPanel() {
const tracks = useTracks({
sources: [Track.Source.Microphone],
onlySubscribed: true,
});
// Get all microphone tracks for transcription
return (
<div className="transcription-container">
<h3>Live Transcription</h3>
{tracks.length === 0 ? (
<div className="no-tracks">No active speakers found</div>
) : (
tracks.map((track) => (
<TrackTranscription key={track.trackSid} track={track.publication} />
))
)}
</div>
);
}
function TrackTranscription({ track }) {
const { segments, clearTranscript } = useTrackTranscription(track);
return (
<div className="transcript-panel">
<div className="transcript-header">
<div>{track.participant.identity}</div>
<button onClick={clearTranscript} className="clear-button">
Clear
</button>
</div>
<div className="transcript-content">
{segments.map((segment, idx) => (
<div
key={idx}
className={`segment ${segment.isFinal ? "final" : "interim"}`}
>
{segment.text}
</div>
))}
</div>
</div>
);
}