# @simfinity/constellation-ui

v1.1.16

React bindings for Simfinity Constellation Client — persistent real-time LLM chat rooms (text + audio).
## Overview

Constellation provides:

- An abstraction layer to interact with different LLMs
- Persistent server-side sessions
- WebSocket real-time streaming
- Text + audio conversations
- Configurable system instructions & session settings
- Reconnectable chat rooms

This package is a React wrapper around `@simfinity/constellation-client`. It provides context + hooks for lifecycle management.
## Installation

```bash
npm install @simfinity/constellation-ui
# or
yarn add @simfinity/constellation-ui
```

Dependency:

```bash
npm install @simfinity/constellation-client
# or
yarn add @simfinity/constellation-client
```

## Architecture
Session lifecycle:

1. `startSession()` → REST call
2. `joinSession()` → WebSocket connection
3. `configureSession()` (optional) → WebSocket messages
4. `sendText()` / `sendAudioChunk()` + `commitAudio()` → WebSocket messages
5. `endSession()` → REST call

⚠️ A session MUST be started before it can be joined.
⚠️ A session SHOULD be ended when finished.
## Minimal Working Example (Text-Only)

```tsx
import React, { useEffect } from "react";
import WebClient from "@simfinity/constellation-client";
// NOTE: SessionStartParameters is assumed to be exported by the client package
import type { SessionStartParameters } from "@simfinity/constellation-client";
import {
  ConstellationProvider,
  useConstellationClient
} from "@simfinity/constellation-ui";

const client = new WebClient({
  sessionEndpoint: "https://your-api",
  streamingEndpoint: "wss://your-stream",
  key: "YOUR_SECRET_KEY",
});

function Chat() {
  const client = useConstellationClient();

  useEffect(() => {
    async function init() {
      const params: SessionStartParameters = {
        llmProvider: "openai",
        voiceEnabled: false,
        behaviour: {
          temperature: 0.9,
          instructions: "Just have a nice and casual conversation.",
        },
      };

      await client.startSession(params);
      await client.joinSession(false, {
        onStreamClosed: console.log,
        onTranscriptResponse: (msg) => {
          console.log("Model:", msg);
        },
      });

      client.sendText("Hello!");
    }

    init();

    return () => {
      client.endSession();
    };
  }, []);

  return <div>Chat running...</div>;
}

export default function App() {
  return (
    <ConstellationProvider client={client}>
      <Chat />
    </ConstellationProvider>
  );
}
```

## Audio Mode
To enable audio:

```ts
// Create an audio-enabled session
const params: SessionStartParameters = {
  llmProvider: "openai",
  voiceEnabled: true,
  voiceName: "alloy",
  behaviour: {
    temperature: 0.9,
    instructions: "Just have a nice and casual conversation.",
  },
};
await startSession(params);

// Join the stream, subscribing to audio events
await joinSession(true, {
  onStreamClosed: console.log,
  onAudioResponseStart: () => console.log("Speaking..."),
  onAudioResponseChunk: (chunk) => audioPlayer.enqueue(chunk),
  onAudioResponseEnd: () => console.log("Done"),
});
```

Audio requirements:
Input:

- Format: PCM, 16 kHz
- Encoding: Base64
- Transcription: handled server-side

Send audio:

```ts
sendAudioChunk(base64PcmChunk);
commitAudio();
```

`commitAudio()` is MANDATORY: server-side VAD (voice activity detection) is not provided.
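For illustration, microphone samples from the Web Audio API (floats in [-1, 1]) can be converted into the expected base64-encoded 16-bit PCM chunks with a small helper. This is a sketch, not part of the library; `Buffer` handles base64 here (Node), in a browser you would build a binary string and call `btoa()` instead:

```typescript
// Convert Web Audio float samples ([-1, 1]) into a base64 16-bit PCM chunk.
// Illustrative helper, NOT part of @simfinity/constellation-client.
function floatToBase64Pcm16(samples: Float32Array): string {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // Clamp, then scale to the signed 16-bit range (little-endian PCM).
    const s = Math.max(-1, Math.min(1, samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  // Node: Buffer; in the browser, use btoa() over a binary string instead.
  return Buffer.from(pcm.buffer).toString("base64");
}

// Usage sketch (assuming `client` is a joined WebClient):
// client.sendAudioChunk(floatToBase64Pcm16(chunkFromMicrophone));
// client.commitAudio(); // once the utterance is complete
```

Remember the input format above: the samples must already be at 16 kHz before encoding.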
⚠️ Clients should always implement audio-input noise detection and explicit commits:

- The current Constellation version does not provide it server-side
- It avoids continuously streaming audio data, which reduces network and token consumption
- It allows for a more responsive experience, as VAD introduces a constant delay and is potentially less stable
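A minimal client-side "commit gate" can be built from chunk energy alone: keep sending while the RMS level is above a threshold, and commit after a run of quiet chunks. The sketch below is illustrative (the threshold and hangover values are assumptions to tune for your microphone and environment):

```typescript
// Minimal energy-based voice detection sketch: decides when to send vs commit.
// Illustrative only; thresholds must be tuned for the actual input signal.
class CommitGate {
  private quietChunks = 0;
  private speaking = false;

  constructor(
    private threshold = 0.01, // RMS level below which a chunk counts as silence
    private hangover = 10     // consecutive quiet chunks before committing
  ) {}

  // Returns "send" while speech is ongoing, "commit" when the utterance ends,
  // and "skip" for leading silence (which should not be streamed at all).
  push(samples: Float32Array): "send" | "commit" | "skip" {
    let energy = 0;
    for (let i = 0; i < samples.length; i++) energy += samples[i] * samples[i];
    const rms = Math.sqrt(energy / samples.length);

    if (rms >= this.threshold) {
      this.speaking = true;
      this.quietChunks = 0;
      return "send";
    }
    if (!this.speaking) return "skip"; // leading silence: don't stream it
    if (++this.quietChunks >= this.hangover) {
      this.speaking = false;
      this.quietChunks = 0;
      return "commit";
    }
    return "send"; // short pause inside an utterance
  }
}

// Usage sketch:
// const gate = new CommitGate();
// onMicChunk((samples) => {
//   const action = gate.push(samples);
//   if (action === "send") client.sendAudioChunk(encode(samples));
//   else if (action === "commit") client.commitAudio();
// });
```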
Output:

Audio responses are:

- Format: PCM, 24 kHz
- Encoding: Base64
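On the way back, each audio response chunk can be decoded from base64 into float samples for playback, for example through an `AudioBuffer` created at 24 000 Hz. A sketch of the decode step (not part of the library; `Buffer` handles base64 here, in a browser use `atob()`):

```typescript
// Decode a base64 16-bit PCM chunk into Web Audio float samples ([-1, 1]).
// Illustrative helper, NOT part of @simfinity/constellation-client.
function base64Pcm16ToFloat(chunk: string): Float32Array {
  // Copy into a fresh buffer so the Int16Array view is properly aligned.
  // Node: Buffer; in the browser, decode with atob() into a Uint8Array.
  const bytes = Uint8Array.from(Buffer.from(chunk, "base64"));
  const pcm = new Int16Array(bytes.buffer, 0, bytes.length >> 1);
  const out = new Float32Array(pcm.length);
  for (let i = 0; i < pcm.length; i++) out[i] = pcm[i] / 0x8000;
  return out;
}

// Playback sketch with the Web Audio API (browser only):
// const ctx = new AudioContext();
// const samples = base64Pcm16ToFloat(chunk);
// const buffer = ctx.createBuffer(1, samples.length, 24000); // 24 kHz output
// buffer.copyToChannel(samples, 0);
// const src = ctx.createBufferSource();
// src.buffer = buffer;
// src.connect(ctx.destination);
// src.start();
```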
## Text and Transcript

Text is always enabled in a session. However, the client must provide the appropriate handlers to receive events:

```ts
interface EventHandlers {
  // ...
  onTranscriptInput?: (transcript: string) => void;
  onTranscriptResponse?: (transcript: string) => void;
}
```

⚠️ These events serve both text exchanges AND transcripts of audio exchanges:

```ts
// Pseudo-code:

// Text:
constellationClient.sendText("Hello");
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is "Hello"
// 2) onTranscriptResponse(transcript) -> transcript is the response from the LLM

// Audio:
constellationClient.sendAudioChunk("... PCM16 audio data for 'Hello'...");
constellationClient.commitAudio();
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is "Hello"
// 2) onTranscriptResponse(transcript) -> transcript is the response from the LLM
```

Partial text events are provided as well, through the onTranscriptInputPart and onTranscriptResponsePart events. They fire whenever a new piece of text is available, allowing a more reactive, real-time (but temporary) display of incoming text. They cannot be trusted to build the final message: always rely on the final "non-part" event as the source of truth.
## Session Configuration

System behavior can be updated dynamically, mid-session:

```ts
configureSession({
  temperature: 0.2,
  instructions: "You are a helpful coding assistant.",
  maxResponseToken: 500
});
```

This does NOT trigger a response.
## Event Handlers

The event handlers are client-provided callbacks that receive all the server events discussed above. They are provided at joinSession() time.

joinSession() requires at least:

```ts
{
  onStreamClosed: (reason: string) => void
}
```

Optional handlers:

- onSessionConfigured: acknowledgment of a configureSession call
- onAudioResponseStart: the model has started streaming an audio response
- onAudioResponseChunk: audio data chunk of a model audio response
- onAudioResponseEnd: the model has finished streaming an audio response
- onResponseEnd: the model has finished generating and streaming a response
- onTranscriptInput: the echo of a user text input, or the transcript of a user audio input
- onTranscriptInputPart: the echo of a user text input, or a piece of the transcript of a user audio input
- onTranscriptResponse: a model text response, or the transcript of a model audio response
- onTranscriptResponsePart: a piece of a model text response, or of the transcript of a model audio response
- onAgentResponse: in a session with agents configured, this event carries agent feedback
- onClientAction: in a session with client actions defined, this event is fired when a named action is called by the model
- onTechnicalError
- onLatencyUpdate: fired regularly with the last computed latency between the client and Constellation

If a handler is omitted, its events are ignored silently and safely.
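Putting these together, a fuller joinSession call might look like the sketch below. Only onStreamClosed is required; the logging bodies are illustrative, and the payload types of handlers not shown earlier in this README (onSessionConfigured, onResponseEnd, onTechnicalError) are assumptions:

```typescript
// A fuller handler set for joinSession(). Only onStreamClosed is required;
// every other handler may be omitted and its events are silently ignored.
// Payload types beyond the documented transcript handlers are assumptions.
const handlers = {
  onStreamClosed: (reason: string) => console.log("closed:", reason),
  onSessionConfigured: () => console.log("configuration acknowledged"),
  onTranscriptInputPart: (piece: string) => console.log("you (partial):", piece),
  onTranscriptInput: (text: string) => console.log("you:", text),
  onTranscriptResponsePart: (piece: string) => console.log("model (partial):", piece),
  onTranscriptResponse: (text: string) => console.log("model:", text),
  onResponseEnd: () => console.log("response complete"),
  onTechnicalError: (err: unknown) => console.error("error:", err),
};

// Usage sketch:
// await client.joinSession(false, handlers);
```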
