# @simfinity/constellation-react

v1.2.1

React bindings for Simfinity Constellation Client — persistent real-time LLM chat rooms (text + audio).
## Overview

Constellation provides:

- An abstraction layer for interacting with different LLMs
- Persistent server-side sessions
- Real-time streaming over WebSocket
- Text + audio conversations
- Configurable system instructions & session settings
- Reconnectable chat rooms

This package is a React wrapper around `@simfinity/constellation-client`. It provides a context provider and hooks for lifecycle management.
## Installation

```shell
npm install @simfinity/constellation-react
# or
yarn add @simfinity/constellation-react
```

## Architecture
Session lifecycle:

1. `startSession()` → REST call
2. `joinSession()` → WebSocket connection
3. `configureSession()` (optional) → WebSocket messages
4. `sendText()` / `sendAudioChunk()` + `commitAudio()` → WebSocket messages
5. `endSession()` → REST call

⚠️ A session MUST be started before it can be joined.

⚠️ A session SHOULD be ended when finished.
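The lifecycle above can be sketched as a single helper. The `LifecycleSession` interface below is a hypothetical local stand-in for the session object returned by `useConstellationSession()` (the package's real types may differ); the helper enforces both warnings: start before join, and always end the session.

```typescript
// Hypothetical stand-in for the session object's lifecycle surface.
// The real package types may differ.
interface LifecycleSession {
  startSession(params: object): Promise<void>;                  // REST call
  joinSession(audio: boolean, handlers: object): Promise<void>; // WebSocket connection
  sendText(text: string): void;                                 // WebSocket message
  endSession(): Promise<void>;                                  // REST call
}

// Runs one text exchange in the documented order. The try/finally
// guarantees endSession() runs even if joining or sending throws.
async function runTextExchange(session: LifecycleSession, text: string): Promise<void> {
  await session.startSession({ llmProvider: "openai", voiceEnabled: false });
  try {
    await session.joinSession(false, { onStreamClosed: console.log });
    session.sendText(text);
  } finally {
    await session.endSession();
  }
}
```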
## Minimal Working Example (Text-Only)

```tsx
import React, { useEffect } from "react";
import {
  ConstellationProvider,
  useConstellationSession
} from "@simfinity/constellation-react";
// Assumption: SessionStartParameters is exported by the package;
// adjust this import if your version exposes the type elsewhere.
import type { SessionStartParameters } from "@simfinity/constellation-react";

function Chat() {
  const constellationSession = useConstellationSession();

  useEffect(() => {
    async function init() {
      const params: SessionStartParameters = {
        llmProvider: "openai",
        voiceEnabled: false,
        behaviour: {
          temperature: 0.9,
          instructions: "Just have a nice and casual conversation.",
        }
      };

      await constellationSession.startSession(params);

      await constellationSession.joinSession(false, {
        onStreamClosed: console.log,
        onTranscriptResponse: (msg) => {
          console.log("Model:", msg);
        }
      });

      constellationSession.sendText("Hello!");
    }

    init();

    return () => {
      constellationSession.endSession();
    };
  }, []);

  return <div>Chat running...</div>;
}

export default function App() {
  return (
    <ConstellationProvider config={{
      sessionEndpoint: "https://your-api",
      streamingEndpoint: "wss://your-stream",
      key: "YOUR_SECRET_KEY"
    }}>
      <Chat />
    </ConstellationProvider>
  );
}
```

## Audio Mode
To enable audio:

```ts
// Create an audio-enabled session
const params: SessionStartParameters = {
  llmProvider: "openai",
  voiceEnabled: true,
  interruptions: true,
  voiceName: "alloy",
  behaviour: {
    temperature: 0.9,
    instructions: "Just have a nice and casual conversation.",
  }
};

await constellationSession.startSession(params);

// Join a stream, subscribing to audio events
await constellationSession.joinSession(true, {
  onStreamClosed: console.log,
});

// Use the built-in audio capture and playback device
await constellationSession.audioDevice().in.start();
constellationSession.audioDevice().in.setMuted(true);
```

Audio requirements:
Input:

- Format: PCM, 24 kHz
- Encoding: Base64
- Transcription: handled server-side

Output:

- Format: PCM, 24 kHz
- Encoding: Base64

Sending audio is handled internally by the package.
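For illustration only (the package handles this internally): a sketch of producing the Base64-encoded PCM input described above from raw microphone samples. The README does not state a bit depth, so this assumes 16-bit little-endian samples; resampling to 24 kHz is out of scope here.

```typescript
// Convert Float32 audio samples (range [-1, 1], as produced by the
// Web Audio API) to Base64-encoded 16-bit PCM.
// Assumption: 16-bit depth is not specified by the docs.
function float32ToPcm16Base64(samples: Float32Array): string {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;        // scale to 16-bit range
  }
  return Buffer.from(pcm.buffer).toString("base64");
}
```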
## Text and Transcript

Text is always enabled in a session. However, the client application must provide the appropriate handlers to receive events:

```ts
interface EventHandlers {
  // ...
  onTranscriptInput?: (transcript: string) => void;
  onTranscriptResponse?: (transcript: string) => void;
}
```

⚠️ These events serve both text exchanges AND transcripts of audio exchanges:

```ts
// Pseudo-code:

// Text:
constellationSession.sendText("Hello");
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is "Hello"
// 2) onTranscriptResponse(transcript) -> transcript is the response from the LLM

// Audio:
// ... Internally, constellationSession does 'send audio data + commit audio message' ...
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is the transcription of the user's audio input
// 2) onTranscriptResponse(transcript) -> transcript is the transcript of the LLM's audio response
```

Partial text events are also available, through the `onTranscriptInputPart` and `onTranscriptResponsePart` events. They fire whenever a new piece of text becomes available, allowing a more reactive, real-time, temporary display of incoming text. However, they cannot be trusted to build the final message: always rely on the final "non-part" event as the source of truth.
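That pattern can be sketched as a small view object: append parts for responsiveness, but overwrite the display with the final event's text rather than trusting the accumulation. `createTranscriptView` is a hypothetical helper, not part of the package; only the handler names match the events listed above.

```typescript
// Hypothetical helper: render partial transcript text eagerly, but treat
// the final (non-part) event as the source of truth.
function createTranscriptView() {
  let display = "";
  return {
    onTranscriptResponsePart(part: string) {
      display += part;  // temporary, best-effort rendering
    },
    onTranscriptResponse(full: string) {
      display = full;   // final event overwrites any accumulated parts
    },
    current: () => display,
  };
}
```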
## Session Configuration

System behavior can be updated dynamically, mid-session:

```ts
configureSession({
  temperature: 0.2,
  instructions: "You are a helpful coding assistant.",
  maxResponseToken: 500
});
```

This does NOT trigger a response.
## Event Handlers

Event handlers are client-provided hooks for receiving the server events discussed above. They are provided at `joinSession()` time.

`joinSession()` requires at least:

```ts
{
  onStreamClosed: (reason: string) => void
}
```

Optional handlers:

- `onSessionConfigured`: acknowledgment of a `configureSession()` call
- `onResponseEnd`: the model has finished generating and streaming a response
- `onTranscriptInput`: the echo of a user text input, or the transcript of a user audio input
- `onTranscriptInputPart`: the echo of a user text input, or a piece of the transcript of a user audio input
- `onTranscriptResponse`: a model text response, or the transcript of a model audio response
- `onTranscriptResponsePart`: a piece of a model text response, or of the transcript of a model audio response
- `onAgentResponse`: in a session with agents configured, this event carries agent feedback
- `onClientAction`: in a session with client actions defined, this event fires when the model calls a named action
- `onTechnicalError`
- `onLatencyUpdate`: fired regularly with the latest computed latency between the client and Constellation

Omitted handlers are ignored silently and safely.
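The "ignored silently and safely" behavior can be mirrored on the client side by filling omitted handlers with no-ops. The `EventHandlers` type below is a local stand-in listing only a few of the handlers above, and `withDefaults` is a hypothetical helper, not a package export.

```typescript
// Local stand-in for the package's handler interface (subset only).
type EventHandlers = {
  onStreamClosed: (reason: string) => void;          // required
  onResponseEnd?: () => void;
  onTranscriptResponse?: (transcript: string) => void;
  onTechnicalError?: (error: unknown) => void;
};

const noop = () => {};

// Fill omitted handlers with no-ops so callers can invoke any of them
// without undefined checks.
function withDefaults(h: EventHandlers): Required<EventHandlers> {
  return {
    onStreamClosed: h.onStreamClosed,
    onResponseEnd: h.onResponseEnd ?? noop,
    onTranscriptResponse: h.onTranscriptResponse ?? noop,
    onTechnicalError: h.onTechnicalError ?? noop,
  };
}
```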
