@simfinity/constellation-ui
v1.0.12
React bindings for Simfinity Constellation — persistent real-time LLM chat rooms (text + audio).
Overview
Constellation provides:
- An abstraction layer to interact with LLMs
- Persistent server-side sessions
- WebSocket real-time streaming
- Text + audio conversations
- Configurable system instructions & session settings
- Reconnectable chat rooms
This package is a React wrapper around:
@simfinity/constellation-client
It provides context + hooks for lifecycle management.
Installation
npm install @simfinity/constellation-ui
# or
yarn add @simfinity/constellation-ui
# Dependency:
npm install @simfinity/constellation-client
# or
yarn add @simfinity/constellation-client
Architecture
Session lifecycle:
startSession() → REST call
joinSession() → WebSocket connection
configureSession() (optional) → WebSocket messages
sendText() / sendAudioChunk() → WebSocket messages
endSession() → REST call
⚠️ A session MUST be started before it can be joined.
⚠️ A session SHOULD be ended when finished.
Minimal Working Example (Text-Only)
import React, { useEffect } from "react";
import WebClient from "@simfinity/constellation-client";
import {
ConstellationProvider,
useConstellationClient
} from "@simfinity/constellation-ui";
const client = new WebClient({
sessionEndpoint: "https://your-api",
streamingEndpoint: "wss://your-stream",
key: "YOUR_SECRET_KEY",
llm: "openai",
model: "gpt-4o-realtime-preview-2024-12-17"
});
function Chat() {
const {
startSession,
joinSession,
sendText,
endSession
} = useConstellationClient();
useEffect(() => {
async function init() {
await startSession(false);
await joinSession(false, {
onStreamClosed: console.log,
onTranscriptResponse: (msg) => {
console.log("Model:", msg);
}
});
sendText("Hello!");
}
init();
return () => {
endSession();
};
}, []);
return <div>Chat running...</div>;
}
export default function App() {
return (
<ConstellationProvider client={client}>
<Chat />
</ConstellationProvider>
);
}
Audio Mode
To enable audio:
// Create an audio-enabled session
await startSession(true, "alloy");
// Join a stream subscribing to audio events
await joinSession(true, {
onStreamClosed: console.log,
onAudioResponseStart: () => console.log("Speaking..."),
onAudioResponseChunk: (chunk) => audioPlayer.enqueue(chunk),
onAudioResponseEnd: () => console.log("Done")
});
Audio requirements:
- Format: PCM16, 16 kHz
- Encoding: Base64
- Transcription: handled server-side
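On the playback side, the `audioPlayer.enqueue(chunk)` call shown above must decode each Base64 chunk back into PCM16 samples before scheduling output. A minimal sketch of such a queue, assuming Node's `Buffer` for decoding (the `PcmQueue` class is illustrative, not part of this package):

```javascript
// Minimal playback-side queue: decodes Base64 PCM16 (little-endian) chunks
// into Int16 samples. Illustrative only — not part of @simfinity/constellation-ui.
class PcmQueue {
  constructor() {
    this.samples = []; // decoded Int16 samples awaiting playback
  }

  // Decode one Base64-encoded PCM16 chunk and buffer its samples.
  enqueue(base64Chunk) {
    const bytes = Buffer.from(base64Chunk, "base64");
    for (let i = 0; i + 1 < bytes.length; i += 2) {
      this.samples.push(bytes.readInt16LE(i));
    }
  }

  // Seconds of buffered audio at 16 kHz mono.
  bufferedSeconds() {
    return this.samples.length / 16000;
  }
}
```

In a browser, the decoded samples would typically be normalized to Float32 and fed to the Web Audio API rather than kept in a plain array.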
Send audio:
sendAudioChunk(base64PcmChunk);
commitAudio();
commitAudio() is optional (silence auto-flushes).
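Browser microphone APIs typically deliver Float32 samples, so the client has to downconvert to PCM16 and Base64-encode before calling sendAudioChunk(). A sketch of that conversion step, assuming the input is already resampled to 16 kHz mono (the function name is illustrative):

```javascript
// Convert Float32 samples in [-1, 1] to a Base64-encoded PCM16 (little-endian)
// chunk suitable for sendAudioChunk(). Resampling to 16 kHz is assumed to have
// happened upstream (e.g. an AudioContext created with sampleRate: 16000).
function float32ToBase64Pcm16(float32Samples) {
  const bytes = Buffer.alloc(float32Samples.length * 2);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp, then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    bytes.writeInt16LE(Math.round(s * 32767), i * 2);
  }
  return bytes.toString("base64");
}

// Usage: sendAudioChunk(float32ToBase64Pcm16(micFrame));
```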
⚠️ Server-side silence detection triggers a model response automatically; however, for optimal use the client should implement its own audio-input noise detection rather than streaming audio data continuously.
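One simple client-side gate is an RMS energy threshold per PCM16 frame: frames below the threshold are never streamed. A sketch, where the threshold value is an assumption to be tuned per microphone and environment:

```javascript
// Return true when a PCM16 frame carries speech-like energy.
// The default threshold (~1% of full scale) is illustrative and should be tuned.
function isSpeech(int16Samples, threshold = 0.01) {
  let sumSquares = 0;
  for (const s of int16Samples) {
    const normalized = s / 32768; // map to [-1, 1)
    sumSquares += normalized * normalized;
  }
  const rms = Math.sqrt(sumSquares / int16Samples.length);
  return rms > threshold;
}
```

Only frames that pass the gate would be encoded and forwarded to sendAudioChunk(); the server's silence detection then handles turn-taking.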
Text and Transcript
Text is always enabled in a session. However, the client must provide the appropriate handlers to receive events:
interface EventHandlers {
// ...
onTranscriptInput?: (transcript: string) => void;
onTranscriptResponse?: (transcript: string) => void;
}
⚠️ These events serve both text exchanges AND transcripts of audio exchanges:
// Pseudo-code:
// Text:
constellationClient.sendText("Hello");
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is "Hello"
// 2) onTranscriptResponse(transcript) -> transcript is the response from the LLM
// Audio:
constellationClient.sendAudioChunk("... PCM16 audio data for 'Hello'...");
constellationClient.commitAudio();
// Triggers:
// 1) onTranscriptInput(transcript) -> transcript is "Hello"
// 2) onTranscriptResponse(transcript) -> transcript is the response from the LLM
Session Configuration
System behavior can be updated dynamically, mid-session:
configureSession({
temperature: 0.2,
instructions: "You are a helpful coding assistant.",
maxResponseToken: 500
});
This does NOT trigger a response.
Event Handlers
joinSession() requires at least:
{
onStreamClosed: (reason: string) => void
}
Optional handlers:
- onSessionConfigured: acknowledgment of configureSession call
- onAudioResponseStart: model is beginning to stream an audio response
- onAudioResponseChunk: audio data chunk of a model audio response
- onAudioResponseEnd: model has finished streaming an audio response
- onTranscriptInput: follows a text or audio input from the client
- onTranscriptResponse: model text response or transcript of model audio response
- onTechnicalError
If a handler is omitted, its events are silently and safely ignored.
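For example, a chat UI usually funnels both transcript events into one message list. A self-contained sketch of such a handlers object (the `messages` array is illustrative; in a React component it would live in state via useState or useReducer):

```javascript
// Accumulate transcript events into a simple message list.
// In a real component this array would be React state, not a module variable.
const messages = [];

const handlers = {
  // Required: called when the WebSocket stream closes.
  onStreamClosed: (reason) => messages.push({ role: "system", text: reason }),
  // Fired for both typed text and transcribed audio input.
  onTranscriptInput: (transcript) => messages.push({ role: "user", text: transcript }),
  // Fired for model text responses and transcripts of model audio.
  onTranscriptResponse: (transcript) => messages.push({ role: "assistant", text: transcript }),
};

// The object would then be passed to joinSession(withAudio, handlers).
```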
