@cogstream/copilotkit
React integration layer for CogStream — wraps sensing, interpretation, and agent output in a developer-friendly API that works with [CopilotKit](https://copilotkit.ai).
Installation
```bash
npm install @cogstream/copilotkit
```

Peer dependencies: `next >=14`, `react >=18`, `react-dom >=18`
Quick start
Place `<CopilotKit>` above `<CogStreamProvider>` in your tree, then wrap your app:
```tsx
import { CopilotKit } from '@copilotkit/react-core';
import { CogStreamProvider } from '@cogstream/copilotkit';

export default function Layout({ children }: { children: React.ReactNode }) {
  return (
    <CopilotKit runtimeUrl="/api/copilotkit">
      <CogStreamProvider
        apiKey={process.env.NEXT_PUBLIC_COGSTREAM_API_KEY!}
        sessionId={crypto.randomUUID()}
      >
        {children}
      </CogStreamProvider>
    </CopilotKit>
  );
}
```

Get an API key by calling `POST /admin/tenant` on your CogStream instance.
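For example, a minimal provisioning call might look like the sketch below. The request body and response shape are assumptions, so check your CogStream instance's admin API for the actual contract:

```ts
// Sketch only: the `name` field and the { apiKey } response shape are
// assumptions, not a documented contract of the /admin/tenant endpoint.
const res = await fetch('https://your-cogstream-instance.example/admin/tenant', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'yourapp' }),
});
const { apiKey } = await res.json(); // e.g. csk_yourapp_...
```

Once you have a key, expose it to the client as `NEXT_PUBLIC_COGSTREAM_API_KEY` for the quick start above.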
Provider
<CogStreamProvider>
| Prop | Type | Required | Description |
|------|------|----------|-------------|
| apiKey | string | yes | Tenant API key (csk_yourapp_...) |
| sessionId | string | yes | Stable session identifier for this user |
| userState | UserStateModel | no | Override the active user state model |
| children | ReactNode | yes | |
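Because `sessionId` should stay stable for a given user, you may prefer to persist it client-side rather than generate a fresh UUID on every request as the quick start does. A sketch of one way to do that in a client component (this helper is illustrative, not part of the package):

```ts
// Illustrative helper, not exported by @cogstream/copilotkit: persist a
// session id in localStorage so the same value is reused across page loads.
export function getStableSessionId(): string {
  const key = 'cogstream-session-id';
  let id = localStorage.getItem(key);
  if (!id) {
    id = crypto.randomUUID();
    localStorage.setItem(key, id);
  }
  return id;
}
```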
Hooks
useVoiceControl(options?)
Manages voice input, output, and interruption detection.
```tsx
import { useVoiceControl } from '@cogstream/copilotkit';

function VoiceButton() {
  const { isRecording, startRecording, stopRecording } = useVoiceControl({
    onSignal: (signal) => console.log('voice signal', signal),
  });
  return (
    <button onPointerDown={startRecording} onPointerUp={stopRecording}>
      {isRecording ? 'Listening...' : 'Hold to speak'}
    </button>
  );
}
```

Returns `{ isRecording, startRecording(), stopRecording(), isSpeaking, cancelSpeech() }`.
Voice activity detection fires `speech_start` / `speech_end` events per detected utterance within a recording session.
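The exact shape of the objects passed to `onSignal` isn't shown here, so treat the field names below as assumptions; the sketch only illustrates reacting to utterance boundaries:

```tsx
import { useVoiceControl } from '@cogstream/copilotkit';

function UtteranceLogger() {
  // Assumed signal shape: a `type` field carrying 'speech_start' / 'speech_end'.
  // Verify against the RawVoiceSignal type before relying on these names.
  const { isRecording } = useVoiceControl({
    onSignal: (signal) => {
      if (signal.type === 'speech_start') console.log('utterance started');
      if (signal.type === 'speech_end') console.log('utterance ended');
    },
  });
  return <span>{isRecording ? 'recording' : 'idle'}</span>;
}
```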
useCogStreamContext()
Access the provider context from anywhere in the tree.
```tsx
import { useCogStreamContext } from '@cogstream/copilotkit';

function StatusBar() {
  const { agentSpeaking, userSpeaking, userState } = useCogStreamContext();
  return <div>{agentSpeaking ? 'Agent is speaking...' : null}</div>;
}
```

Returns `{ apiKey, sessionId, userState, agentSpeaking, userSpeaking, setAgentSpeaking(), setUserSpeaking(), setUserState() }`.
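A common pattern is to use these flags for barge-in: cancel agent speech as soon as the user starts talking. The wiring below is a sketch of one way to combine the two hooks, not something prescribed by the package:

```tsx
import { useEffect } from 'react';
import { useCogStreamContext, useVoiceControl } from '@cogstream/copilotkit';

function BargeInGuard() {
  const { userSpeaking, agentSpeaking } = useCogStreamContext();
  const { cancelSpeech } = useVoiceControl();

  // Stop agent audio whenever the user and the agent are speaking at once.
  useEffect(() => {
    if (userSpeaking && agentSpeaking) cancelSpeech();
  }, [userSpeaking, agentSpeaking, cancelSpeech]);

  return null;
}
```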
Generative UI
Register components for dynamic agent-driven rendering:
```tsx
import { registerComponent, GenerativeRenderer } from '@cogstream/copilotkit';

registerComponent('HelpCard', HelpCardComponent);
registerComponent('StepGuide', StepGuideComponent);

// In your layout — renders whatever the agent decides to show
<GenerativeRenderer />
```
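Registered components render when the agent selects them. A minimal sketch of one such component, assuming the agent supplies its props as plain JSON (the `title` and `body` fields are illustrative, not a documented contract):

```tsx
// Hypothetical props: the actual payload shape depends on what your agent emits.
function HelpCardComponent({ title, body }: { title: string; body: string }) {
  return (
    <section>
      <h3>{title}</h3>
      <p>{body}</p>
    </section>
  );
}
```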
Surface components
Pre-built surfaces for common agent UX patterns:
```tsx
import {
  CogStreamChat,
  CogStreamSidebar,
  CogStreamPopup,
} from '@cogstream/copilotkit';
```
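Usage follows the usual CopilotKit surface pattern; the sketch below just mounts one surface with defaults, since component-specific props are not documented here:

```tsx
import { CogStreamSidebar } from '@cogstream/copilotkit';

// Mount the pre-built sidebar surface. Shown with defaults only; check the
// component's props before relying on this in production.
export function SupportSurface() {
  return <CogStreamSidebar />;
}
```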
Voice adapters (advanced)
```ts
import {
  startVAD,
  createVoiceInput,
  createVoiceOutput,
  createInterruptionDetector,
} from '@cogstream/copilotkit';
```
Notes
- `<CogStreamProvider>` does not wrap `<CopilotKit>` — place `<CopilotKit>` above it
- STT routes through Deepgram Nova-2 via your Next.js backend proxy
- TTS routes through ElevenLabs via your Next.js backend proxy
- Voice signals are emitted as `RawVoiceSignal` objects and merged into the episode stream by `@cogstream/sensing`
