# @cogstream/sensing
Browser SDK for CogStream — captures UI and voice signals, segments them into behavioral episodes, and streams them to the CogStream interpretation service.
Raw signals (pointer events, keystrokes, audio) never leave the browser. Only the derived EpisodeV2 object is transmitted.
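An episode is a small derived summary, not a raw event log. Here is a minimal sketch of its shape, assuming only the fields used in the examples below plus the configured identifier; the `EpisodeV2` type exported by the package is the authoritative definition:

```ts
// Illustrative sketch only. episode_type, patterns, and session_id are the
// names referenced elsewhere in this README; their types here are assumptions.
interface EpisodeV2Sketch {
  session_id: string;   // the identifier passed to SensingRuntime
  episode_type: string; // behavioral category of the episode
  patterns: unknown[];  // derived patterns; element shape not documented here
}
```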
## Installation
```bash
npm install @cogstream/sensing
```

## Quick start
```ts
import { SensingRuntime } from '@cogstream/sensing';

const sensing = new SensingRuntime({
  session_id: 'user-abc-123',
  endpoint: 'https://your-cogstream-instance/episode',
  apiKey: 'csk_yourapp_...',
});

sensing.start();

// Stop and flush on page unload
window.addEventListener('beforeunload', () => sensing.stop());
```

Get an API key by calling `POST /admin/tenant` on your CogStream instance (requires `COGSTREAM_ADMIN_KEY`).
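A sketch of that provisioning call, assuming the admin key travels as a bearer token and the response body carries the new key; the exact header and response shape are assumptions:

```ts
// Hypothetical provisioning call: the Authorization header and the
// response shape are assumptions, not documented behavior.
const res = await fetch('https://your-cogstream-instance/admin/tenant', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.COGSTREAM_ADMIN_KEY}` },
});
const tenant = await res.json();
console.log(tenant); // expected to include a csk_... API key
```

Run this server-side (e.g. from a Node script), not in the browser, so the admin key stays private.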
## Local-only mode (no server)
Process episodes in the browser without sending them anywhere:
```ts
const sensing = new SensingRuntime({
  session_id: 'user-abc-123',
  onEpisode: (episode) => {
    console.log('Episode:', episode.episode_type, episode.patterns);
  },
});

sensing.start();
```

## Voice
```ts
const sensing = new SensingRuntime({
  session_id: 'user-abc-123',
  voice_enabled: true,
  endpoint: 'https://your-cogstream-instance/episode',
  apiKey: 'csk_yourapp_...',
});

sensing.start();
await sensing.enableVoice(); // requests microphone permission
```

Swap the default STT adapter:
```ts
import { WebSpeechSTTAdapter } from '@cogstream/sensing';

sensing.setSTTAdapter(new WebSpeechSTTAdapter());
```

## API
### SensingRuntime

```ts
new SensingRuntime(config: SensingRuntimeConfig)
```

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| session_id | string | yes | Stable identifier for this user session |
| user_id | string | no | Optional authenticated user ID |
| voice_enabled | boolean | no | Start with voice capture enabled (default: false) |
| flushIntervalMs | number | no | Episode emission interval in ms (default: 500) |
| onEpisode | (episode: EpisodeV2) => void | one of | Local callback for each completed episode |
| endpoint | string | one of | CogStream service URL for remote episode submission |
| apiKey | string | if endpoint set | Bearer token for the remote endpoint |
| onPartialEpisode | (partial: PartialEpisode) => void | no | Called with in-flight state on each flush |
At least one of onEpisode or endpoint must be provided.
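Both may also be supplied together. Whether episodes then fan out to the callback as well as the endpoint is not stated here, so treat this combined config as an assumption:

```ts
const sensing = new SensingRuntime({
  session_id: 'user-abc-123',
  endpoint: 'https://your-cogstream-instance/episode',
  apiKey: 'csk_yourapp_...',
  // Assumption: with both sinks configured, each completed episode reaches
  // the callback as well as the remote endpoint.
  onEpisode: (episode) => console.log('local copy:', episode.episode_type),
});
```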
Methods:
| Method | Description |
|--------|-------------|
| start() | Begin signal capture |
| stop() | Stop capture and flush the final episode |
| enableVoice() | Start voice capture (requests mic permission) |
| disableVoice() | Stop voice capture |
| setSTTAdapter(adapter) | Replace the STT implementation |
| getCurrentEpisode() | Returns the in-progress PartialEpisode |
| getSessionId() | Returns the configured session_id |
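For in-flight visibility, `onPartialEpisode` is called on each flush, and `getCurrentEpisode()` returns the same in-progress state on demand; the logging below is illustrative:

```ts
const sensing = new SensingRuntime({
  session_id: 'user-abc-123',
  onEpisode: (episode) => console.log('completed:', episode.episode_type),
  onPartialEpisode: (partial) => {
    // Fires on each flush (every flushIntervalMs, default 500 ms)
    console.log('in flight:', partial);
  },
});
sensing.start();

// Or poll on demand:
const inProgress = sensing.getCurrentEpisode();
```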
## Lightweight primitives (advanced)
For integration with custom pipelines:
```ts
import { createSignalCollector, createWindowingEngine } from '@cogstream/sensing';

// Capture raw signals from a DOM subtree
const collector = createSignalCollector();
collector.attach(document.body);

// Segment collected signals into episodes
const engine = createWindowingEngine();

const signals = collector.drain(); // take the buffered signals
engine.feed(signals);
const ep = engine.buildEpisodeV2([], 'session-id');
```
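In practice you would likely drain on an interval rather than once. A sketch, assuming `drain()` returns an array and that `drain()`/`feed()` tolerate repeated calls:

```ts
// Periodic pipeline sketch; the array return type of drain() and the
// repeat-call behavior of drain()/feed() are assumptions.
setInterval(() => {
  const batch = collector.drain();
  if (batch.length === 0) return; // nothing captured since last tick
  engine.feed(batch);
  const episode = engine.buildEpisodeV2([], 'session-id');
  console.log('episode:', episode);
}, 500); // mirrors the runtime's default flushIntervalMs
```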
## Captured signals

The SDK attaches passive listeners for: click, focus, blur, scroll, input (change), field validation errors, form submit errors, pointer move/down/up, and idle detection (3 s debounce).
