@classytic/react-stream
v1.2.2
World-class React hooks for browser media device control (camera, microphone, screen share, WebRTC)
Production-ready React 19 hooks for building professional-grade video apps (Discord, Teams, Meet).
Built on useSyncExternalStore for SSR-safe hydration and tear-free state management.
Table of Contents
- Installation
- Quick Start
- Architecture
- Core Hooks
- WebRTC Integration
- Common Pitfalls
- API Reference
- Browser Support
Installation
pnpm add @classytic/react-stream
# or
npm install @classytic/react-stream
Quick Start
import { useMediaManager } from "@classytic/react-stream";
function Room() {
const {
camera,
microphone,
cameraStream,
isInitialized,
initialize,
toggleCamera,
toggleMicrophone,
switchAudioDevice,
switchVideoDevice,
getVideoTrack,
getAudioTrack,
} = useMediaManager();
return (
<div>
{!isInitialized ? (
<button onClick={initialize}>Start Media</button>
) : (
<>
<video
ref={(el) => { if (el) el.srcObject = cameraStream; }}
autoPlay
muted
playsInline
/>
<button onClick={toggleCamera}>
{camera.status === "active" ? "Camera Off" : "Camera On"}
</button>
<button onClick={toggleMicrophone}>
{microphone.trackEnabled ? "Mute" : "Unmute"}
</button>
</>
)}
</div>
);
}
Architecture
Store Pattern with useSyncExternalStore
This library uses React 19's useSyncExternalStore for state management, ensuring:
- Zero tearing - State is always consistent across concurrent renders
- SSR-safe - Proper hydration with getServerSnapshot
- Granular subscriptions - Only re-render components that need updates
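The guarantees above follow from the external-store contract that useSyncExternalStore consumes: a subscribe function, and a getSnapshot that returns a stable reference until state actually changes. A minimal framework-free sketch of that contract (illustrative only, not the library's real internals):

```typescript
// Minimal external-store sketch (illustrative; not the library's actual implementation).
type Listener = () => void;

function createStore<T extends object>(initial: T) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    // React calls subscribe/getSnapshot via useSyncExternalStore
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
    getSnapshot() {
      return state; // same reference until the next setState
    },
    setState(partial: Partial<T>) {
      state = { ...state, ...partial }; // new reference signals an update
      listeners.forEach((l) => l());
    },
  };
}

// In a component:
//   const media = useSyncExternalStore(store.subscribe, store.getSnapshot);
```

Because getSnapshot returns the same reference between updates, React can cheaply detect "no change", and every subscriber sees the same snapshot within a render pass.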
┌─────────────────────────────────────────────────────────────────┐
│ MediaStore │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Microphone │ │ Camera │ │ Screen │ │
│ │ Stream │ │ Stream │ │ Share │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └───────────────┼───────────────┘ │
│ ▼ │
│ ┌─────────────────────┐ │
│ │ useSyncExternalStore │ │
│ └─────────────────────┘ │
│ │ │
└─────────────────────────┼───────────────────────────────────────┘
▼
┌─────────────────────┐
│ React Components │
└─────────────────────┘
Callback Stability
All action callbacks (toggleCamera, toggleMicrophone, etc.) are stable references that never change. This means:
- Safe to use in useEffect dependencies
- Safe to pass to child components without a useCallback wrapper
- No infinite re-render loops from callback identity changes
// ✅ Safe - callbacks are stable
useEffect(() => {
if (someCondition) {
toggleCamera();
}
}, [someCondition, toggleCamera]); // toggleCamera never changes
State vs Refs Pattern
The library uses refs internally for mutable state that shouldn't trigger re-renders:
// Internal pattern - refs for control flow, state for UI
const isActiveRef = useRef(false); // For internal checks
const [isActive, setIsActive] = useState(false); // For UI updates
Core Hooks
useMediaManager
The main orchestration hook for camera, microphone, and screen share.
const {
// State (reactive - triggers re-renders)
camera, // { status, stream, trackEnabled, error }
microphone, // { status, stream, trackEnabled, error }
screen, // { status, stream, trackEnabled, error }
cameraStream, // MediaStream | null
screenStream, // MediaStream | null
audioLevel, // number (0-100)
isSpeaking, // boolean
isInitialized, // boolean
isInitializing, // boolean
// Actions (stable - never change identity)
initialize, // () => Promise<boolean>
toggleMicrophone, // () => void
toggleCamera, // () => Promise<void>
toggleScreenShare,// () => Promise<void>
switchAudioDevice,// (deviceId: string) => Promise<boolean>
switchVideoDevice,// (deviceId: string) => Promise<boolean>
cleanup, // () => void
// Track access (for WebRTC)
getVideoTrack, // () => MediaStreamTrack | null
getAudioTrack, // () => MediaStreamTrack | null
} = useMediaManager(options);
Options:
interface UseMediaManagerOptions {
videoConstraints?: MediaTrackConstraints | false; // false = no camera
audioConstraints?: MediaTrackConstraints | false; // false = no mic
screenShareOptions?: DisplayMediaStreamOptions;
autoInitialize?: boolean; // Auto-request permissions on mount
// Callbacks
onMicrophoneChange?: (state: DeviceState) => void;
onCameraChange?: (state: DeviceState) => void;
onScreenShareChange?: (state: DeviceState) => void;
onAudioLevel?: (data: AudioLevelData) => void;
onError?: (type: MediaDeviceType, error: Error) => void;
}
useDevices
Enumerate available media devices.
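Conceptually this amounts to calling navigator.mediaDevices.enumerateDevices() and grouping the results by kind. A framework-free sketch of that grouping (the return shape mirrors the hook's documented fields; the implementation details are an assumption):

```typescript
// Group MediaDeviceInfo-like entries by kind (sketch of what useDevices exposes).
interface DeviceInfoLike {
  deviceId: string;
  kind: "videoinput" | "audioinput" | "audiooutput";
  label: string;
}

function groupDevices(devices: DeviceInfoLike[]) {
  return {
    videoInputs: devices.filter((d) => d.kind === "videoinput"),   // cameras
    audioInputs: devices.filter((d) => d.kind === "audioinput"),   // microphones
    audioOutputs: devices.filter((d) => d.kind === "audiooutput"), // speakers
    allDevices: devices,
  };
}

// In the browser this would be fed by:
//   const devices = await navigator.mediaDevices.enumerateDevices();
```

Note that enumerateDevices() only returns non-empty labels after the user has granted a media permission, which is why device pickers are usually populated after initialize().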
const {
videoInputs, // DeviceInfo[] - cameras
audioInputs, // DeviceInfo[] - microphones
audioOutputs, // DeviceInfo[] - speakers
allDevices, // DeviceInfo[] - all devices
isLoading, // boolean
error, // Error | null
refresh, // () => Promise<void>
} = useDevices();
useAudioAnalyzer
Real-time audio level monitoring with voice activity detection.
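Under the hood this kind of hook reads an AnalyserNode's frequency data on an interval and reduces it to a level plus a speaking flag. The normalization below is an illustrative assumption, not the library's exact scaling:

```typescript
// Sketch: derive a 0-100 level and a speaking flag from AnalyserNode byte data.
function computeLevel(
  frequencyData: Uint8Array, // as filled by analyser.getByteFrequencyData()
  speakingThreshold = 5,     // matches the hook's default option
) {
  // Average the FFT bins (each 0-255), then normalize to 0-100.
  const sum = frequencyData.reduce((acc, v) => acc + v, 0);
  const raw = frequencyData.length > 0 ? sum / frequencyData.length : 0;
  const level = Math.min(100, Math.round((raw / 255) * 100));
  return { raw, level, isSpeaking: level > speakingThreshold };
}
```

In the real hook, fftSize controls the number of bins and updateInterval controls how often this computation runs.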
const {
level, // number (0-100) - normalized audio level
raw, // number - raw FFT average
isSpeaking, // boolean - above threshold
isActive, // boolean - analyzer running
start, // () => void
stop, // () => void
} = useAudioAnalyzer(stream, {
fftSize: 256,
smoothingTimeConstant: 0.8,
speakingThreshold: 5,
updateInterval: 100, // ms
});
WebRTC Integration
With LiveKit
import { useEffect } from "react";
import { useMediaManager } from "@classytic/react-stream";
import { useLocalParticipant, useTracks } from "@livekit/components-react";
function LiveKitRoom() {
const { getVideoTrack, getAudioTrack, toggleCamera, toggleMicrophone } =
useMediaManager({ autoInitialize: true });
const { localParticipant } = useLocalParticipant();
// Publish tracks to LiveKit
useEffect(() => {
const videoTrack = getVideoTrack();
const audioTrack = getAudioTrack();
if (videoTrack && localParticipant) {
localParticipant.publishTrack(videoTrack, { name: 'camera' });
}
if (audioTrack && localParticipant) {
localParticipant.publishTrack(audioTrack, { name: 'microphone' });
}
}, [localParticipant, getVideoTrack, getAudioTrack]);
return (
<div>
<button onClick={toggleCamera}>Toggle Camera</button>
<button onClick={toggleMicrophone}>Toggle Mic</button>
</div>
);
}
With Raw RTCPeerConnection
import { useEffect, useRef } from "react";
import { useMediaManager } from "@classytic/react-stream";
function WebRTCCall() {
const { getVideoTrack, getAudioTrack, isInitialized } = useMediaManager();
const pcRef = useRef<RTCPeerConnection | null>(null);
const sendersRef = useRef<Map<string, RTCRtpSender>>(new Map());
// Setup peer connection
useEffect(() => {
pcRef.current = new RTCPeerConnection({
iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});
return () => pcRef.current?.close();
}, []);
// Add tracks when ready
useEffect(() => {
if (!isInitialized || !pcRef.current) return;
const videoTrack = getVideoTrack();
const audioTrack = getAudioTrack();
if (videoTrack) {
const sender = pcRef.current.addTrack(videoTrack);
sendersRef.current.set('video', sender);
}
if (audioTrack) {
const sender = pcRef.current.addTrack(audioTrack);
sendersRef.current.set('audio', sender);
}
}, [isInitialized, getVideoTrack, getAudioTrack]);
// Handle track replacement (e.g., device switch)
const replaceTrack = async (kind: 'video' | 'audio') => {
const sender = sendersRef.current.get(kind);
const track = kind === 'video' ? getVideoTrack() : getAudioTrack();
if (sender && track) {
await sender.replaceTrack(track);
}
};
return <div>...</div>;
}
With Daily.co
import { useEffect } from "react";
import { useMediaManager } from "@classytic/react-stream";
import { useDaily } from "@daily-co/daily-react";
function DailyRoom() {
const daily = useDaily();
const { getVideoTrack, getAudioTrack, switchVideoDevice } = useMediaManager();
// Update Daily when tracks change
useEffect(() => {
if (!daily) return;
const videoTrack = getVideoTrack();
if (videoTrack) {
daily.setLocalVideo(true);
}
}, [daily, getVideoTrack]);
// Switch camera
const handleCameraSwitch = async (deviceId: string) => {
await switchVideoDevice(deviceId);
// Daily will automatically pick up the new track
};
return <div>...</div>;
}
Common Pitfalls
1. Creating MediaStream in Render
Problem: Creating new MediaStream() during render causes infinite loops.
// ❌ BAD - creates new MediaStream every render
function BadExample({ track }) {
const stream = new MediaStream([track]); // New object every render!
return <video ref={(el) => { if (el) el.srcObject = stream; }} />;
}
// ✅ GOOD - memoize the MediaStream
function GoodExample({ track }) {
const stream = useMemo(
() => track ? new MediaStream([track]) : null,
[track]
);
// srcObject is not a React-managed attribute, so attach it via a ref
return <video ref={(el) => { if (el) el.srcObject = stream; }} />;
}
2. Object Options in Dependencies
Problem: Inline objects change identity every render.
// ❌ BAD - options object changes every render
function BadExample() {
const { level } = useAudioAnalyzer(stream, {
fftSize: 256, // New object every render!
});
}
// ✅ GOOD - stable options reference
const ANALYZER_OPTIONS = { fftSize: 256 };
function GoodExample() {
const { level } = useAudioAnalyzer(stream, ANALYZER_OPTIONS);
}
// ✅ ALSO GOOD - useMemo for dynamic options
function GoodExample2({ fftSize }) {
const options = useMemo(() => ({ fftSize }), [fftSize]);
const { level } = useAudioAnalyzer(stream, options);
}
3. Forgetting Cleanup
Problem: Media tracks keep running after component unmount.
// ❌ BAD - tracks leak
function BadExample() {
const { initialize } = useMediaManager();
useEffect(() => { initialize(); }, []);
// No cleanup!
}
// ✅ GOOD - cleanup in useEffect
function GoodExample() {
const { initialize, cleanup } = useMediaManager();
useEffect(() => {
initialize();
return () => cleanup(); // Stop tracks on unmount
}, [initialize, cleanup]);
}
4. Using State in Callbacks That Set State
Problem: Using state in a callback's dependencies when that callback sets the same state.
// ❌ BAD - infinite loop
const start = useCallback(() => {
if (isActive) return; // Depends on isActive
setIsActive(true); // Sets isActive
}, [isActive]); // isActive changes → start changes → effect runs
// ✅ GOOD - use ref for internal checks
const isActiveRef = useRef(false);
const start = useCallback(() => {
if (isActiveRef.current) return; // Check ref
isActiveRef.current = true;
setIsActive(true); // State for UI only
}, []); // Stable callback
5. Not Handling Device Disconnection
Problem: Camera/mic gets unplugged but UI doesn't update.
// ✅ GOOD - handle device changes
function GoodExample() {
const { camera, microphone } = useMediaManager({
autoSwitchDevices: true, // Auto-reacquire on disconnect
onError: (type, error) => {
console.error(`${type} error:`, error);
// Show user notification
},
});
// Check for ended tracks
if (camera.status === 'error') {
return <div>Camera disconnected: {camera.error}</div>;
}
}
6. Screen Share Audio
Problem: Forgetting to include system audio in screen share.
// ✅ GOOD - request system audio
const { toggleScreenShare } = useMediaManager({
screenShareOptions: {
video: true,
audio: true, // Include system audio (tab audio)
},
});
API Reference
Device Status
type DeviceStatus =
| 'idle' // Not started
| 'acquiring' // Requesting permission
| 'active' // Track is live and enabled
| 'muted' // Track is live but disabled
| 'stopped' // Track was stopped (camera off)
| 'error'; // Error occurred
DeviceState
interface DeviceState {
status: DeviceStatus;
stream: MediaStream | null;
trackEnabled: boolean;
error: string | null;
}
Subpath Imports (Tree-Shaking)
// Only import what you need for smaller bundles
import { useDevices } from '@classytic/react-stream/devices';
import { useConstraints, QUALITY_PRESETS } from '@classytic/react-stream/constraints';
import { useScreenShare } from '@classytic/react-stream/screen';
import { useAudioAnalyzer } from '@classytic/react-stream/audio';
import { useTrackPublisher } from '@classytic/react-stream/webrtc';
import { useNoiseSuppression } from '@classytic/react-stream/fx/audio';
import { useWorkerProcessor } from '@classytic/react-stream/fx/processor';
import { MediaProvider, useMediaContext } from '@classytic/react-stream/context';
AI & Processing
Audio Noise Suppression (WASM)
import { useNoiseSuppression } from "@classytic/react-stream/fx/audio";
function NoiseControl({ micTrack }) {
const ns = useNoiseSuppression({
wasmUrl: "/models/rnnoise.wasm",
onReady: () => console.log("NS ready"),
onError: (err) => console.error(err),
});
// Start processing
const enableNS = () => ns.start(micTrack);
// Use ns.processedTrack for WebRTC
return (
<button onClick={enableNS} disabled={ns.isActive}>
{ns.isActive ? "NS Active" : "Enable Noise Suppression"}
</button>
);
}
Video Processing (Off-Thread)
import { useWorkerProcessor } from "@classytic/react-stream/fx/processor";
function BackgroundBlur({ videoTrack }) {
const processor = useWorkerProcessor({
workerUrl: "/workers/blur-worker.js",
config: { blurRadius: 15 },
onReady: () => console.log("Worker ready"),
});
// processor.processedTrack is the blurred video
return (
<button onClick={() => processor.start(videoTrack)}>
Enable Background Blur
</button>
);
}
Browser Support
| Feature           | Chrome | Firefox | Safari | Edge |
| ----------------- | ------ | ------- | ------ | ---- |
| Core Media        | 74+    | 66+     | 14+    | 79+  |
| Worker Processing | 94+    | -       | -      | 94+  |
| WebTransport      | 97+    | 114+    | -      | 97+  |
| WebCodecs         | 94+    | 130+    | 16.4+  | 94+  |
| AudioWorklet      | 66+    | 76+     | 14.1+  | 79+  |
Debug Mode
Enable debug logging to see internal state changes:
import { enableDebug, disableDebug } from "@classytic/react-stream";
// In development
if (process.env.NODE_ENV === 'development') {
enableDebug();
}
// Or enable specific loggers
enableDebug('useMediaManager');
enableDebug('createMediaStore');
Agent Skill
This package includes an agent skill for AI coding agents (Claude Code, Cursor, Copilot, Cline, etc.):
npx skills add classytic/rtc --skill react-stream
The skill provides context-aware guidance for using the library: API patterns, WebRTC integration, common pitfalls, and browser support.
License
MIT
