
vocallabsai-sdk

v1.1.7

React Native SDK for real-time VocalLabs voice calls over WebSocket.

Setup

1) Install

npm install vocallabsai-sdk

2) Install peer dependencies (if your app does not already have them)

Peer dependencies used by this SDK:

  • react
  • react-native
  • react-native-audio-api
  • base-64

3) iOS setup

Add the native pod to your ios/Podfile inside the target block:

pod 'VocalLabsAudioEffects', :path => '../node_modules/vocallabsai-sdk/ios'

Then run:

cd ios && pod install

Add the microphone permission to ios/<YourApp>/Info.plist:

<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to your microphone for voice calls.</string>

4) Android permissions

Add these permissions to your app's AndroidManifest.xml:

<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />

Also request RECORD_AUDIO permission at runtime.

5) Linking

For React Native 0.60+, autolinking should pick up the module automatically.

If it is not linked, link it manually:

android/settings.gradle

include ':vocallabs-audio-effects'
project(':vocallabs-audio-effects').projectDir = new File(rootProject.projectDir, '../node_modules/vocallabsai-sdk/android')

android/app/build.gradle

dependencies {
  implementation project(':vocallabs-audio-effects')
}

6) Rebuild the app

cd android && ./gradlew clean && cd ..
npx react-native run-android

Features

  • Direct WebSocket call connection
  • Real-time microphone streaming + remote playback
  • Built-in mute/unmute and volume control
  • Speaker / earpiece toggle on Android
  • Event-driven API for connection and call state
  • Live stats for sent/received audio
  • TypeScript support out of the box
  • Android: MODE_IN_COMMUNICATION + STREAM_VOICE_CALL audio routing — echo cancellation, hardware volume buttons, Bluetooth HFP, speaker toggle
  • iOS: allowBluetoothHFP audio session option for Bluetooth headset support

Quick Start

import VocalLabsSDK from 'vocallabsai-sdk';

const sdk = new VocalLabsSDK({
  sampleRate: 8000,
  enableLogs: true,
});

sdk.on('onAudioConnected', () => {
  console.log('Audio connected');
});

sdk.on('onAudioDisconnected', () => {
  console.log('Audio disconnected');
});

sdk.on('onUserConnected', (connected) => {
  console.log('User connected:', connected);
});

sdk.on('onMuteChanged', (isMuted) => {
  console.log('Muted:', isMuted);
});

sdk.on('onError', (error) => {
  console.error('SDK error:', error);
});

await sdk.connect('wss://rupture2.vocallabs.ai/ws?callId=test-call-123&sampleRate=8000');

sdk.toggleMute();
sdk.setVolume(0.9);

const stats = sdk.getStats();
console.log(stats);

sdk.disconnect();

Configuration

interface SDKConfig {
  sampleRate?: number;   // default: 8000
  enableLogs?: boolean;  // default: true
  audioProcessing?: {
    mode?: 'off' | 'balanced' | 'aggressive';
    remoteActiveWindowMs?: number;
    noiseGateQuiet?: number;
    noiseGateRemote?: number;
    halfDuplexRms?: number;
    halfDuplexPeak?: number;
    duckLow?: number;
    duckHigh?: number;
    duckPivotRms?: number;
    dcBlockerR?: number;
  };
}
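As an illustration, here is a config that turns logging off and opts into balanced audio processing. The numeric values are placeholders, not recommended defaults, and the interface below repeats only a subset of the fields from the table above:

```typescript
// Subset of the SDKConfig shape documented above; all fields are optional.
interface SDKConfig {
  sampleRate?: number;
  enableLogs?: boolean;
  audioProcessing?: {
    mode?: 'off' | 'balanced' | 'aggressive';
    remoteActiveWindowMs?: number;
  };
}

const config: SDKConfig = {
  sampleRate: 8000,
  enableLogs: false,
  audioProcessing: {
    mode: 'balanced',
    remoteActiveWindowMs: 400, // illustrative value
  },
};
```

Pass it to the constructor as `new VocalLabsSDK(config)`.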

Core API

Connection

await sdk.connect(websocketUrl); // websocketUrl: string
sdk.disconnect();

Mic + Playback Controls

const muted = sdk.toggleMute();
sdk.setVolume(0.8); // accepts a value between 0.0 and 1.0

State + Stats

const state = sdk.getState();
const stats = sdk.getStats();
const call = sdk.getCurrentCall();
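The stats object can be summarized for periodic logging. The field names below follow the onStatsUpdate payload shown under Events; the helper itself is just a sketch, not part of the SDK:

```typescript
// Field names as seen in the onStatsUpdate event payload.
interface CallStats {
  audio: { queueSize: number };
  sending: { sentChunks: number };
}

// One-line summary suitable for logging.
function formatStats(stats: CallStats): string {
  return `queue=${stats.audio.queueSize} sentChunks=${stats.sending.sentChunks}`;
}
```

Usage: `console.log(formatStats(sdk.getStats()))`.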

Cleanup

await sdk.dispose();

Native Audio (Android)

The SDK uses AudioManager.MODE_IN_COMMUNICATION and requests audio focus on STREAM_VOICE_CALL. This gives you:

  • Echo cancellation — MODE_IN_COMMUNICATION enables hardware AEC automatically
  • Hardware volume buttons — control call volume via STREAM_VOICE_CALL
  • Speaker / earpiece toggle — setSpeakerphoneOn via the SDK
  • Bluetooth HFP — audio routed through Bluetooth headsets when connected

Speaker Toggle

// Switch to loudspeaker
await sdk.setSpeakerphone(true);

// Switch back to earpiece
await sdk.setSpeakerphone(false);

Audio Effects (AEC / NS / AGC)

Fine-grained control over hardware audio processing:

await sdk.setAcousticEchoCanceler(true);
await sdk.setNoiseSuppressor(true);
await sdk.setAutomaticGainControl(true);

const available = sdk.isNativeAudioEffectsAvailable();
const status = await sdk.getNativeAudioEffectsStatus();

Example status object:

{
  aecAvailable: true,
  aecEnabled: true,
  nsAvailable: true,
  nsEnabled: true,
  agcAvailable: true,
  agcEnabled: true,
  audioSessionId: 123
}
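One way to use this status object is to enable only the effects the device actually supports. The helper below is a sketch assuming the status shape shown above:

```typescript
// Shape of the status object returned by getNativeAudioEffectsStatus().
interface NativeEffectsStatus {
  aecAvailable: boolean; aecEnabled: boolean;
  nsAvailable: boolean;  nsEnabled: boolean;
  agcAvailable: boolean; agcEnabled: boolean;
  audioSessionId: number;
}

// Names of effects the device supports but has not yet enabled.
function pendingEffects(status: NativeEffectsStatus): string[] {
  const pending: string[] = [];
  if (status.aecAvailable && !status.aecEnabled) pending.push('aec');
  if (status.nsAvailable && !status.nsEnabled) pending.push('ns');
  if (status.agcAvailable && !status.agcEnabled) pending.push('agc');
  return pending;
}
```

For example, if `pendingEffects(status)` includes `'ns'`, call `await sdk.setNoiseSuppressor(true)`.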

Events

Supported events:

  • onAudioConnected
  • onAudioDisconnected
  • onUserConnected
  • onUserDisconnected
  • onMuteChanged
  • onStatsUpdate
  • onError
  • onLog

Example:

sdk.on('onStatsUpdate', ({ audio, sending }) => {
  console.log('Queue:', audio.queueSize);
  console.log('Sent chunks:', sending.sentChunks);
});

Android Notes

  • Grant RECORD_AUDIO permission at runtime.
  • Keep MODIFY_AUDIO_SETTINGS in AndroidManifest.
  • For Bluetooth headset support, add BLUETOOTH / BLUETOOTH_CONNECT permissions.
  • The SDK sets MODE_IN_COMMUNICATION on call start and resets to MODE_NORMAL on stop.
  • Prefer autolinking first; use manual linking only if needed.

Minimal React Native Example

import React, { useEffect, useRef, useState } from 'react';
import { View, Button, Text } from 'react-native';
import VocalLabsSDK from 'vocallabsai-sdk';

export default function CallScreen() {
  const sdkRef = useRef<VocalLabsSDK | null>(null);
  const [connected, setConnected] = useState(false);
  const [muted, setMuted] = useState(false);
  const [speaker, setSpeaker] = useState(false);

  useEffect(() => {
    const sdk = new VocalLabsSDK({ sampleRate: 8000, enableLogs: true });

    sdk.on('onAudioConnected', () => setConnected(true));
    sdk.on('onAudioDisconnected', () => setConnected(false));
    sdk.on('onMuteChanged', (m) => setMuted(m));

    sdkRef.current = sdk;

    return () => {
      sdk.dispose();
    };
  }, []);

  const start = async () => {
    await sdkRef.current?.connect('wss://rupture2.vocallabs.ai/ws?callId=test-call-123&sampleRate=8000');
  };

  const end = () => sdkRef.current?.disconnect();
  const toggle = () => sdkRef.current?.toggleMute();
  const toggleSpeaker = async () => {
    const next = !speaker;
    await sdkRef.current?.setSpeakerphone(next);
    setSpeaker(next);
  };

  return (
    <View>
      <Text>Connected: {connected ? 'Yes' : 'No'}</Text>
      <Text>Muted: {muted ? 'Yes' : 'No'}</Text>
      <Text>Speaker: {speaker ? 'On' : 'Earpiece'}</Text>
      <Button title="Start" onPress={start} />
      <Button title="Toggle Mute" onPress={toggle} />
      <Button title="Toggle Speaker" onPress={toggleSpeaker} />
      <Button title="End" onPress={end} />
    </View>
  );
}

License

MIT