
valenceai

v1.0.6

Published

Node.js SDK for Valence AI Emotion Detection API - Real-time, Async, and Streaming Support

Downloads

190

Readme

Valence SDK for Emotion Detection

valenceai is a Node.js SDK for interacting with the Valence AI API for emotion analysis. It provides a convenient interface to upload audio files, stream real-time audio, and retrieve detected emotional states.

Features

  • Discrete audio processing - Real-time analysis for short audio clips
  • Asynch audio processing - Multipart parallel upload for long audio files with temporal emotion analysis
  • Streaming API - Real-time WebSocket streaming for live audio
  • Rate limiting - Monitor API usage and limits
  • Environment configuration - Built-in support for .env files
  • Enhanced logging - Configurable log levels with timestamps
  • TypeScript ready - Full JSDoc documentation for all functions

The emotional classification model used in our APIs is optimized for North American English conversational data. The included model detects four emotions: angry, happy, neutral, and sad. New models coming soon.

API Overview

| API | Best For | Input | Output |
|-----|----------|-------|--------|
| Discrete | Real-time analysis | Short audio (4-10s) | Single emotion prediction |
| Asynch | Pre-recorded files | Long audio (up to 1 GB) | Timeline with emotion changes |
| Streaming | Live audio streams | Audio chunks via WebSocket | Real-time emotion updates |

The DiscreteAPI is built for real-time analysis of emotions in audio data. Small snippets of audio are sent to the API, which returns real-time feedback on the emotions detected from tone of voice. This API operates on an approximate per-sentence basis, and audio must be cut to the appropriate size.

The AsynchAPI is built for emotion analysis of pre-recorded audio files. Files of any length, up to 1 GB in size, can be sent to the API to receive a timeline of emotions throughout the file.

The StreamingAPI is built for real-time audio analysis via WebSocket connections. The audio stream is analyzed in real-time and emotions are returned in reference to 5-second chunks of streamed audio.
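A quick sketch of what those 5-second analysis windows imply on the client side: if you are preparing audio yourself, you can split a mono sample array into 5-second chunks before sending. The `chunkSamples` helper below is illustrative only (its name and the raw-sample representation are assumptions, not SDK types).

```javascript
// Illustrative helper: split a mono PCM sample array into 5-second chunks,
// mirroring the 5-second analysis windows used by the Streaming API.
// `samples` / `sampleRate` are assumptions for this sketch, not SDK types.
function chunkSamples(samples, sampleRate, chunkSeconds = 5) {
  const chunkSize = sampleRate * chunkSeconds;
  const chunks = [];
  for (let i = 0; i < samples.length; i += chunkSize) {
    chunks.push(samples.slice(i, i + chunkSize));
  }
  return chunks;
}

// 12 seconds of audio at 8 kHz -> three chunks (5s, 5s, 2s)
const chunks = chunkSamples(new Array(8000 * 12).fill(0), 8000);
console.log(chunks.map((c) => c.length)); // [40000, 40000, 16000]
```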

Audio Input Requirements

Format Specifications

  • Format: WAV only
  • Recommended sampling rate: 44.1 kHz (44100 Hz)
  • Minimum sampling rate: 8 kHz
  • Channel: Mono (single channel)

API-Specific Requirements

  • Discrete API: Minimum 4.5 seconds per file, maximum 15 seconds. 5-10 seconds recommended.
  • Asynch API: Minimum 5 seconds, maximum 1 GB
  • Streaming API: Real-time audio chunks (Buffer or ArrayBuffer)

For inquiries about custom microphone specifications or stereo/multi-channel support, please contact us.
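If you want to fail fast before calling the API, a local pre-flight check against the requirements above can be sketched like this. It assumes the canonical 44-byte PCM WAV header (fmt chunk starting at byte 12); real-world files can carry extra chunks, so treat this as a sketch rather than a full WAV parser.

```javascript
// Minimal pre-flight check for the WAV requirements above.
// Assumes the canonical PCM WAV header layout; real files may differ.
function checkWavHeader(buf) {
  if (buf.toString('ascii', 0, 4) !== 'RIFF' ||
      buf.toString('ascii', 8, 12) !== 'WAVE') {
    throw new Error('Not a WAV file');
  }
  const channels = buf.readUInt16LE(22);    // 1 = mono
  const sampleRate = buf.readUInt32LE(24);  // e.g. 44100
  if (channels !== 1) throw new Error('Audio must be mono');
  if (sampleRate < 8000) throw new Error('Sampling rate below 8 kHz minimum');
  return { channels, sampleRate };
}
```

Usage: `checkWavHeader(fs.readFileSync('audio.wav'))` before uploading.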

Installation

npm install valenceai

Configuration

Environment Variables

Create a .env file in your project root:

VALENCE_API_KEY=your_api_key                       # Required
VALENCE_API_BASE_URL=https://api.getvalenceai.com  # Optional
VALENCE_WEBSOCKET_URL=wss://api.getvalenceai.com   # Optional
VALENCE_LOG_LEVEL=info                             # Optional: debug, info, warn, error

Client Configuration

const client = new ValenceClient({
  apiKey: 'your_api_key',           // API key (required)
  baseUrl: 'https://custom.api',    // Custom API endpoint (optional)
  websocketUrl: 'wss://custom.api', // Custom WebSocket endpoint (optional)
  partSize: 5 * 1024 * 1024,        // Upload chunk size (default: 5MB)
  maxRetries: 3,                    // Max retry attempts (default: 3)
  comprehensiveOutput: false        // When false: asynch API returns timestamp, main_emotion, confidence only.
                                    // When true: also includes all_predictions with all emotion confidences (default: false)
});
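For clarity, here is how the documented defaults would resolve when options are omitted. This is a hypothetical sketch of the merge behavior implied by the comments above, not the SDK's actual internals.

```javascript
// Hypothetical sketch of how the documented defaults resolve
// when options are omitted (not the SDK's actual internals).
const DEFAULTS = {
  partSize: 5 * 1024 * 1024,   // 5 MB upload chunks
  maxRetries: 3,
  comprehensiveOutput: false,
};

function resolveConfig(options) {
  if (!options.apiKey) throw new Error('apiKey is required');
  return { ...DEFAULTS, ...options };
}

const config = resolveConfig({ apiKey: 'your_api_key', maxRetries: 5 });
console.log(config.partSize);   // 5242880 (default kept)
console.log(config.maxRetries); // 5 (override wins)
```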

Asynch API Processing Workflow

The Asynch API uses a multi-step process to handle long audio files. Understanding this workflow is crucial for proper implementation:

1. Upload Phase (Client-Side)

When you call client.asynch.upload(filePath):

  • SDK splits your file into parts (5MB chunks by default)
  • Uploads parts in parallel
  • Returns a requestId - This is a tracking identifier, not a completion signal.
  • At this point: File is uploaded to our server, but NOT processed yet.

2. Background Processing (Server-Side)

After upload completes, the server automatically:

  • Checks for new uploads
  • Downloads the audio when a new file is detected
  • Splits audio into 5-second segments
  • Processes audio file
  • Invokes machine learning model for emotion detection
  • Stores results in database
  • Updates status to completed

Processing Time: Varies based on file length and server load. Typically 1-5 seconds per minute of audio. Upload time varies based on your network speed.
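The "1-5 seconds per minute of audio" guidance can be turned into a back-of-envelope estimate like the one below. Purely illustrative arithmetic; actual time depends on server load.

```javascript
// Rough processing-time range based on the 1-5 s per minute guidance above.
// Illustrative only; actual time depends on file length and server load.
function estimateProcessingSeconds(audioSeconds) {
  const minutes = audioSeconds / 60;
  return { min: minutes * 1, max: minutes * 5 };
}

console.log(estimateProcessingSeconds(600)); // 10-minute file -> { min: 10, max: 50 }
```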

3. Results Retrieval (Client-Side)

When you call client.asynch.emotions(requestId):

  • Polls the status endpoint at regular intervals
  • Waits for status progression:
    • initiated → Upload started
    • upload_completed → File uploaded (processing not started)
    • processing → Background processing in progress
    • completed → Results ready
  • Returns emotion timeline when status is completed

Status Values

| Status | Meaning | What's Happening |
|--------|---------|------------------|
| initiated | Upload started | SDK is uploading file in parts |
| upload_completed | Upload finished | File is waiting for background processor |
| processing | Processing active | Server is analyzing audio |
| completed | Results ready | Emotion timeline is available |

Important Notes

  • The requestId is NOT a completion indicator. It's a request tracking ID.
  • upload() completing does not mean results are ready. It means the file is uploaded.
  • Background processing takes time. Processing time varies based on file length and server load.
  • You can check status anytime. The requestId remains valid for retrieving results until databases are cleared (see: DPA for more information on data retention policies).
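The poll-until-complete flow described above can be sketched as a plain loop. Here `fetchStatus` is a hypothetical stand-in for the SDK's status call; `client.asynch.emotions()` already does this for you.

```javascript
// Sketch of the poll-until-complete loop described above.
// `fetchStatus` is a hypothetical stand-in for the SDK's status call.
async function waitForCompletion(fetchStatus, maxTries = 20, intervalMs = 5000) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const { status, result } = await fetchStatus();
    if (status === 'completed') return result;
    if (status === 'failed') throw new Error('Processing failed');
    // initiated / upload_completed / processing: keep waiting
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Not completed after ${maxTries} tries`);
}
```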

Quick Start

import { ValenceClient } from 'valenceai';

// Initialize client
const client = new ValenceClient({ apiKey: 'your_api_key' });

// Discrete API - Quick emotion detection
const result = await client.discrete.emotions('short_audio.wav');
console.log(`Emotion: ${result.main_emotion}`);

// Asynch API - Long audio with timeline
// Step 1: Upload file (returns tracking ID)
const requestId = await client.asynch.upload('long_audio.wav');
// Step 2: Wait for server processing and get results (polls until complete)
const emotions = await client.asynch.emotions(requestId, 30, 10000);
// Step 3: Access emotion data from results
const emotionList = emotions.emotions;  // List of emotion predictions with timestamps

// Get summary statistics
const majority = await client.asynch.majorityEmotion(requestId);  // Most frequent emotion
const counts = await client.asynch.emotionCounts(requestId);  // { happy: 10, sad: 3, ... }

// Streaming API - Real-time audio
const stream = client.streaming.connect();
stream.on('prediction', (data) => console.log(data.main_emotion));
await stream.connect();
stream.sendAudio(audioBuffer);
stream.disconnect();

// Rate Limit API - Monitor usage
const status = await client.rateLimit.getStatus();
const health = await client.rateLimit.getHealth();

API Reference

Discrete API

For short audio files requiring immediate emotion detection.

// Direct file upload
const result = await client.discrete.emotions('audio.wav');

// Upload via in-memory audio array
const result = await client.discrete.emotions([0.17278, 0.23738, 0.37912, ...]);

Response:

{
  emotions: {
    happy: 0.78,
    sad: 0.12,
    angry: 0.08,
    neutral: 0.15
  },
  main_emotion: 'happy'
}

Asynch API

For long audio files with timeline analysis.

Status Progression: initiated → upload_completed → processing → completed

Upload Audio

// Upload file (multipart upload, automatically validates file size)
const requestId = await client.asynch.upload('long_audio.wav');

Note: The SDK automatically validates file size against your rate limit policy before upload. If the file exceeds the maximum allowed size, a FileSizeLimitExceededError is thrown without attempting the upload. Default maximum is 1GB when no rate limit policy is configured.

Get Emotion Results

// Poll for results until processing completes
const result = await client.asynch.emotions(
  requestId,
  20,    // maxTries (default: 20, range: 1-100)
  5000   // intervalMs (default: 5000, range: 1000-60000)
);
// This method waits for server processing to complete
// Returns when status is 'completed'
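With these parameters, the worst-case wait before giving up is simply `maxTries * intervalMs`. The helper below is illustrative arithmetic, with the documented parameter ranges enforced:

```javascript
// Worst-case wait for the polling parameters above: at most
// maxTries * intervalMs before the call gives up. Illustrative only.
function maxWaitMs(maxTries, intervalMs) {
  if (maxTries < 1 || maxTries > 100) throw new Error('maxTries must be 1-100');
  if (intervalMs < 1000 || intervalMs > 60000) throw new Error('intervalMs must be 1000-60000');
  return maxTries * intervalMs;
}

console.log(maxWaitMs(20, 5000)); // 100000 ms = 100 s with the defaults
```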

Response:

{
  emotions: [
    {
      timestamp: 0.5,
      start_time: 0.0,
      end_time: 1.0,
      emotion: 'happy',
      confidence: 0.9,
      all_predictions: { happy: 0.9, sad: 0.1, ... }
    },
    {
      timestamp: 1.5,
      start_time: 1.0,
      end_time: 2.0,
      emotion: 'neutral',
      confidence: 0.85,
      all_predictions: { neutral: 0.85, happy: 0.15, ... }
    }
  ],
  status: 'completed'
}

Note: The all_predictions field is only included when comprehensiveOutput: true is set in the client constructor.

Helper Methods

// Get the most frequently occurring emotion across the entire file
const majority = await client.asynch.majorityEmotion(requestId);
// Returns: "happy"

// Get emotion occurrence counts for the entire file
const counts = await client.asynch.emotionCounts(requestId);
// Returns: { happy: 10, sad: 3, angry: 8, neutral: 9 }
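Conceptually, these helpers amount to a simple reduction over the emotions timeline. The sketch below expresses the idea locally; it is not the SDK's implementation, which computes the values server-side from your `requestId`.

```javascript
// What the helpers compute, expressed as a plain reduction over the
// emotions timeline. A sketch of the idea, not the SDK's implementation.
function emotionCountsLocal(emotions) {
  const counts = {};
  for (const e of emotions) {
    counts[e.emotion] = (counts[e.emotion] || 0) + 1;
  }
  return counts;
}

function majorityEmotionLocal(emotions) {
  const counts = emotionCountsLocal(emotions);
  return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}

const timeline = [
  { emotion: 'happy' }, { emotion: 'happy' }, { emotion: 'sad' },
];
console.log(emotionCountsLocal(timeline));   // { happy: 2, sad: 1 }
console.log(majorityEmotionLocal(timeline)); // 'happy'
```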

Streaming API

For real-time emotion detection on live audio streams.

// Create streaming connection
const stream = client.streaming.connect();

// Register event handlers
stream.on('prediction', (data) => {
  console.log(`Emotion: ${data.main_emotion}`);
});

stream.on('error', (error) => {
  console.error(`Error: ${error.message}`);
});

stream.on('connected', (info) => {
  console.log(`Connected: ${info.session_id}`);
});

// Connect to WebSocket
await stream.connect();

// Send audio chunks (Buffer or ArrayBuffer)
stream.sendAudio(audioBuffer);

// Check connection status
if (stream.connected) {
  console.log('Streaming active');
}

// Disconnect
stream.disconnect();

Prediction Event:

{
  main_emotion: 'happy',
  confidence: 0.87,
  all_predictions: {
    happy: 0.87,
    sad: 0.05,
    angry: 0.03,
    neutral: 0.15
  },
  timestamp: 1706486400000 // Unix timestamp (UTC) in milliseconds
}

The timestamp is a Unix timestamp (UTC) in milliseconds representing when the server generated the prediction.

Rate Limit API

Monitor your API usage and limits.

// Get rate limit status
const status = await client.rateLimit.getStatus();
console.log(status);
// {
//   policy_name: 'standard_policy',
//   limits: {
//     requests_per_second: 10,
//     requests_per_minute: 100,
//     requests_per_hour: 1000,
//     requests_per_day: 10000,
//     burst_limit: 20,
//     max_audio_size_mb: 50,           // Maximum file size in MB
//     max_audio_duration_seconds: 300, // Maximum audio duration
//     max_concurrent_requests: 5
//   },
//   current_usage: {
//     requests_per_second: 2,
//     rejected_per_second: 0,
//     total_audio_size_bytes_per_second: 1048576,
//     requests_per_minute: 15,
//     rejected_per_minute: 0,
//     total_audio_size_bytes_per_minute: 15728640
//     // ... usage for hour and day
//   }
// }

// Check API health
const health = await client.rateLimit.getHealth();
console.log(health);
// { status: 'healthy', timestamp: 1738684800 }

The reset and timestamp values are Unix timestamps (UTC) in seconds.

Error Responses

Discrete API Errors

| HTTP Status | Error Code | Description |
|-------------|------------|-------------|
| 400 | AUDIO_TOO_SHORT | Audio duration below minimum (4.5 seconds). Response includes min_duration_seconds and actual_duration_seconds |
| 400 | AUDIO_TOO_LONG | Audio duration above maximum (15 seconds). Response includes max_duration_seconds and actual_duration_seconds |
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 500 | Server Error | Internal server error |

Asynch API Errors

| HTTP Status | Error Code | Description |
|-------------|------------|-------------|
| 400 | AUDIO_TOO_SHORT | Audio duration below minimum (5 seconds) |
| 400 | FILE_SIZE_LIMIT_EXCEEDED | File size exceeds rate limit policy maximum. Raised before upload attempt |
| 400 | FILE_TOO_LARGE | File exceeds maximum upload size (1 GB). Response includes max_file_size_bytes and actual_file_size_bytes |
| 400 | Bad Request | Invalid request format or parameters |
| 401 | Unauthorized | Invalid or missing API key |
| 404 | Not Found | Request ID not found |
| 500 | Server Error | Internal server error |

Asynch Status Values:

| Status | Meaning |
|--------|---------|
| initiated | Upload in progress |
| upload_completed | Upload finished, awaiting processing |
| processing | Server analyzing audio |
| completed | Results ready |
| failed | Processing failed |
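If you are checking status manually instead of letting `emotions()` poll for you, one way to act on these values is a small dispatch helper. This is hypothetical glue code, not part of the SDK:

```javascript
// One way to act on the Asynch status values above when polling manually.
// Hypothetical helper; client.asynch.emotions() already handles this.
function nextAction(status) {
  switch (status) {
    case 'initiated':
    case 'upload_completed':
    case 'processing':
      return 'wait';   // still in flight
    case 'completed':
      return 'fetch';  // results are ready
    case 'failed':
      return 'error';  // give up and report
    default:
      throw new Error(`Unknown status: ${status}`);
  }
}

console.log(nextAction('processing')); // 'wait'
console.log(nextAction('completed'));  // 'fetch'
```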

Streaming API Errors

| Event | Description |
|-------|-------------|
| error | Server-side error during streaming |
| warning | Non-fatal warning from server |
| connect_error | WebSocket connection failed |
| disconnect | Connection closed |

Rate Limit API Errors

| HTTP Status | Description |
|-------------|-------------|
| 401 | Unauthorized - Invalid API key |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error |

Error Handling

import {
  ValenceClient,
  AudioTooShortError,
  FileSizeLimitExceededError
} from 'valenceai';

try {
  const client = new ValenceClient({ apiKey: 'your_key' });
  const result = await client.discrete.emotions('audio.wav');
} catch (error) {
  if (error instanceof AudioTooShortError) {
    console.error(`Audio too short: ${error.actualDuration}s (min: ${error.minDuration}s)`);
  } else if (error instanceof FileSizeLimitExceededError) {
    console.error(`File too large: ${error.actualSizeMb.toFixed(2)} MB (max: ${error.maxSizeMb} MB)`);
  } else if (error.message.includes('API key')) {
    console.error('Authentication error:', error.message);
  } else if (error.message.includes('File not found')) {
    console.error('File error:', error.message);
  } else if (error.message.includes('API error')) {
    console.error('API error:', error.message);
  } else {
    console.error('Unexpected error:', error.message);
  }
}

Migration from v0.x

Key Changes in v1.0.5

  1. Environment Variable: VALENCE_API_KEY is now the standard (consistent naming across SDKs)
  2. Unified Client: Single ValenceClient class with nested APIs
  3. Streaming API: New WebSocket-based real-time emotion detection
  4. Rate Limiting: New API for monitoring usage
  5. Timeline Data: Asynch API now returns detailed timestamp information
  6. Helper Methods: Asynch API now includes functions for baseline analysis of emotion timeline

Updating Your Code

// Old (v0.x)
import { predictDiscreteAudioEmotion } from 'valenceai';
const result = await predictDiscreteAudioEmotion('file.wav');

// New (v1.0.0)
import { ValenceClient } from 'valenceai';
const client = new ValenceClient({ apiKey: 'your_key' });
const result = await client.discrete.emotions('file.wav');

// New streaming capability
const stream = client.streaming.connect();
stream.on('prediction', callback);
await stream.connect();

Breaking Changes

  • predictDiscreteAudioEmotion() → client.discrete.emotions()
  • uploadAsyncAudio() → client.asynch.upload()
  • getEmotions() → client.asynch.emotions()
  • All methods now require creating a ValenceClient instance first

See CHANGELOG.md for complete migration guide.

TypeScript Support

The SDK includes comprehensive JSDoc annotations for full TypeScript IntelliSense:

import { ValenceClient } from 'valenceai';

const client: ValenceClient = new ValenceClient({ apiKey: 'your_key' });

// Full type inference and autocomplete
const result = await client.discrete.emotions('audio.wav');
// result.main_emotion is typed

Support

License

Private License © 2026 Valence Vibrations, Inc, a Delaware public benefit corporation.