@onereach/voice-sdk

A comprehensive voice-enabled task management SDK with a beautiful firefly-animated orb UI, real-time speech transcription, AI classification, and a RAG knowledge system.

Features

Voice Input

  • Real-time Speech - OpenAI Realtime API with WebSocket streaming
  • Automatic Fallback - Falls back to Whisper API if Realtime fails
  • Voice Activity Detection - Server-side VAD with configurable thresholds

Task Management

  • Actions - Classifiable intents with timeouts and retries
  • Queues - Named execution threads with concurrency control
  • Agents - Task resolvers with priority-based selection
  • Router - Rules engine for routing tasks to queues
  • AI Classification - OpenAI-powered intent recognition

Knowledge System (RAG)

  • Chunking - Multiple strategies (fixed, paragraph, sentence, semantic)
  • Vector Search - Cosine similarity with in-memory store
  • Answer Generation - LLM-synthesized answers from knowledge

UI Components (React)

  • VoiceOrb - Animated voice input button with firefly theme
  • TaskHUD - Heads-up display for current/recent tasks
  • QueuePanel - Queue monitoring and management panel

Electron Integration

  • Global Shortcuts - System-wide keyboard shortcuts
  • System Tray - Menu bar integration
  • Floating Orb Window - Always-on-top voice input
  • AppleScript - macOS automation
  • Input Control - Mouse and keyboard automation

Installation

npm install @onereach/voice-sdk

Quick Start

React Component

import { VoiceOrb } from '@onereach/voice-sdk/react'

function App() {
  return (
    <VoiceOrb
      apiKey="sk-..."
      theme="firefly"  // Organic bioluminescent glow
      onTranscript={(text) => console.log('Heard:', text)}
    />
  )
}

Electron Integration

// main.js
const { app } = require('electron')
const { initialize, showOrb, hideOrb } = require('@onereach/voice-sdk/electron')

app.whenReady().then(() => {
  initialize({
    toggleShortcut: 'CommandOrControl+Shift+O',
    showInTray: true,
  })

  // showOrb() and hideOrb() toggle the always-on-top floating orb window manually
})

Core SDK

import { createVoiceTaskSDK } from '@onereach/voice-sdk'

const sdk = createVoiceTaskSDK({
  apiKey: 'sk-...',
  enableKnowledge: true,
  enableClassification: true,
})

// Register an action
sdk.registerAction({
  name: 'send_email',
  description: 'Send an email to someone',
  parameters: ['recipient', 'subject', 'body'],
})

// Submit transcript for classification
const result = await sdk.submit('send an email to John about the meeting')
console.log(result.action) // 'send_email'
console.log(result.params) // { recipient: 'John', subject: 'the meeting', ... }
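
The queue, agent, and router primitives from the Features section are exposed on the same SDK instance (see SDK Methods below). The sketch here is illustrative only: createQueue and registerAgent exist on the SDK, but the QueueOptions and Agent shapes shown are assumptions, not documented signatures.

// Hypothetical sketch -- the `concurrency` option and the Agent fields are assumptions
const emailQueue = sdk.createQueue('email', { concurrency: 1 })

sdk.registerAgent({
  name: 'email-agent',
  priority: 10, // assumed field: agents are selected by priority
  canHandle: (task) => task.action === 'send_email', // assumed field
  handle: async (task) => {
    // resolve the task here, e.g. hand params to your mail service
    console.log('Sending email with', task.params)
  },
})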

VoiceOrb Themes

Firefly Theme (Default)

Organic bioluminescent glow with gentle floating motion, inspired by fireflies.

  • Green glow in idle state
  • Yellow/gold when actively listening
  • Orange when processing
  • Randomized glow intensity for organic feel
  • Gentle floating animation

<VoiceOrb apiKey="..." theme="firefly" />

Default Theme

Classic purple pulse animation with volume-reactive glow.

<VoiceOrb apiKey="..." theme="default" color="#6366f1" />

API Reference

VoiceOrb Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| apiKey | string | required | OpenAI API key |
| size | number | 80 | Orb size in pixels |
| theme | 'firefly' \| 'default' | 'firefly' | Visual theme |
| color | string | '#6366f1' | Primary color (default theme) |
| showTranscript | boolean | true | Show transcript below orb |
| onTranscript | (text: string) => void | - | Transcript callback |
| onError | (error: Error) => void | - | Error callback |
| preferredBackend | 'realtime' \| 'whisper' | 'realtime' | Speech backend |
| language | string | 'en' | Language code |
| disabled | boolean | false | Disabled state |
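
For example, an orb wired up with the props above (handler bodies and values are illustrative):

import { VoiceOrb } from '@onereach/voice-sdk/react'

function VoiceInput() {
  return (
    <VoiceOrb
      apiKey="sk-..."
      size={96}
      theme="default"
      color="#22c55e"
      showTranscript={false}
      preferredBackend="whisper"
      language="de"
      onTranscript={(text) => console.log('Transcript:', text)}
      onError={(error) => console.error('Voice error:', error)}
    />
  )
}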

SDK Configuration

interface VoiceTaskSDKConfig {
  apiKey: string
  language?: string
  preferredBackend?: 'realtime' | 'whisper'
  enableKnowledge?: boolean
  enableClassification?: boolean
  classifier?: {
    type: 'ai' | 'custom' | 'hybrid'
    model?: string
    temperature?: number
  }
  knowledge?: {
    chunkSize?: number
    chunkOverlap?: number
    embeddingModel?: string
  }
}
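
Putting the options together, a configuration sketch that uses only the fields above (the model names and numeric values are illustrative, not documented defaults):

import { createVoiceTaskSDK } from '@onereach/voice-sdk'

const sdk = createVoiceTaskSDK({
  apiKey: 'sk-...',
  language: 'en',
  preferredBackend: 'realtime',
  enableClassification: true,
  enableKnowledge: true,
  classifier: {
    type: 'ai',
    model: 'gpt-4o-mini', // illustrative, not a documented default
    temperature: 0,
  },
  knowledge: {
    chunkSize: 800, // illustrative, not a documented default
    chunkOverlap: 100,
    embeddingModel: 'text-embedding-3-small', // illustrative
  },
})

// Microphone control via the voice sub-API listed under SDK Methods below
await sdk.voice.start()
// ...
await sdk.voice.stop()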

SDK Methods

interface VoiceTaskSDK {
  // Voice
  voice: {
    start(): Promise<void>
    stop(): Promise<void>
    getState(): VoiceState
  }
  
  // Actions & Classification
  registerAction(action: Action): void
  submit(transcript: string): Promise<ClassificationResult>
  
  // Queues
  createQueue(name: string, options?: QueueOptions): Queue
  
  // Agents
  registerAgent(agent: Agent): void
  
  // Knowledge (RAG)
  addKnowledge(source: KnowledgeSource): Promise<string>
  searchKnowledge(query: string): Promise<SearchResult[]>
  askKnowledge(question: string): Promise<Answer>
  
  // Lifecycle
  destroy(): void
}
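
A sketch of the knowledge (RAG) methods; addKnowledge, searchKnowledge, and askKnowledge come from the interface above, but the KnowledgeSource shape is an assumption:

// Hypothetical KnowledgeSource shape -- the `type` and `content` fields are assumptions
const sourceId = await sdk.addKnowledge({
  type: 'text',
  content: 'Support is available 9am-5pm CET, Monday through Friday.',
})

const hits = await sdk.searchKnowledge('when is support available?')
console.log(hits[0]) // highest cosine-similarity chunk

const answer = await sdk.askKnowledge('When can I reach support?')
console.log(answer) // LLM-synthesized answer from the stored knowledge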

Electron Handlers

The SDK provides system-level handlers for Electron apps:

import { registerHandlers } from '@onereach/voice-sdk/electron'

// Available handlers:
// - activeApp: Get active application info
// - applescript: Run AppleScript (macOS)
// - filesystem: File operations
// - keyboard: Keyboard automation
// - mouse: Mouse automation
// - screenshot: Screen capture
// - spotlight: Spotlight search (macOS)
// - terminal: Terminal commands
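
The registration call itself is not documented beyond the import above; the sketch below assumes registerHandlers can simply be invoked at startup with no arguments:

// main.js -- assumption: registerHandlers() wires the handlers listed above into the app
const { app } = require('electron')
const { initialize, registerHandlers } = require('@onereach/voice-sdk/electron')

app.whenReady().then(() => {
  initialize({ toggleShortcut: 'CommandOrControl+Shift+O', showInTray: true })
  registerHandlers()
})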

Architecture

@onereach/voice-sdk/
├── src/
│   ├── index.ts           # Main entry point
│   ├── createSDK.ts       # SDK factory
│   ├── core/              # Core components
│   │   ├── actionStore    # Action registry
│   │   ├── queueManager   # Queue management
│   │   ├── agentRegistry  # Agent registry
│   │   ├── taskStore      # Task state
│   │   ├── router         # Task routing
│   │   ├── dispatcher     # Task execution
│   │   └── hooks          # Lifecycle hooks
│   ├── classifier/        # AI classification
│   ├── voice/             # Voice services
│   ├── knowledge/         # RAG system
│   ├── ui/react/          # React components
│   └── electron/          # Electron integration

License

MIT - OneReach.ai