interview-widget

v2.0.4

Advanced React interview widget with STT, TTS, camera access, and ML-powered analysis

Interview Widget

A modern React interview experience widget with built‑in timer flow, STT (speech‑to‑text), and TTS (text‑to‑speech). It can be embedded via NPM or a CDN and configured through a single provider.

Features

  • Video capture with camera integration
  • Responsive design with mobile support
  • TypeScript support with full type definitions
  • CDN-ready for easy integration into any website
  • Q&A structured interview flow
  • Advanced Timer System with phase management
  • Text-to-Speech integration for reading questions aloud
  • Speech-to-Text integration for voice answers
  • Automatic phase transitions (thinking → answering → editing)
  • Time validation before starting new questions

Quick Start

1) Environment Setup

The Interview Widget uses encryption for session storage integrity. Set up your encryption secret based on your environment:

Vite React

# .env
VITE_IW_SECRET=your-secure-encryption-seed-here

Next.js

# .env.local
NEXT_PUBLIC_IW_SECRET=your-secure-encryption-seed-here

Vanilla JS / Custom Framework

<script>
  // Define the secret before loading your widget
  window.__IW_SECRET__ = "your-secure-encryption-seed-here";
</script>

2) Install (React apps)

npm install interview-widget

Minimal usage:

import React from "react";
import { InterviewWidget, InterviewWidgetProvider } from "interview-widget";
import "interview-widget/style.css";

export default function App() {
  return (
    <InterviewWidgetProvider
      config={{
        api: {
          baseUrl: "/api" // Your backend endpoint
          authToken: getTokenFromYourAuthSystem(), // Get from secure backend
        },
        interview: {
          stt: {
            provider: "groq",
            model: "whisper-large-v3-turbo",
            language: "en",
          },
          tts: { provider: "piper" },
          timers: {
            thinkingDuration: 30,
            answeringDuration: 120,
            editingDuration: 30,
          },
        },
      }}
    >
      <InterviewWidget
        interviewId="your_interview_id"
        title="Interview for JS developer"
        brandName="your_brand_name"
        className="iw-rounded-none iw-shadow-none"
        onInterviewEnd={() => {
          console.log("🎉🎉 Interview ended 🎉🎉");
        }}
      />
    </InterviewWidgetProvider>
  );
}
3) CDN Usage (UMD)

For plain HTML pages without a bundler, load React, ReactDOM, the widget styles, and the UMD bundle from a CDN:

<!-- React 18 UMD -->
<script
  crossorigin
  src="https://unpkg.com/react@18/umd/react.production.min.js"
></script>
<script
  crossorigin
  src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"
></script>

<!-- Styles + Widget UMD -->
<link
  rel="stylesheet"
  href="https://unpkg.com/interview-widget@latest/dist/widget.css"
/>
<script src="https://unpkg.com/interview-widget@latest/dist/widget.umd.js"></script>

<div id="interview-widget-container"></div>
<script>
  document.addEventListener("DOMContentLoaded", function () {
    const { InterviewWidget, InterviewWidgetProvider } = window.InterviewWidget;

    const app = React.createElement(
      InterviewWidgetProvider,
      {
        config: {
          api: { baseUrl: "/api" },
          interview: {
            timers: {
              thinkingDuration: 30,
              answeringDuration: 120,
              editingDuration: 30,
            },
            stt: {
              provider: "groq",
              model: "whisper-large-v3-turbo",
              language: "en",
            },
            tts: { provider: "piper" },
          },
        },
      },
      React.createElement(InterviewWidget, {
        interviewId: "your_interview_id",
        brandName: "your_brand_name",
        title: "Developer Interview",
        onInterviewEnd: () => {},
      })
    );

    ReactDOM.createRoot(
      document.getElementById("interview-widget-container")
    ).render(app);
  });
</script>

Public API

InterviewWidget Props

| Prop                  | Type       | Default     | Description                                                                 |
| --------------------- | ---------- | ----------- | --------------------------------------------------------------------------- |
| interviewId           | string     | Required    | Unique interview identifier used for backend API calls                       |
| title                 | string     | "Interview" | Title displayed in the interview header                                      |
| brandName             | string     | "Novara"    | Brand name displayed in the widget header                                    |
| onInterviewEnd        | () => void | undefined   | Called when the interview completes or the user exits                        |
| onInterviewDisqualify | () => void | undefined   | Called when the interview is disqualified (e.g., due to cheating detection)  |
| className             | string     | ""          | Additional CSS classes applied to the outer widget container                 |

InterviewWidgetProvider

Wrap your app (or just the widget) in InterviewWidgetProvider to supply configuration. See the full config reference below.

<InterviewWidgetProvider config={{ api: { baseUrl: "/api" } }}>
  <InterviewWidget interviewId="your_interview_id" />
</InterviewWidgetProvider>

Configuration reference (Provider)

Pass this object to InterviewWidgetProvider via the config prop. Tables list all keys, types, and defaults.

Top-level

| Prop      | Type   | Default   | Description                                    |
| --------- | ------ | --------- | ---------------------------------------------- |
| api       | object | see below | Backend/API configuration                      |
| ui        | object | see below | UI customization tokens                        |
| interview | object | see below | Interview behavior settings (timers, STT, TTS) |

API

| Prop        | Type   | Default   | Description                                            |
| ----------- | ------ | --------- | ------------------------------------------------------ |
| baseUrl     | string | "/api"    | Base URL for backend endpoints                         |
| authToken   | string | undefined | Optional bearer token appended as Authorization header |
| retryConfig | object | see below | Retry policy for fetch calls                           |

Retry Config

| Prop      | Type                     | Default       | Description                                |
| --------- | ------------------------ | ------------- | ------------------------------------------ |
| attempts  | number                   | 3             | Number of retry attempts                   |
| backoff   | "fixed" \| "exponential" | "exponential" | Backoff strategy                           |
| baseDelay | number (ms)              | 1000          | Base delay between retries in milliseconds |
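
As a rough sketch of the policy those keys describe (illustrative only; RetryConfig and retryFetch are not exported by the package and the widget's internal client may differ):

// Illustrative retry helper mirroring the retryConfig keys above.
interface RetryConfig {
  attempts?: number;                 // default 3
  backoff?: "fixed" | "exponential"; // default "exponential"
  baseDelay?: number;                // base delay in milliseconds, default 1000
}

async function retryFetch(
  url: string,
  init: RequestInit = {},
  { attempts = 3, backoff = "exponential", baseDelay = 1000 }: RetryConfig = {}
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      const res = await fetch(url, init);
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err;
    }
    // "fixed" waits baseDelay each time; "exponential" doubles the delay per attempt.
    const delay = backoff === "fixed" ? baseDelay : baseDelay * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw lastError;
}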

UI

| Prop         | Type         | Default   | Description                            |
| ------------ | ------------ | --------- | -------------------------------------- |
| baseColor    | string (hex) | "#3B82F6" | Primary brand color used by the widget |
| borderRadius | string (CSS) | "8px"     | Global corner radius for components    |

Interview

| Prop       | Type   | Default   | Description                                |
| ---------- | ------ | --------- | ------------------------------------------ |
| timers     | object | see below | Per-phase and global timing configuration  |
| stt        | object | see below | Speech-to-Text provider settings           |
| tts        | object | see below | Text-to-Speech provider settings           |
| proctoring | object | see below | Proctoring settings and cheating detection |

Timers

| Prop                       | Type   | Default | Description                                                |
| -------------------------- | ------ | ------- | ---------------------------------------------------------- |
| thinkingDuration           | number | 30s     | Timebox for the Thinking phase                             |
| answeringDuration          | number | 120s    | Timebox for the Answering phase                            |
| editingDuration            | number | 30s     | Timebox for the Editing phase                              |
| totalInterviewDuration     | number | 600s    | Overall interview time cap                                 |
| minimumTimeForNextQuestion | number | 120s    | Minimum time required to allow starting the next question |
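
For example, a timers block that sets every key above to its default (all values in seconds) looks like this:

// Passed as config.interview.timers; all values are in seconds.
const timers = {
  thinkingDuration: 30,
  answeringDuration: 120,
  editingDuration: 30,
  totalInterviewDuration: 600,
  minimumTimeForNextQuestion: 120,
};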

STT (Speech-To-Text)

| Prop     | Type                               | Default                  | Description                            |
| -------- | ---------------------------------- | ------------------------ | -------------------------------------- |
| provider | "groq" \| "deepgram"               | "groq"                   | STT vendor to use                      |
| model    | "whisper-large-v3-turbo" \| string | "whisper-large-v3-turbo" | STT model identifier                   |
| language | string                             | "en"                     | Language code passed to the STT engine |

TTS (Text-To-Speech)

| Prop     | Type    | Default | Description       |
| -------- | ------- | ------- | ----------------- |
| provider | "piper" | "piper" | TTS vendor to use |

Proctoring

| Prop                | Type    | Default | Description                                               |
| ------------------- | ------- | ------- | --------------------------------------------------------- |
| enabled             | boolean | true    | Enable/disable proctoring and cheating detection features |
| gazeAnalysisEnabled | boolean | true    | Enable/disable gaze tracking and analysis                 |
| showControls        | boolean | false   | Show/hide proctoring control panel                        |
| showEngagementBar   | boolean | true    | Show/hide engagement metrics bar                          |
| showLandmarks       | boolean | false   | Show/hide facial landmark visualization                   |
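
For instance, to keep proctoring enabled but turn off gaze analysis, a sketch of config.interview.proctoring could look like this (all other values match the defaults above):

// Passed as config.interview.proctoring; only gazeAnalysisEnabled differs from the defaults.
const proctoring = {
  enabled: true,
  gazeAnalysisEnabled: false,
  showControls: false,
  showEngagementBar: true,
  showLandmarks: false,
};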

How it works

The widget composes several parts:

  • TimerService controls phases and timeboxes
  • TTS speaks the question in the Reading phase
  • STT records and transcribes the answer in the Answering/Transcribing phases
  • A resilient API client hits your backend to fetch the next question and submit answers

Backend API Interface (Version 2)

The Interview Widget Version 2 introduces a more structured and resource-oriented API. All V2 endpoints are prefixed with /v2.

1. Get Interview Configuration

GET /v2/interviews/{interviewId}/config

Fetches the configuration and metadata for a specific interview session.
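
If you need to call this endpoint outside the widget, a minimal fetch sketch using the provider's api.baseUrl and optional authToken looks like this; the response fields depend on your backend and are not reproduced here:

// Plain-fetch sketch of the config endpoint; the concrete response shape is backend-specific.
async function getInterviewConfig(
  baseUrl: string,
  interviewId: string,
  authToken?: string
) {
  const res = await fetch(`${baseUrl}/v2/interviews/${interviewId}/config`, {
    headers: authToken ? { Authorization: `Bearer ${authToken}` } : undefined,
  });
  if (!res.ok) throw new Error(`Config request failed: ${res.status}`);
  return res.json();
}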

2. Generate Next Question

POST /v2/interviews/{interviewId}/next-question

Request Body

{
  "interview_id": "string",
  "is_interview_done": false
}

Response Body

{
  "success": true,
  "data": {
    "interview_id": "string",
    "qna_id": "string",
    "question": "What is your experience with React?",
    "question_audio_data_base64": null,
    "audio_length_in_milliseconds": 0,
    "estimated_answering_duration": "00:00:30"
  }
}
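
For reference, the request and response bodies above map roughly to the following TypeScript shapes (illustrative typings derived from the JSON; they are not exported by the package):

// Illustrative typings for the next-question endpoint, derived from the JSON above.
interface NextQuestionRequest {
  interview_id: string;
  is_interview_done: boolean;
}

interface NextQuestionResponse {
  success: boolean;
  data: {
    interview_id: string;
    qna_id: string;
    question: string;
    question_audio_data_base64: string | null;
    audio_length_in_milliseconds: number;
    estimated_answering_duration: string; // e.g. "00:00:30"
  };
}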

3. Submit Answer

POST /v2/interviews/{interviewId}/submit-answer

Request Body

{
  "qna_id": "string",
  "answer_text": "string"
}
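
Submitting an answer is a single POST with that body; a minimal sketch (the helper name and error handling are illustrative):

// Minimal sketch of posting an answer to the submit-answer endpoint.
async function submitAnswer(
  baseUrl: string,
  interviewId: string,
  body: { qna_id: string; answer_text: string }
) {
  const res = await fetch(`${baseUrl}/v2/interviews/${interviewId}/submit-answer`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Submit failed: ${res.status}`);
  return res.json();
}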

4. Screenshot Upload (Signed URL)

POST /v2/interviews/{interviewId}/assets/upload-url

Request Body

{
  "filename": "string",
  "mime_type": "image/jpeg",
  "asset_type": "screenshot"
}

5. Confirm Asset Upload

POST /v2/interviews/assets/{assetId}/confirm

Called after successfully uploading the file to the signed URL.
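
Putting endpoints 4 and 5 together, the screenshot flow is: request a signed URL, upload the file to it, then confirm. The sketch below assumes the upload-url response contains upload_url and asset_id fields; those names are assumptions, so adjust them to your backend's actual payload:

// Two-step upload sketch. The response field names (upload_url, asset_id)
// are assumptions for illustration only.
async function uploadScreenshot(
  baseUrl: string,
  interviewId: string,
  file: Blob,
  filename: string
) {
  // 1) Ask the backend for a signed upload URL.
  const urlRes = await fetch(
    `${baseUrl}/v2/interviews/${interviewId}/assets/upload-url`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        filename,
        mime_type: "image/jpeg",
        asset_type: "screenshot",
      }),
    }
  );
  const { upload_url, asset_id } = await urlRes.json();

  // 2) PUT the file to the signed URL.
  await fetch(upload_url, {
    method: "PUT",
    headers: { "Content-Type": "image/jpeg" },
    body: file,
  });

  // 3) Confirm the upload so the backend can mark the asset as available.
  await fetch(`${baseUrl}/v2/interviews/assets/${asset_id}/confirm`, {
    method: "POST",
  });
}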


Changelog

[v2.0.0] - 2025-12-22

Added

  • Added API_VERSIONS support (v1, v2).
  • New endpoint: GET /v2/interviews/{interviewId}/config for fetching session configuration.
  • New endpoint: POST /v2/interviews/{interviewId}/submit-answer for asynchronous answer submission.
  • New endpoint: POST /v2/interviews/assets/{assetId}/confirm to confirm asset uploads.
  • New endpoint: POST /v2/interviews/{interviewId}/transcribe-answer for STT processing.
  • Implementation of confirmScreenshotUpload in InterviewAPI.
  • Support for asset_type in screenshot upload requests.

Changed

  • Refactored all endpoints to follow a resource-oriented structure under /v2/interviews/.
  • Question Generation: Moved from /v1/questions/next to /v2/interviews/{interviewId}/next-question.
  • Payload Change: Question generation payload simplified to only interview_id and is_interview_done.
  • Screenshot Upload: Moved from /v1/interview/{interviewId}/content to /v2/interviews/{interviewId}/assets/upload-url.
  • Updated InterviewAPI to use the new API_ENDPOINTS configuration.

Removed

  • Removed legacy fields (qna_id, question, answer, answer_duration) from the generateQuestion request body in V2 (as answer submission is now a separate step).

Timer phases

idle → fetching_question → reading_question → thinking → answering → transcribing → editing → submitting → completed

You’ll see a Start button, a Next Phase button during active phases, a live countdown, and a completion screen.
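
If you track these phases in your own code, a union type mirroring the list above could look like this (illustrative; not necessarily exported by the package):

// Illustrative union of the timer phases listed above.
type InterviewPhase =
  | "idle"
  | "fetching_question"
  | "reading_question"
  | "thinking"
  | "answering"
  | "transcribing"
  | "editing"
  | "submitting"
  | "completed";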