
@simplifiediq/proctoring

v1.1.2

Browser-based exam proctoring SDK — face tracking, sound monitoring, live streaming, and browser lockdown.

@simplifiediq/proctoring

A browser-based exam proctoring SDK for real-time face tracking, sound monitoring, live video streaming, and browser lockdown — powered by SimplifiedIQ.


Features

  • 🎯 Face Tracking — Detect missing faces, multiple faces, gaze direction, and camera obstruction using TensorFlow.js
  • 🔊 Sound Monitoring — Ambient noise calibration, voice-activity detection, and automatic audio recording
  • 📹 Live Streaming — Real-time video/audio proctoring via Amazon IVS with screen sharing support
  • 🔒 Browser Lockdown — Fullscreen enforcement, clipboard blocking, tab closure, mouse-leave detection, and visibility-change monitoring
  • 🧩 Extension Bridge — Deeper OS-level restrictions like multi-screen detection, lockdown mode, device snapshots, and background application control (requires companion extension)

Prerequisites

  1. SimplifiedIQ Account — Sign up at simplifiediq.io
  2. Integration Feature — Your account must be configured to use the Integration feature. Contact your SimplifiedIQ admin or support to enable this.
  3. Public API Key — Obtain your public API key from the SimplifiedIQ Dashboard (Settings → Integrations → API Keys). This key is passed to the SDK constructor.

Installation

npm install @simplifiediq/proctoring

Peer Dependencies

This package declares TensorFlow.js and Amazon IVS as peer dependencies. npm 7 and later (bundled with Node 15+) installs peer dependencies automatically, so no extra steps are needed.

If you're on an older npm or a package manager that doesn't auto-install peer dependencies (for example, Yarn Classic, or pnpm with auto-install-peers disabled), install them manually:

npm install @tensorflow/tfjs-core @tensorflow/tfjs-backend-webgl @tensorflow-models/face-detection @mediapipe/face_detection amazon-ivs-web-broadcast

Quick Start

import { SimplifiedProctoring } from "@simplifiediq/proctoring";

// 1. Create a client with your API key and callbacks
const proctor = new SimplifiedProctoring({
  apiKey: "your-api-key",
  onNoFaceDetected: () => console.warn("No face detected!"),
  onNoFaceDetectedStop: () => console.log("Face detected again."),
  onMultipleFaceDetected: () => console.warn("Multiple faces detected!"),
  onSoundError: (msg, time) => console.warn(`Sound event: ${msg} @ ${time}`),
  onSaveSoundRecording: (blob, time) => {
    // Upload the audio blob to your server
  },
});

// 2. Initialize — validates the API key and fetches feature config
await proctor.init();

// 3. Start face tracking (requires a <video> element)
const video = document.getElementById("webcam") as HTMLVideoElement;
await proctor.startFaceTracking(video);

// 4. Start sound monitoring
await proctor.startAudioMonitoring();

// 5. Clean up when done
proctor.stopFacialTracking();
proctor.stopAudioMonitoring();

Framework Examples

React

import { useEffect, useRef, useState } from "react";
import { SimplifiedProctoring } from "@simplifiediq/proctoring";

export default function ProctoredExam() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const [proctor] = useState(
    () =>
      new SimplifiedProctoring({
        apiKey: "your-api-key",
        mirrored: true, // Default: aligns eye-tracking with mirrored video stream
        onNoFaceDetected: () => console.warn("No face detected"),
        onNoFaceDetectedStop: () => console.log("Face detected again"),
        onMultipleFaceDetected: () => console.warn("Multiple faces!"),
        onSoundError: (msg) => console.warn("Sound:", msg),
        onSaveSoundRecording: (blob) => {
          // Upload blob to your server
        },
      })
  );

  useEffect(() => {
    let mounted = true;

    async function start() {
      await proctor.init();
      if (!mounted || !videoRef.current) return;

      // Pass video and optional canvas for green face boxes
      await proctor.startFaceTracking(videoRef.current, canvasRef.current || undefined);
      await proctor.startAudioMonitoring();
    }

    start();

    return () => {
      mounted = false;
      proctor.stopFacialTracking();
      proctor.stopAudioMonitoring();
    };
  }, [proctor]);

  return (
    <div style={{ position: "relative", width: 640, height: 480 }}>
      {/* Mirror the video visually */}
      <video 
        ref={videoRef} 
        autoPlay 
        muted 
        style={{ width: "100%", height: "100%", transform: "scaleX(-1)" }} 
      />
      {/* Overlay canvas for green proctoring boxes */}
      <canvas 
        ref={canvasRef} 
        style={{ position: "absolute", top: 0, left: 0, width: "100%", height: "100%", pointerEvents: "none" }} 
      />
    </div>
  );
}

Visual Feedback & Mirroring

Face Boxes

By default, the SDK processes face detection on a hidden background canvas. If you want to show the green detection boxes to the user (e.g., for a practice session), provide a <canvas> element to startFaceTracking.

Ensure the canvas is placed directly on top of the video element (using absolute positioning) and has pointer-events: none so it doesn't block interactions.

Mirroring (Flip)

Most webcam applications mirror the video (horizontally flip it) to provide a more natural user experience. If you mirror your video via CSS (e.g., transform: scaleX(-1)), you should set mirrored: true in the SDK config (this is the default).

When mirrored: true is set:

  1. Detection Boxes are automatically flipped to match your mirrored video.
  2. Eye Tracking logic is adjusted so that looking "Left" from the user's perspective is correctly identified, even if the video is flipped.

To mirror the video yourself, apply a CSS transform to your <video> element:

video {
  transform: scaleX(-1);
}

The SDK's internal face detection logic automatically accounts for this visual flip to ensure tracking remains accurate.
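The SDK performs this coordinate adjustment internally. Purely for illustration, mirroring a detection box's x-coordinate amounts to reflecting it across the vertical center of the frame; the helper below is hypothetical and not part of the SDK:

```typescript
// Hypothetical illustration of how a detection box is reflected when the
// video is mirrored. `videoWidth` is the width of the video frame in pixels.
interface Box {
  x: number;      // left edge of the detected face box
  width: number;  // box width
}

function mirrorBoxX(box: Box, videoWidth: number): number {
  // Reflect the box across the vertical center: the new left edge is the
  // distance from the old *right* edge to the right side of the frame.
  return videoWidth - (box.x + box.width);
}

// A box hugging the left edge of a 640px frame ends up hugging the right edge.
console.log(mirrorBoxX({ x: 0, width: 100 }, 640)); // 540
```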

Vue 3

<script setup lang="ts">
import { ref, onMounted, onBeforeUnmount } from "vue";
import { SimplifiedProctoring } from "@simplifiediq/proctoring";

const videoEl = ref<HTMLVideoElement>();

const proctor = new SimplifiedProctoring({
  apiKey: "your-api-key",
  onNoFaceDetected: () => console.warn("No face detected"),
  onNoFaceDetectedStop: () => console.log("Face detected again"),
  onMultipleFaceDetected: () => console.warn("Multiple faces!"),
  onSoundError: (msg) => console.warn("Sound:", msg),
  onSaveSoundRecording: (blob) => {
    // Upload blob to your server
  },
});

onMounted(async () => {
  await proctor.init();
  if (!videoEl.value) return;

  await proctor.startFaceTracking(videoEl.value);
  await proctor.startAudioMonitoring();
});

onBeforeUnmount(() => {
  proctor.stopFacialTracking();
  proctor.stopAudioMonitoring();
});
</script>

<template>
  <video ref="videoEl" autoplay muted style="width: 320px" />
</template>

Angular

// proctored-exam.component.ts
import { Component, ViewChild, ElementRef, OnInit, OnDestroy } from "@angular/core";
import { SimplifiedProctoring } from "@simplifiediq/proctoring";

@Component({
  selector: "app-proctored-exam",
  template: `<video #webcam autoplay muted [style.width.px]="320"></video>`,
})
export class ProctoredExamComponent implements OnInit, OnDestroy {
  @ViewChild("webcam", { static: true }) videoRef!: ElementRef<HTMLVideoElement>;

  private proctor = new SimplifiedProctoring({
    apiKey: "your-api-key",
    onNoFaceDetected: () => console.warn("No face detected"),
    onNoFaceDetectedStop: () => console.log("Face detected again"),
    onMultipleFaceDetected: () => console.warn("Multiple faces!"),
    onSoundError: (msg) => console.warn("Sound:", msg),
    onSaveSoundRecording: (blob) => {
      // Upload blob to your server
    },
  });

  async ngOnInit() {
    await this.proctor.init();
    await this.proctor.startFaceTracking(this.videoRef.nativeElement);
    await this.proctor.startAudioMonitoring();
  }

  ngOnDestroy() {
    this.proctor.stopFacialTracking();
    this.proctor.stopAudioMonitoring();
  }
}

API Reference

Initialization

constructor(config: ProctoringConfig)

Creates a new proctoring client.

const proctor = new SimplifiedProctoring({
  apiKey: "your-api-key",
  onFaceError: (message, frame, time) => { /* ... */ },
});

init(): Promise<void>

Initializes the SDK by verifying the API key and fetching the organization's feature permissions. Must be called before using any other method.

await proctor.init();

Throws if the API key is invalid.

isInitialized(): boolean

Returns true if init() has completed successfully.

getProctoringFeatures(): SchoolFeatureConfig | undefined

Returns the organization's feature configuration (which proctoring features are enabled).
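A common pattern is to gate which monitors you start on this config. The exact shape of SchoolFeatureConfig isn't reproduced in this README, so the field names below (eyesAndFaceTracking, soundTracking) are taken from the feature flags mentioned elsewhere in this document and should be checked against the exported type:

```typescript
// Sketch: decide which monitors to start from the org's feature config.
// Field names follow the feature flags named in this README; verify them
// against the real SchoolFeatureConfig type.
interface FeatureConfigLike {
  eyesAndFaceTracking?: boolean;
  soundTracking?: boolean;
}

function monitorsToStart(features: FeatureConfigLike | undefined): string[] {
  const monitors: string[] = [];
  if (features?.eyesAndFaceTracking) monitors.push("face");
  if (features?.soundTracking) monitors.push("audio");
  return monitors;
}

console.log(monitorsToStart({ eyesAndFaceTracking: true })); // ["face"]
console.log(monitorsToStart(undefined)); // []
```

You would then call, for example, `startFaceTracking` only when `"face"` is in the result of `monitorsToStart(proctor.getProctoringFeatures())`.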


Face Tracking

startFaceTracking(videoElement: HTMLVideoElement, canvas?: HTMLCanvasElement): Promise<void>

Starts real-time facial detection using the user's webcam. It automatically requests camera permissions, attaches the media stream to the video element, and begins detection. Pass the optional canvas to render the green detection boxes (see Visual Feedback & Mirroring).

const video = document.getElementById("webcam") as HTMLVideoElement;
await proctor.startFaceTracking(video);

Detects:

| Event | Start Callback | Stop Callback |
|---|---|---|
| Camera blocked / dark room | onCameraBlocked | onCameraBlockedStop |
| No face in frame | onNoFaceDetected | onNoFaceDetectedStop |
| Multiple faces | onMultipleFaceDetected | onMultipleFaceDetectedStop |
| Face turned away | onFaceAwayFromScreen | onFaceAwayFromScreenStop |
| Eyes looking away for 5s+ | onEyeballsDisplaced | onEyeballsDisplacedStop |

Requires the eyesAndFaceTracking feature to be enabled for your organization.

stopFacialTracking(): void

Stops facial tracking and releases camera resources.


Sound Monitoring

startAudioMonitoring(): Promise<void>

Initializes the microphone, calibrates ambient noise, and begins monitoring for voice activity. When sound above the ambient baseline is detected, recording starts automatically and the audio blob is delivered via onSaveSoundRecording.

await proctor.startAudioMonitoring();

Requires the soundTracking feature to be enabled for your organization.

stopAudioMonitoring(): void

Stops audio monitoring and releases the microphone.


Live Streaming

All live streaming methods require the canSchoolDoLiveProctoring and realTimeViewingEnabled features to be enabled.

createStageArn(body: CreateStageArnInput): Promise<CreateStageArnResponse>

Creates an Amazon IVS Stage ARN for a proctoring session.

const stage = await proctor.createStageArn({
  proctoringAssessmentId: "assessment-123",
  saveRealTimeViewing: true,
});

createParticipantToken(body: CreateParticipantTokenInput): Promise<CreateParticipantTokenResponse>

Generates participant and chat tokens for joining a live stage.

const tokens = await proctor.createParticipantToken({
  proctoringAssessmentId: "assessment-123",
  participantId: "student-456",
});

joinLiveStage(options: JoinStageOptions): Promise<void>

Connects the participant to a live video/audio stage.

await proctor.joinLiveStage({
  token: tokens.ivsParticipantToken,
  audioDeviceId: "mic-device-id",
  videoDeviceId: "camera-device-id",
  onConnectionStateChange: (connected) => console.log("Connected:", connected),
  onParticipantJoined: ({ participant, streams }) => { /* render remote video */ },
  onError: (err) => console.error(err),
});
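Putting the three live-streaming calls together, a session bootstrap might look like the sketch below. The client is passed in as a parameter so the sequence is easy to follow and to test; the `ivsParticipantToken` field name comes from the joinLiveStage example above, and the narrow interface here is an assumption, not the SDK's full surface:

```typescript
// Sketch of the live-streaming call sequence: create a stage, mint a
// participant token, then join. Works with any object exposing the three
// methods shown in this section.
interface LiveClient {
  createStageArn(body: {
    proctoringAssessmentId: string;
    saveRealTimeViewing: boolean;
  }): Promise<unknown>;
  createParticipantToken(body: {
    proctoringAssessmentId: string;
    participantId: string;
  }): Promise<{ ivsParticipantToken: string }>;
  joinLiveStage(options: {
    token: string;
    audioDeviceId: string;
    videoDeviceId: string;
    onError: (err: unknown) => void;
  }): Promise<void>;
}

async function startLiveSession(
  client: LiveClient,
  assessmentId: string,
  participantId: string,
  devices: { audioDeviceId: string; videoDeviceId: string }
): Promise<void> {
  // 1. Create the IVS stage for this assessment.
  await client.createStageArn({
    proctoringAssessmentId: assessmentId,
    saveRealTimeViewing: true,
  });

  // 2. Mint tokens for this participant.
  const tokens = await client.createParticipantToken({
    proctoringAssessmentId: assessmentId,
    participantId,
  });

  // 3. Join the stage with the participant token.
  await client.joinLiveStage({
    token: tokens.ivsParticipantToken,
    ...devices,
    onError: (err) => console.error("Live stage error:", err),
  });
}
```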

leaveLiveStage(): Promise<void>

Disconnects from the live stage.

muteLiveMic(muted: boolean): Promise<void>

Mutes or unmutes the local microphone.

switchLiveCamera(cameraDeviceId: string): Promise<void>

Switches to a different camera during the live session.

startLiveScreenShare(): Promise<void>

Starts sharing the participant's screen on the stage.

Requires canSchoolDoScreenSharing and screenSharingEnabled to be enabled.

stopLiveScreenShare(cameraDeviceId: string): Promise<void>

Stops screen sharing and reverts to the specified camera.

updateLiveLocalMediaTracks(videoDeviceId: string, audioDeviceId: string, videoType: DeviceType): Promise<void>

Hot-swaps video and audio input devices without leaving the stage.

await proctor.updateLiveLocalMediaTracks(
  "new-camera-id",
  "new-mic-id",
  DeviceType.CAMERA
);

Browser Monitoring

isExtensionInstalled(): boolean

Returns true if the SimplifiedIQ proctoring browser extension is detected.

enableProctoringExtension(): void

Activates the proctoring extension. Throws if the extension is not installed.

disableProctoringExtension(): void

Deactivates the proctoring extension.

closeOtherTabs(): void

Closes all browser tabs except the current one via the extension.

Requires the closeOpenTabs feature and the browser extension.

disableRightClickAndClipboard(): void

Disables right-click context menu and clipboard operations via the extension.

enableLockdownMode(): void

Activates the OS-level lockdown mode via the extension. Prevents exiting the browser window and disables system-level shortcuts.

Requires the lockdownApp feature and the browser extension.

checkMultiScreenDetection(): void

Triggers a check to see if the user has multiple monitors connected.

Requires the onlyOneScreen feature and the browser extension.

takeDeviceSnapshot(): void

Requests a full-screen screenshot from the operating system via the extension.

Requires the deviceSnapshot feature and the browser extension.

disableExternalDevices(): void

Requests the OS to disable unauthorized external devices (e.g., secondary keyboards/mice).

Requires the disableExternalDevices feature and the browser extension.

stopBackgroundApps(): void

Requests the OS to suspend or close non-essential background applications.

Requires the stopAllBackgroundApps feature and the browser extension.

disableCopyAndPaste(): void

Blocks copy, cut, and paste events on the document.

enableCopyAndPaste(): void

Re-enables copy, cut, and paste events.

openFullscreen(element: HTMLElement): Promise<void>

Enters fullscreen mode for the specified element. Cross-browser compatible.

await proctor.openFullscreen(document.documentElement);

closeFullscreen(): Promise<void>

Exits fullscreen mode.

startMouseOutDetection(callback: () => any): void

Fires the callback whenever the mouse cursor leaves the browser window.

const handleMouseOut = () => console.warn("Mouse left the window");
proctor.startMouseOutDetection(handleMouseOut);

stopMouseOutDetection(callback: () => any): void

Removes the mouse-leave listener. Pass the same callback reference used in startMouseOutDetection.

startVisibilityChangeDetection(callback: () => any): void

Fires the callback when the page visibility changes (e.g., user switches tabs).

const handleVisibility = () => {
  if (document.hidden) console.warn("Tab is hidden!");
};
proctor.startVisibilityChangeDetection(handleVisibility);

stopVisibilityChangeDetection(callback: () => any): void

Removes the visibility-change listener.


Utilities

getUserDevices(): Promise<{ videoDevices: MediaDeviceInfo[], audioDevices: MediaDeviceInfo[] }>

Enumerates available cameras and microphones. Useful for building a device-selection UI.

const { videoDevices, audioDevices } = await proctor.getUserDevices();
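For a device-selection UI you would typically map these to <select> options. The helper below is a hypothetical sketch (not part of the SDK); `deviceId` and `label` are standard MediaDeviceInfo fields, and note that labels are empty strings until the user grants media permissions:

```typescript
// Sketch: turn enumerated devices into { value, text } pairs for a <select>.
// Labels can be empty before the user grants camera/mic permissions, so fall
// back to a generic numbered name.
interface DeviceLike {
  deviceId: string;
  label: string;
}

function toSelectOptions(
  devices: DeviceLike[],
  kind: string
): { value: string; text: string }[] {
  return devices.map((d, i) => ({
    value: d.deviceId,
    text: d.label || `${kind} ${i + 1}`, // label is "" until permission granted
  }));
}

console.log(toSelectOptions([{ deviceId: "abc", label: "" }], "Camera"));
// [{ value: "abc", text: "Camera 1" }]
```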

Configuration

The ProctoringConfig object is passed to the constructor:

interface ProctoringConfig {
  apiKey: string;
  mirrored?: boolean; // Default: true. Flips detection boxes and eye logic.

  // Face detection callbacks
  onFaceError?: (message: string, frame: Blob, time: number) => void;
  onFaceErrorStop?: (message: string, frame: Blob, time: number) => void;

  // Extension callbacks
  onMultiScreenDetected?: (data: any) => void;
  onSnapshotTaken?: (data: any) => void;
  
  // ... other callbacks ...
}

| Callback | Trigger |
|---|---|
| onFaceError | Any facial detection error begins (generic catch-all) |
| onFaceErrorStop | Any facial detection error resolves |
| onCameraBlocked | Camera is covered or room is too dark |
| onCameraBlockedStop | Camera is uncovered |
| onMultipleFaceDetected | More than one face is detected |
| onMultipleFaceDetectedStop | Back to a single face |
| onNoFaceDetected | No face is visible in the frame |
| onNoFaceDetectedStop | A face is visible again |
| onFaceAwayFromScreen | The face is turned away from the screen |
| onFaceAwayFromScreenStop | The face is back on screen |
| onEyeballsDisplaced | Eyes have been looking away for 5+ seconds |
| onEyeballsDisplacedStop | Eyes return to center |
| onSoundError | A sound-related event (voice detected, mic muted, etc.) |
| onSoundErrorStop | Sound event resolved |
| onSaveSoundRecording | A recorded audio clip is ready (delivered as a Blob) |
| onMultiScreenDetected | The extension detected multiple monitors connected |
| onSnapshotTaken | The extension successfully captured a device snapshot |
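Since most of these callbacks come in start/stop pairs, one convenient pattern is to funnel them into a single timestamped event log for later upload. The sketch below builds a partial config object; it wires up only callbacks documented above, and the event-type strings are arbitrary choices of this example:

```typescript
// Sketch: record proctoring events with timestamps into a single log array.
// Only callbacks documented in this README are wired up here.
interface ProctoringEvent {
  type: string;
  phase: "start" | "stop";
  at: number; // epoch millis
}

function buildEventLoggingCallbacks(
  log: ProctoringEvent[],
  now: () => number = Date.now
) {
  const record = (type: string, phase: "start" | "stop") => () =>
    log.push({ type, phase, at: now() });

  return {
    onNoFaceDetected: record("no-face", "start"),
    onNoFaceDetectedStop: record("no-face", "stop"),
    onMultipleFaceDetected: record("multiple-faces", "start"),
    onMultipleFaceDetectedStop: record("multiple-faces", "stop"),
    onCameraBlocked: record("camera-blocked", "start"),
    onCameraBlockedStop: record("camera-blocked", "stop"),
  };
}

const log: ProctoringEvent[] = [];
const callbacks = buildEventLoggingCallbacks(log, () => 1000);
callbacks.onNoFaceDetected();
callbacks.onNoFaceDetectedStop();
console.log(log.length); // 2
```

The returned object can be spread into the constructor config, e.g. `new SimplifiedProctoring({ apiKey: "your-api-key", ...buildEventLoggingCallbacks(log) })`.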


TypeScript Types

All types are exported from the package entry point:

import type {
  ProctoringConfig,
  SchoolFeatureConfig,
  JoinStageOptions,
  CreateStageArnInput,
  CreateStageArnResponse,
  CreateParticipantTokenInput,
  CreateParticipantTokenResponse,
  MonitoringEvent,
  MonitoringData,
  FaceTrackingReturn,
  ErrorCallback,
  FacialDetectorErrorMessage,
  FacialDetectorErrorMessageStop,
} from "@simplifiediq/proctoring";

import { DeviceType } from "@simplifiediq/proctoring";
// DeviceType.CAMERA | DeviceType.MIC | DeviceType.SCREEN

Browser Extension

Some browser monitoring features (closing tabs, disabling right-click, lockdown) require the SimplifiedIQ Proctoring Browser Extension.

The SDK communicates with the extension via an Extension Bridge. You can handle extension events using callbacks:

const proctor = new SimplifiedProctoring({
  apiKey: "your-api-key",
  onMultiScreenDetected: (data) => {
    alert("Multiple screens detected! Please disconnect other monitors.");
  },
  onSnapshotTaken: (data) => {
    console.log("OS Snapshot captured:", data);
  }
});

if (proctor.isExtensionInstalled()) {
  proctor.enableProctoringExtension();
  proctor.enableLockdownMode();
}

Browser Support

| Feature | Chrome | Firefox | Safari | Edge |
|---|:---:|:---:|:---:|:---:|
| Face Tracking | ✅ | ✅ | ✅ | ✅ |
| Sound Monitoring | ✅ | ✅ | ✅ | ✅ |
| Live Streaming (IVS) | ✅ | ✅ | ⚠️ | ✅ |
| Fullscreen | ✅ | ✅ | ✅ | ✅ |
| Browser Extension | ✅ | ❌ | ❌ | ✅ |

⚠️ Safari has limited support for Amazon IVS Web Broadcast. See the IVS documentation for details.


License

ISC © SimplifiedIQ