teachable-machine.js v2.0.2
A robust and optimized JavaScript library for integrating Google's Teachable Machine models, supporting various image sources and providing efficient classification capabilities.
Ultra‑fast, production‑ready Teachable Machine inference for Node.js with smart RAM/Disk I/O, unified Image & Video classification, and optional native backend acceleration.
Repo: https://github.com/nixaut-codelabs/teachable-machine.js • NPM: teachable-machine.js
Highlights
- Ultra fast classification — batched inference, turbo mode, and an optional native backend (`@tensorflow/tfjs-node`).
- Smart resource management — in‑memory (RAM) pipeline by default, automatic disk fallback, strict cleanup guarantees.
- Video classification (FFmpeg) — sample frames evenly via FFmpeg; supports MP4, GIF, and more.
- Backend control — choose between pure `tfjs` and `tfjs-node` (native TensorFlow) for best performance.
- Detailed parameters & outputs — timings, backend, I/O mode (RAM/Disk), size limits, fallback, and cleanup status.
- Unified API — classify images and videos through a single, ergonomic interface.
- Flexible inputs — URLs, local paths, Buffers/Uint8Arrays, data URIs, and base64 strings.
Author: Nixaut • License: Apache‑2.0
Installation
Install from NPM:
```bash
npm i teachable-machine.js
# or
bun add teachable-machine.js
```
This library relies on:
- `@tensorflow/tfjs` (installed transitively)
- `sharp` for image decoding/resizing
- `ffmpeg-static` for bundled FFmpeg (or use system FFmpeg via PATH)
Optional (highly recommended for speed):
```bash
npm i @tensorflow/tfjs-node
# or
bun add @tensorflow/tfjs-node
```
Note: When you pass `backend: 'tfjs-node'`, the library will try to use the native backend (`tensorflow`) for a large CPU speedup.
Quick Start
```js
import TeachableMachine from 'teachable-machine.js';

const tm = await TeachableMachine.create({
  modelUrl: 'https://teachablemachine.withgoogle.com/models/your-model-id/',
  saveToDir: 'model',        // optional: cache locally
  loadFrom: 'auto',          // 'auto' | 'dir'
  warmup: true,              // run one warmup forward pass
  ioMode: 'ram',             // 'ram' (default) | 'disk'
  backend: 'tfjs-node',      // 'tfjs' (default) | 'tfjs-node'
  preprocessUseWorkers: true // offload sharp preprocessing to Worker Threads
});

// Single image
const img = await tm.classify({ input: 'local_or_url_or_buffer.jpg', mediaType: 'image' });
console.log(img.predictions);

// Batch images
const imgs = await tm.classifyImages({ images: ['1.jpg', '2.jpg'], batchSize: 2 });
console.log(imgs.results[0].predictions);

// Single video (mp4/gif)
const vid = await tm.classify({ input: 'video.mp4', mediaType: 'video', frames: 8, turboMode: true });
console.log(vid.aggregate.predictions);

// Batch videos
const vids = await tm.classifyVideos({ videos: ['a.mp4', 'b.gif'], frames: 8 });
console.log(vids[0].aggregate.predictions);

// Mixed: images + videos together
const mixed = await tm.classify({
  input: { images: ['1.jpg', '2.jpg'], videos: ['a.mp4', 'b.gif'] },
  frames: 8,
  turboMode: true
});
console.log(mixed.images?.results?.length, mixed.videos?.length);

tm.dispose();
```
CLI (tmjs)
After installation, you can quickly try it from the terminal:
```bash
# Single image
tmjs --model https://teachablemachine.withgoogle.com/models/XXX/ image.jpg

# Video + GIF with turbo and frame count
tmjs --model ./model --media video --frames 8 --turbo video.mp4 gif.gif
```
Arguments:
- `--model <url|dir>`: Model source (URL or local directory)
- `--backend tfjs|tfjs-node`: Backend selection
- `--io ram|disk`: I/O mode
- `--media image|video|auto`: Input type routing
- `--frames N`, `--topK K`, `--maxBytes BYTES`, `--turbo`
Output is printed as JSON to stdout; suitable for CI and automation.
Why this library?
- Ultra fast classification
- Image turbo batching and video turbo mode (preprocess tensors + single batched forward).
- Optional `@tensorflow/tfjs-node` backend for native CPU acceleration.
- Smart resource management
- Default in‑RAM pipeline avoids disk I/O.
- Automatic fallback to disk when RAM extractor yields no frames.
- Guaranteed temp cleanup on success and failure.
- Optional Worker Threads (`preprocessUseWorkers: true`) keep preprocessing off the main thread.
- Video classification (FFmpeg)
- Frame sampling via FFmpeg with robust single‑pass PNG pipeline.
- GIFs supported the same way.
- Backend control
- `backend: 'tfjs' | 'tfjs-node'` on model creation.
- Detailed diagnostics
- Outputs include backend, timings for each stage, I/O mode, fallback flag, sizeBytes/maxBytes.
API
TeachableMachine.create(options) → Promise<TeachableMachine>
Options:
- `modelUrl?: string` — Teachable Machine base URL (ends with `/`).
- `modelDir?: string` — directory containing a cached model.
- `loadFrom?: 'auto'|'dir'` — `auto` prefers a local dir when available.
- `saveToDir?: string` — if provided, downloads and caches the model locally.
- `warmup?: boolean` — run one forward pass on zeros (default true).
- `ioMode?: 'ram'|'disk'` — RAM mode uses an in‑memory pipeline; disk uses temp files.
- `backend?: 'tfjs'|'tfjs-node'` — select the JS vs native backend at init.
- `preprocessUseWorkers?: boolean` — run sharp-based preprocessing in Worker Threads (keeps CPU-heavy work off the main thread).
Returns an instance with methods below.
Image classification
`classifyImages({ images, topK?, centerCrop=true, resizeOnCPU=true, batchSize? })`
- `images`: single input or array (string | Buffer | Uint8Array | data URI | base64).
- Returns either a single detailed result or a batch summary `{ count, timings, results }`.
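Because `classifyImages` can return either of the two documented shapes, callers that loop over results may want a small normalizer. The sketch below is illustrative only — the helper name `toResultArray` is made up and is not part of the library's API; it assumes only the return shapes described above.

```js
// Hypothetical helper (not part of the library): accept either a single
// detailed result or a batch summary { count, timings, results } and
// always hand back an array of per-image results.
function toResultArray(output) {
  // Batch summaries carry a `results` array; single results do not.
  if (Array.isArray(output.results)) return output.results;
  return [output];
}

// Works the same way for both shapes:
const single = { predictions: [{ className: 'Cat', probability: 0.9 }] };
const batch = { count: 2, timings: {}, results: [single, single] };
console.log(toResultArray(single).length); // 1
console.log(toResultArray(batch).length);  // 2
```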
Video classification
`classifyVideos({ videos, frames=10, topK?, centerCrop=true, resizeOnCPU=true, turboMode=false, extractionConcurrency?, preprocessConcurrency?, maxConcurrent=2, maxBytes=10*MB })`
- `videos`: single input or array.
- Returns per‑video detailed outputs with frame predictions and `aggregate.predictions`.
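The README does not specify how `aggregate.predictions` is computed from the per-frame predictions, so the sketch below is a hypothetical stand-in, not the library's implementation: it averages each class's probability across frames and sorts descending, which is one plausible aggregation.

```js
// Hypothetical aggregation sketch -- NOT the library's code. Given an array
// of per-frame prediction lists, average each class's probability across
// frames and return classes sorted by mean probability.
function aggregateFrames(framePredictions) {
  const sums = new Map();
  for (const frame of framePredictions) {
    for (const { className, probability } of frame) {
      sums.set(className, (sums.get(className) || 0) + probability);
    }
  }
  const n = framePredictions.length || 1;
  return [...sums.entries()]
    .map(([className, total]) => ({ className, probability: total / n }))
    .sort((a, b) => b.probability - a.probability);
}

const frames = [
  [{ className: 'Dog', probability: 0.8 }, { className: 'Cat', probability: 0.2 }],
  [{ className: 'Dog', probability: 0.6 }, { className: 'Cat', probability: 0.4 }],
];
console.log(aggregateFrames(frames)); // top class: Dog (~0.7)
```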
Unified classification
`classify({ input, mediaType='auto', frames=10, topK?, centerCrop=true, resizeOnCPU=true, turboMode=false, extractionConcurrency?, preprocessConcurrency?, maxConcurrent=2, maxBytes=10*MB, batchSize? })`
- If `input` is an array or scalar, the route is chosen by `mediaType` and `frames`.
- If `input` is `{ images, videos }`, both branches run and return `{ images, videos }`.
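The routing rules above can be sketched as a small pure function. This is illustrative, not the library's internal logic; in particular, the extension check for `'auto'` is an assumption (the docs confirm mp4 and gif but not the full detection mechanism).

```js
// Hypothetical routing sketch (not the library's code):
// - an { images, videos } object runs both branches ("mixed");
// - otherwise an explicit mediaType wins;
// - for 'auto', guess by file extension (mp4/gif per the docs above).
function routeInput(input, mediaType = 'auto') {
  if (input && typeof input === 'object' && !Array.isArray(input)
      && ('images' in input || 'videos' in input)) {
    return 'mixed';
  }
  if (mediaType !== 'auto') return mediaType;
  const first = Array.isArray(input) ? input[0] : input;
  return typeof first === 'string' && /\.(mp4|gif)$/i.test(first)
    ? 'video'
    : 'image';
}

console.log(routeInput('photo.jpg'));                       // 'image'
console.log(routeInput('clip.mp4'));                        // 'video'
console.log(routeInput({ images: ['1.jpg'], videos: [] })); // 'mixed'
```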
Compatibility helper
- `classifyBatch({ imageUrls, ... })` — original batch-images API.
- `batchImageClassify(opts)` — alias for `classifyBatch()`.
Lifecycle
- `dispose()` — free model resources.
TypeScript
The package ships with a `.d.ts` type file. Key types:
- `CreateOptions`
- `ImageResult`, `BatchImageResult`
- `VideoResult`, `FramePrediction`
Example:
```ts
import TeachableMachine, { CreateOptions, VideoResult } from 'teachable-machine.js';

const opts: CreateOptions = { backend: 'tfjs-node', preprocessUseWorkers: true };
const tm = await TeachableMachine.create(opts);
```
I/O Modes, Size Limits, and Cleanup
- `ioMode: 'ram' | 'disk'` on `.create()`
  - RAM: no temp files; streams media into memory buffers.
  - Disk: uses temp files; always cleaned up in `finally`.
- Size limits
  - `maxBytes` on video classification to prevent oversized inputs.
- Output
  - `io` object `{ mode, fallbackToDisk, tempCleaned, sizeBytes, maxBytes }` for audits & debugging.
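To illustrate the `maxBytes` guard described above, here is a minimal standalone sketch. The function name and error message are hypothetical; only the default 10 MB limit and the `sizeBytes`/`maxBytes` field names come from the docs.

```js
// Hypothetical size guard (not the library's code): reject inputs larger
// than maxBytes, mirroring the documented default of 10 MB.
const MB = 1024 * 1024;

function checkSize(sizeBytes, maxBytes = 10 * MB) {
  if (sizeBytes > maxBytes) {
    throw new Error(`input is ${sizeBytes} bytes, exceeds maxBytes=${maxBytes}`);
  }
  return { sizeBytes, maxBytes };
}

console.log(checkSize(5 * MB).sizeBytes === 5 * MB); // true
try {
  checkSize(12 * MB); // throws: exceeds the default 10 MB limit
} catch (err) {
  console.log(err.message.includes('exceeds maxBytes')); // true
}
```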
Performance tips
- Use `backend: 'tfjs-node'` for large CPU speedups (server/desktop).
- Enable `turboMode` for videos to preprocess frames and run a single batched forward pass.
- Use `batchSize` on images for better throughput.
- Tune `extractionConcurrency` and `preprocessConcurrency` to match CPU cores.
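One way to apply the last tip is to derive the concurrency options from the core count. The option names below match the documented API, but the sizing heuristic itself is an assumption, not a library recommendation:

```js
// Illustrative heuristic: size the documented concurrency options from a
// given CPU core count. The ratios here are assumptions, not library advice.
function tunedOptions(cores) {
  return {
    extractionConcurrency: Math.max(1, Math.floor(cores / 2)), // FFmpeg extraction
    preprocessConcurrency: Math.max(1, cores - 1),             // sharp preprocessing
    batchSize: Math.min(8, cores),                             // image batching
  };
}

console.log(tunedOptions(8));
// { extractionConcurrency: 4, preprocessConcurrency: 7, batchSize: 8 }
```

In practice you would feed in something like `require('os').cpus().length` and spread the result into a `classifyVideos` or `classifyImages` call.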
FFmpeg requirements
Video/GIF classification requires FFmpeg.
- Bundled: `ffmpeg-static` is used if present.
- System: otherwise, a system `ffmpeg` binary on PATH is used.
We include a small diagnostic helper internally that prefers ffmpeg-static and falls back to system PATH.
Examples
See advanced-example.js for a comprehensive demo with environment toggles:
- `BACKEND=tfjs-node` or `tfjs`
- `IO_MODE=ram` or `disk`
- `FRAMES=8`, `TURBO=true`, `MAX_BYTES=10485760`
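A script consuming these toggles might parse them like the sketch below. This is not code from advanced-example.js — the function name and the fallback defaults (library defaults of `'tfjs'`, `'ram'`, 10 frames, 10 MB) are assumptions based on the documentation above:

```js
// Hypothetical env-toggle parser (not taken from advanced-example.js).
// Defaults fall back to the documented library defaults.
function readToggles(env = process.env) {
  return {
    backend: env.BACKEND === 'tfjs-node' ? 'tfjs-node' : 'tfjs',
    ioMode: env.IO_MODE === 'disk' ? 'disk' : 'ram',
    frames: Number(env.FRAMES) || 10,
    turboMode: env.TURBO === 'true',
    maxBytes: Number(env.MAX_BYTES) || 10 * 1024 * 1024,
  };
}

console.log(readToggles({ BACKEND: 'tfjs-node', FRAMES: '8', TURBO: 'true' }));
// { backend: 'tfjs-node', ioMode: 'ram', frames: 8, turboMode: true, maxBytes: 10485760 }
```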
Run with Bun:
```bash
bun run advanced-example.js
```
Contributing
PRs are welcome! Please open an issue to discuss substantial changes first.
License
This project is licensed under the Apache-2.0 License. See the LICENSE file for details.
