# npu-detect

v1.2.1

Cross-platform NPU (Neural Processing Unit) detection for JavaScript.
Works in Node.js (server-side, shells out to OS tools) and browsers (client-side, via the WebNN API).
## Why?

You can query CPU info with `os.cpus()` and GPU info with `navigator.gpu` — but there's no equivalent for NPUs. This library fills that gap.
## Install

```sh
npm install npu-detect
```

For the OpenVINO deep probe and stress test (Intel NPUs only):

```sh
npm install npu-detect openvino-node
```

## What's New in v1.2.0
- `--stress` / `-s` CLI flag — runs a sustained NPU workload that produces a visible spike in Task Manager → Performance → NPU
- `stressNPU(options)` — programmatic API for the same stress test
- `--stress-duration N` — control how long the stress test runs (default: 3 s)
- Bug fixes for Level Zero host memory exhaustion during stress on Intel Meteor Lake NPUs
See v1.1.0 changes below for the OpenVINO deep probe and driver metadata features.
## What's New in v1.1.0

- `--openvino` / `--ov` CLI flag — deep NPU probe via OpenVINO: architecture, generation, TOPS estimate, precision support, functional test
- `probeOpenVINO()` — programmatic API for the OpenVINO probe
- Enhanced Windows driver metadata — driver version, driver date, hardware VEN/DEV IDs, vendor classification
- DirectML availability check (Windows)
- `detectNPU({ openvino: true })` — integrate the OpenVINO probe into the main detection result
## Quick Start

### Node.js

```js
const { detectNPU, hasNPU, npuSummary } = require('npu-detect');

// Full detection with details
const info = await detectNPU();
console.log(info);
// {
//   detected: true,
//   platform: 'windows',
//   devices: [{ name: 'Intel(R) AI Boost', driverVersion: '32.0.100.4512', vendor: 'Intel', ... }],
//   cpuHeuristic: { likely: true, vendor: 'Intel', npuName: 'Intel AI Boost' },
//   directML: { available: true, version: '1.15.2' },
//   ...
// }

// With the OpenVINO deep probe (requires: npm install openvino-node)
const deepInfo = await detectNPU({ openvino: true });
console.log(deepInfo.openVINO);
// {
//   npuFound: true,
//   fullDeviceName: 'Intel(R) AI Boost',
//   architecture: 'MTL',
//   generation: { codeName: 'Meteor Lake', estimatedTops: 11, nf4Supported: false },
//   optimizationCapabilities: ['FP16', 'INT8'],
//   functionalTest: { functional: true, compileLatencyMs: 10, inferLatencyMs: 4 },
//   ...
// }

// Quick boolean
const has = await hasNPU(); // true

// One-liner summary
const summary = await npuSummary(); // "NPU detected: Intel(R) AI Boost"
```

### Stress Test (Node.js)
```js
const { stressNPU } = require('npu-detect');

const result = await stressNPU({
  durationMs: 10000,
  onReady: () => console.log('compiled — stress loop running'),
  onProgress: ({ elapsed, totalInferences, instantThroughput }) => {
    process.stdout.write(`\r${elapsed.toFixed(1)}s — ${Math.round(instantThroughput)} inf/s`);
  },
});
console.log(`\n${result.gops} GOPS (${result.avgThroughputInfPerSec} inf/s avg)`);
```

### Browser
```js
import { detectNPU, hasNPU } from 'npu-detect/browser';

const info = await detectNPU({ benchmark: true });
console.log(info);
// {
//   webnnSupported: true,
//   npuAvailable: true,
//   devices: {
//     npu: { available: true, benchmark: { functional: true, execLatencyMs: 0.4 } },
//     cpu: { available: true, ... },
//     gpu: { available: true, ... },
//   },
//   ...
// }
```

## CLI
```sh
# Basic detection
npx npu-detect

# Deep probe via OpenVINO (requires: npm install openvino-node)
npx npu-detect --openvino

# JSON output for scripting
npx npu-detect --openvino --json

# NPU stress test (produces a visible spike in Task Manager → NPU)
npx npu-detect --stress

# Stress test for 30 seconds
npx npu-detect --stress --stress-duration 30

# All flags
npx npu-detect --help
```

## How It Works
### Node.js Detection (per platform)

| Platform | Method |
|----------|--------|
| Linux | Checks `/dev/accel/*` devices, `lspci` for NPU-class hardware, `lsmod` for the `intel_vpu`/`amdxdna` kernel modules, sysfs for NPU frequency |
| Windows | Queries Device Manager's "Neural processors" class via PowerShell (`Get-PnpDevice`), falls back to WMI. Enhanced probe adds driver version/date and hardware VEN/DEV IDs. |
| macOS | Uses `ioreg` to find `AppleNeuralEngine`, infers ANE core count and TOPS from the chip model, reads `powermetrics` for ANE power (requires root) |
| All | CPU model string heuristic to identify known NPU-equipped processors (Intel Core Ultra, Snapdragon X, Ryzen AI, Apple M-series, etc.) |
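The CPU model-string heuristic in the last row amounts to pattern matching against known NPU-equipped families. A minimal sketch of the idea — the patterns and names here are illustrative, not the library's actual table:

```javascript
// Illustrative patterns for NPU-equipped CPU families (not the library's
// actual list). Matching the model string only suggests an NPU exists;
// it says nothing about drivers or accessibility.
const NPU_CPU_PATTERNS = [
  { pattern: /\bCore\b.*\bUltra\b/i, vendor: 'Intel', npuName: 'Intel AI Boost' },
  { pattern: /\bRyzen\s+AI\b/i, vendor: 'AMD', npuName: 'AMD XDNA' },
  { pattern: /\bSnapdragon\b.*\bX\b/i, vendor: 'Qualcomm', npuName: 'Qualcomm Hexagon' },
  { pattern: /\bApple\s+M\d/i, vendor: 'Apple', npuName: 'Apple Neural Engine' },
];

function cpuHeuristic(modelString) {
  for (const { pattern, vendor, npuName } of NPU_CPU_PATTERNS) {
    if (pattern.test(modelString)) return { likely: true, vendor, npuName };
  }
  return { likely: false };
}

console.log(cpuHeuristic('Intel(R) Core(TM) Ultra 7 155H'));
// { likely: true, vendor: 'Intel', npuName: 'Intel AI Boost' }
```

This is why the README calls the heuristic a best guess: an older `Core i7` string simply fails every pattern, while a matching string still proves nothing about the driver stack.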
### OpenVINO Deep Probe (Intel NPUs, optional)

When `openvino-node` is installed and `openvino: true` is passed (or `--openvino` via the CLI), a second pass uses the OpenVINO runtime to query the NPU directly:

- Device architecture (e.g. `MTL`, `LNL`, `ARL`) mapped to Intel generation and TOPS
- Supported precisions (`FP16`, `INT8`, `INT4`, `NF4`)
- Optimal inference request count and async range
- Functional test: compiles a small model and runs one inference, measuring compile and infer latency
### NPU Stress Test (Intel NPUs, optional)

Requires `openvino-node`. Compiles a 4× FC(512→512) + ReLU model (~537M ops/inference) and runs a tight synchronous `infer()` loop for the specified duration. Designed to produce a visible utilization spike in Task Manager → Performance → NPU.
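The reported GOPS figure follows from simple arithmetic over the inference count. A sketch of that math — the batch size of 256 is an assumption, chosen because it reproduces the ~537M ops/inference figure from 4 layers × 2 ops per multiply-accumulate × 512 × 512 weights:

```javascript
// Sketch of the stress-test throughput math. BATCH = 256 is an assumption
// that makes the arithmetic match the README's ~537M ops/inference figure.
const LAYERS = 4, BATCH = 256, DIM = 512;
const OPS_PER_INFERENCE = LAYERS * 2 * BATCH * DIM * DIM; // 2 ops per MAC

function gops(totalInferences, durationMs) {
  const totalOps = totalInferences * OPS_PER_INFERENCE;
  return totalOps / (durationMs / 1000) / 1e9; // giga-ops per second
}

console.log(OPS_PER_INFERENCE); // 536870912 — i.e. ~537M ops per inference
console.log(gops(1000, 1000)); // ~537 GOPS at 1000 inferences/second
```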
### Browser Detection

Uses the W3C Web Neural Network API (`navigator.ml`):

- Checks if `navigator.ml` exists
- Attempts `navigator.ml.createContext({ deviceType: 'npu' })` — success means an NPU is available
- Optionally runs a minimal add-two-matrices computation graph to verify the device is functional
- Also probes the `cpu` and `gpu` device types for comparison

Note: WebNN requires Chrome/Edge with `chrome://flags/#web-machine-learning-neural-network` enabled, or Edge with `--disable_webnn_for_npu=0` for NPU specifically.
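At its core, the probe is a try/catch around `createContext`. A minimal sketch of that idea, with the `ml` object injected as a parameter so the logic can run outside a browser (in a real page it would be `navigator.ml`):

```javascript
// Sketch of a WebNN device probe. `ml` is navigator.ml in a real browser;
// it is injected here so the logic can be exercised outside one.
async function probeWebNNDevice(ml, deviceType) {
  if (!ml) return { available: false, error: 'WebNN not supported' };
  try {
    // If the browser cannot provide this device type, createContext rejects.
    const context = await ml.createContext({ deviceType });
    return { available: true, context };
  } catch (err) {
    return { available: false, error: String(err) };
  }
}
```

The absence of `navigator.ml` and a rejected `createContext` are distinct outcomes: the first means no WebNN at all, the second means WebNN exists but that device type is unavailable.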
## Supported NPU Hardware

| Vendor | NPU | Detection |
|--------|-----|-----------|
| Intel | AI Boost (Meteor Lake, Arrow Lake, Lunar Lake) | Linux: `intel_vpu` module + `/dev/accel/accel0`. Windows: Device Manager + driver metadata. OpenVINO: architecture, TOPS, precisions. |
| AMD | XDNA / XDNA2 (Ryzen AI, Hawk Point, Strix Point) | Linux: `amdxdna` module. Windows: Device Manager. |
| Qualcomm | Hexagon (Snapdragon X Elite/Plus) | Windows: Device Manager. Browser: WebNN. |
| Apple | Neural Engine (M1–M4, A11–A19) | macOS: `ioreg` + chip identification. |
## Intel NPU Generation Map (OpenVINO)
| Architecture | Code Name | Product Family | TOPS | NF4 |
|---|---|---|---|---|
| MTL | Meteor Lake | Intel Core Ultra Series 1 | 11 | No |
| ARL | Arrow Lake | Intel Core Ultra Series 2 | 13 | Yes |
| LNL | Lunar Lake | Intel Core Ultra Series 2 (Lunar) | 48 | Yes |
| PTL | Panther Lake | Intel Core Ultra Series 3 | TBD | Yes |
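The table above is effectively a small lookup keyed on the architecture string OpenVINO reports. A sketch of that mapping (field names follow the `generation` object shown in the Quick Start output; they are otherwise an assumption):

```javascript
// Sketch of the architecture → generation mapping from the table above.
// estimatedTops: null encodes the "TBD" entry.
const INTEL_NPU_GENERATIONS = {
  MTL: { codeName: 'Meteor Lake',  family: 'Intel Core Ultra Series 1',         estimatedTops: 11,   nf4Supported: false },
  ARL: { codeName: 'Arrow Lake',   family: 'Intel Core Ultra Series 2',         estimatedTops: 13,   nf4Supported: true },
  LNL: { codeName: 'Lunar Lake',   family: 'Intel Core Ultra Series 2 (Lunar)', estimatedTops: 48,   nf4Supported: true },
  PTL: { codeName: 'Panther Lake', family: 'Intel Core Ultra Series 3',         estimatedTops: null, nf4Supported: true },
};

function lookupGeneration(architecture) {
  return INTEL_NPU_GENERATIONS[architecture] ?? null; // null for unknown architectures
}
```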
## API Reference

### Node.js (`npu-detect`)

#### `detectNPU(options?): Promise<NPUDetectionResult>`

Full detection. Options:

- `cpuHeuristic` (boolean, default `true`) — also check the CPU model string
- `verbose` (boolean, default `false`) — include raw OS command output
- `openvino` (boolean, default `false`) — run the OpenVINO deep probe (requires `openvino-node`)
#### `hasNPU(): Promise<boolean>`

Quick boolean — confirmed NPU present?

#### `npuSummary(): Promise<string>`

Human-readable one-liner.

#### `probeOpenVINO(): Promise<OpenVINOProbeResult>`

Directly probe OpenVINO for NPU capabilities. Requires `openvino-node`.

#### `stressNPU(options?): Promise<StressResult>`

Run a sustained NPU workload. Requires `openvino-node`. Options:

- `durationMs` (number, default `3000`) — how long to run, in milliseconds
- `onReady` (function) — called once the model has compiled and the loop is about to start
- `onProgress` (function) — called ~4×/sec with `{ elapsed, totalInferences, instantThroughput, avgThroughput }`

Returns `{ success, gops, avgThroughputInfPerSec, peakThroughputInfPerSec, totalInferences, durationMs }`.
### Browser (`npu-detect/browser`)

#### `detectNPU(options?): Promise<BrowserNPUDetectionResult>`

Full browser detection. Options:

- `benchmark` (boolean, default `false`) — run a test computation on each device
- `probeAll` (boolean, default `true`) — also probe CPU and GPU

#### `hasNPU(): Promise<boolean>`

Quick boolean — NPU available via WebNN?

#### `probeDevice(deviceType): Promise<{ available, error?, context? }>`

Low-level probe for a specific WebNN device type.

#### `benchmarkDevice(deviceType): Promise<{ functional, buildLatencyMs?, execLatencyMs?, error? }>`

Run a minimal computation on a device and measure latency.
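Latency measurement of this kind is plain wall-clock timing around each async step. A generic sketch — `timeAsync` is a hypothetical helper, not part of the npu-detect API:

```javascript
// Hypothetical helper (not part of the npu-detect API): time an async step
// and return its result plus elapsed wall-clock milliseconds, the way a
// benchmark presumably brackets graph build and execution separately.
async function timeAsync(fn) {
  const start = Date.now();
  const result = await fn();
  return { result, elapsedMs: Date.now() - start };
}

// Usage sketch: time a stand-in for an async "build" step.
timeAsync(() => new Promise((resolve) => setTimeout(() => resolve('built'), 30)))
  .then(({ result, elapsedMs }) => console.log(result, `${elapsedMs}ms`));
```

Timing build and exec separately matters because, as the OpenVINO caveat below notes for Node, compilation can dominate the first run by orders of magnitude.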
## Caveats

- Linux: NPU detection requires the kernel module to be loaded and `/dev/accel/` to be populated. Some sysfs paths require root.
- Windows: PowerShell queries may be slow (~1–2 s). The "NeuralProcessor" PnP class requires up-to-date drivers.
- macOS: Apple provides almost no public API for ANE introspection. Detection relies on `ioreg` and chip identification. `powermetrics` requires `sudo`.
- Browser: WebNN is still an emerging standard. NPU support requires specific browser flags and up-to-date GPU/NPU drivers.
- CPU heuristic: The CPU model string check is a best guess — it tells you the CPU should have an NPU, not that it's accessible.
- OpenVINO: `DEVICE_ARCHITECTURE` returns the VPU revision number (e.g. `3720`), which maps to Meteor Lake. This is expected, not a bug.
- Stress test: The first compile takes 10–60 s (OpenVINO caches the compiled model after the first run). Uses the `LATENCY` performance hint to avoid Level Zero host memory exhaustion on systems with limited pinned memory.
## License
MIT
