
@wlearn/ebm

v0.1.0

@wlearn/ebm

InterpretML's Explainable Boosting Machine (EBM) compiled to WebAssembly. Interpretable machine learning in browsers and Node.js -- no server required, data stays local.

EBM is a Generalized Additive Model (GAM) trained via cyclic gradient boosting, one feature at a time. It produces per-feature shape functions that are inherently interpretable while achieving accuracy competitive with black-box models.
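The additive structure is easy to see in miniature. The sketch below uses made-up shape functions and an arbitrary intercept purely to illustrate how a GAM composes a prediction; it is not the EBM training procedure.

```javascript
// Toy illustration of a GAM's additive structure (not real EBM training).
// Each feature has its own shape function; the prediction is their sum
// plus an intercept.
const shapeFns = [
  (x) => 0.5 * x,          // hypothetical learned curve for feature 0
  (x) => (x > 3 ? 1 : -1), // hypothetical learned curve for feature 1
];
const intercept = 0.25;

function gamPredict(sample) {
  return intercept + shapeFns.reduce((sum, f, i) => sum + f(sample[i]), 0);
}

const score = gamPredict([2, 4]); // 0.25 + 1.0 + 1 = 2.25
```

Because each feature contributes through its own one-dimensional curve, every prediction decomposes exactly into per-feature effects, which is what makes the model interpretable.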

Based on InterpretML v0.7.5 (MIT license). Zero dependencies. ESM.

Install

npm install @wlearn/ebm

Quick start

import { EBMModel } from '@wlearn/ebm'

const model = await EBMModel.create({ maxRounds: 500, seed: 42 })

// Train -- accepts number[][] or { data: Float64Array, rows, cols }
model.fit(
  [[1, 2], [3, 4], [5, 6], [7, 8]],
  [0, 0, 1, 1]
)

// Predict
const preds = model.predict([[2, 3], [6, 7]])        // Float64Array
const probs = model.predictProba([[2, 3], [6, 7]])    // Float64Array (nrow * nclass)
const accuracy = model.score([[2, 3], [6, 7]], [0, 1])

// Explain
const explanations = model.explain([[2, 3]])
// { intercept, contributions, termNames, nTerms, nSamples, nScores }
// prediction = intercept + sum(contributions)

const importances = model.featureImportances()  // Float64Array per term
const shape = model.getShapeFunction(0)         // { x, y } for plotting

// Save / load
const buf = model.save()  // Uint8Array (WLRN bundle)
const model2 = await EBMModel.load(buf)

// Clean up -- required, WASM memory is not garbage collected
model.dispose()
model2.dispose()

Explainability

EBM's primary advantage over black-box models is built-in interpretability.

Local explanations (explain(X)) return per-sample, per-term additive contributions. For each sample, the prediction equals intercept + sum(contributions). This tells you exactly how much each feature contributed to every prediction.

Global importances (featureImportances()) return mean absolute scores across bins for each term, showing which features matter most overall.

Shape functions (getShapeFunction(i)) return the learned response curve for each feature. For univariate terms this gives { x, y } arrays for direct plotting. For interaction terms it gives { features, binCounts, scores }.
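For evaluating a univariate shape function at an arbitrary input, a step-function lookup is the natural reading of `{ x, y }`. This sketch assumes `x` holds ascending cut points and `y` holds one score per bin (`y.length === x.length + 1`); the library's exact binning semantics may differ, so treat this as illustrative.

```javascript
// Sketch: evaluate a univariate shape function at a value, assuming
// `cuts` is ascending cut points and `scores` has one entry per bin.
function evalShape(cuts, scores, value) {
  let bin = 0;
  while (bin < cuts.length && value >= cuts[bin]) bin++;
  return scores[bin];
}

const cuts = [1.0, 2.5];         // two cut points -> three bins (hypothetical)
const scores = [-0.4, 0.1, 0.6]; // hypothetical per-bin scores

const lo = evalShape(cuts, scores, 0.5); // below first cut -> -0.4
const hi = evalShape(cuts, scores, 3.0); // above last cut  -> 0.6
```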

API

EBMModel.create(params?) -> Promise<EBMModel>

Async factory. Loads WASM module on first call, returns a ready-to-use model.

model.fit(X, y) -> this

Train the model. Returns this.

  • X -- number[][] or { data: Float64Array, rows, cols }
  • y -- number[] or Float64Array

Task is auto-detected: all-integer labels are treated as classification, non-integer labels as regression. Override by passing the objective param ('classification' or 'regression') explicitly.
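The detection rule as described can be sketched in a few lines; the library's internal check may differ in details, so this is only the stated behavior, not the implementation.

```javascript
// Sketch of the documented auto-detection rule:
// all-integer labels -> classification, anything else -> regression.
function detectObjective(y) {
  return Array.from(y).every(Number.isInteger) ? 'classification' : 'regression';
}

const a = detectObjective([0, 1, 1, 0]);    // 'classification'
const b = detectObjective([0.5, 1.2, 3.3]); // 'regression'
```

Note the implication: integer-valued regression targets (e.g. counts) would be auto-detected as classification, which is exactly the case where the explicit objective override matters.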

model.predict(X) -> Float64Array

Predict class labels (classification) or values (regression).

model.predictProba(X) -> Float64Array

Predict class probabilities. Returns flat array of shape nSamples * nClasses (row-major). Binary: [P(0), P(1)] per sample. Classification only.
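Reading the flat row-major array is a one-line index computation: P(class k | sample i) lives at `i * nClasses + k`. The values below are hypothetical output, used only to show the indexing.

```javascript
// Index into a flat row-major probability array.
function probOf(probs, nClasses, sample, cls) {
  return probs[sample * nClasses + cls];
}

// Hypothetical predictProba output: 2 samples, binary classification.
const probs = Float64Array.from([0.9, 0.1, 0.2, 0.8]);
const p = probOf(probs, 2, 1, 1); // P(class 1 | sample 1) -> 0.8
```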

model.score(X, y) -> number

Accuracy (classification) or R-squared (regression).
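For reference, the two metrics on plain arrays (standard definitions, not the library's internal code):

```javascript
// Accuracy: fraction of exact label matches.
function accuracy(pred, y) {
  let hits = 0;
  for (let i = 0; i < y.length; i++) if (pred[i] === y[i]) hits++;
  return hits / y.length;
}

// R-squared: 1 - residual sum of squares / total sum of squares.
function r2(pred, y) {
  const mean = y.reduce((a, b) => a + b, 0) / y.length;
  let ssRes = 0, ssTot = 0;
  for (let i = 0; i < y.length; i++) {
    ssRes += (y[i] - pred[i]) ** 2;
    ssTot += (y[i] - mean) ** 2;
  }
  return 1 - ssRes / ssTot;
}

const acc = accuracy([0, 1, 1], [0, 1, 0]); // 2 of 3 correct
const fit = r2([1, 2, 3], [1, 2, 3]);       // perfect fit -> 1
```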

model.explain(X) -> object

Local explanations: per-sample, per-term additive contributions.

Returns { intercept, contributions, termNames, nTerms, nSamples, nScores }. The contributions array is flat: nSamples * nTerms * nScores. For each sample: prediction = intercept + sum(contributions[sample]).
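Given that layout, the contribution for (sample s, term t, score k) lives at `s * nTerms * nScores + t * nScores + k`. The array below is hypothetical explain() output, used only to demonstrate the indexing.

```javascript
// Index into the flat contributions array (row-major over samples, terms, scores).
function contribAt(contributions, nTerms, nScores, s, t, k = 0) {
  return contributions[s * nTerms * nScores + t * nScores + k];
}

// Hypothetical output: 2 samples, 3 terms, 1 score each.
const contributions = Float64Array.from([0.1, 0.2, 0.3, -0.1, 0.0, 0.4]);
const c = contribAt(contributions, 3, 1, 1, 2); // sample 1, term 2 -> 0.4
```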

model.featureImportances() -> Float64Array

Global feature importances: mean absolute score per term. Length equals number of terms.

model.getShapeFunction(termIndex) -> object

Shape function for a single term, useful for visualization.

  • Univariate: { x: Float64Array, y: Float64Array } -- bin edges and scores
  • Interaction: { features, binCounts, scores, nScores } -- 2D grid data

model.save() / EBMModel.load(buffer)

Save to / load from Uint8Array (WLRN bundle with JSON model blob).

model.dispose()

Free WASM memory. Required. Idempotent.

model.getParams() / model.setParams(p)

Get/set hyperparameters. Enables AutoML grid search and cloning.

EBMModel.defaultSearchSpace()

Returns default hyperparameter search space for AutoML.

Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| objective | auto | 'classification' or 'regression'. Auto-detected from y if not set |
| learningRate | 0.01 | Boosting learning rate |
| maxRounds | 5000 | Maximum boosting rounds |
| earlyStoppingRounds | 50 | Rounds without improvement before stopping |
| maxLeaves | 3 | Maximum leaves per boosting step |
| minSamplesLeaf | 2 | Minimum samples per leaf |
| maxInteractions | 10 | Number of interaction terms (0 = no interactions) |
| maxBins | 256 | Maximum bins per feature |
| minSamplesBin | 1 | Minimum samples per bin |
| outerBags | 8 | Number of outer bags |
| innerBags | 0 | Number of inner bags |
| regAlpha | 0 | L1 regularization |
| regLambda | 0 | L2 regularization |
| seed | 42 | Random seed |

Cross-runtime compatibility

Models saved in JS load and predict identically in the Python wlearn package. WLRN bundles round-trip between JS and Python (blob bytes are preserved). The Python wrapper uses pure numpy for prediction (no native InterpretML dependency needed at inference time).

import wlearn.ebm
from wlearn import load

model = load(open('model.wlrn', 'rb').read())
preds = model.predict(X)
model.save()  # produces identical bundle

The Python wrapper also supports training via the interpret package:

from wlearn.ebm import EBMModel

model = EBMModel.create({'seed': 42, 'maxRounds': 5000})
model.fit(X_train, y_train)  # requires: pip install interpret
preds = model.predict(X_test)
bundle = model.save()  # WLRN bundle loadable from JS

Resource management

WASM heap memory is not garbage collected. Call .dispose() on every model when done. A FinalizationRegistry safety net warns if you forget, but do not rely on it.
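A try/finally wrapper makes the dispose() call hard to forget. This is a generic pattern sketch, shown with a stub object standing in for a trained model so it is self-contained; `withModel` is not part of the package API.

```javascript
// With-style helper: dispose() always runs, even if the callback throws.
function withModel(model, fn) {
  try {
    return fn(model);
  } finally {
    model.dispose(); // runs on both normal return and throw
  }
}

// Stub standing in for an EBMModel instance:
let disposed = false;
const stub = {
  predict: () => Float64Array.from([1]),
  dispose: () => { disposed = true; },
};

const out = withModel(stub, (m) => m.predict()[0]);
```

For async work, make the helper async and `await fn(model)` inside the try so disposal still happens after the promise settles.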

Build from source

Requires Emscripten (emsdk) activated.

git clone --recurse-submodules https://github.com/wlearn-org/ebm-wasm
cd ebm-wasm
bash scripts/build-wasm.sh
node --experimental-vm-modules test/test.js

If you already cloned without --recurse-submodules:

git submodule update --init

Upstream

Based on InterpretML libebm v0.7.5 (MIT license). See UPSTREAM.md for details.

License

MIT (same as upstream InterpretML)