explainai-core (v1.0.2)

Core explainability algorithms and model interfaces for ExplainAI.


Installation

npm install explainai-core

Features

  • 🔍 SHAP (SHapley Additive exPlanations) - Model-agnostic feature importance
  • 🎯 LIME (Local Interpretable Model-agnostic Explanations) - Local explanations
  • 🌐 Universal Model Support - Works with any prediction function
  • ⚡ High Performance - Optimized sampling and computation
  • 📦 Zero Dependencies - Lightweight and standalone
  • 🔒 Privacy-First - All computation runs locally

Quick Start

import { explain, createApiModel } from 'explainai-core';

// Create a model that calls your API
const model = createApiModel(
  {
    endpoint: 'http://localhost:3000/predict',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'api'
  }
);

// Generate SHAP explanation
const explanation = await explain(model, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], {
  method: 'shap',
  config: {
    samples: 100
  }
});

console.log(explanation);
// {
//   method: 'shap',
//   featureImportance: [
//     { feature: 0, importance: 0.45, ... },
//     { feature: 1, importance: -0.23, ... },
//     ...
//   ],
//   prediction: { value: 42.5 },
//   baseValue: 38.2
// }

API Overview

Main Functions

explain(model, input, options)

Generate explanations for model predictions.

const explanation = await explain(model, input, {
  method: 'shap', // or 'lime'
  config: {
    samples: 100,
    featureNames: ['feature1', 'feature2'] // optional
  }
});

createApiModel(apiConfig, metadata)

Create a model wrapper for REST API endpoints.

const model = createApiModel(
  {
    endpoint: 'https://api.example.com/predict',
    method: 'POST',
    headers: { 'Authorization': 'Bearer token' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'classification',
    provider: 'api'
  }
);

createCustomModel(predictFn, metadata)

Wrap any prediction function.

const model = createCustomModel(
  async (input: number[]) => {
    // Your custom prediction logic
    return input.reduce((a, b) => a + b, 0);
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'custom'
  }
);

Explainability Methods

SHAP (Shapley Values)

import { explainWithShap } from 'explainai-core';

const explanation = await explainWithShap(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', ...]
});

Best for:

  • Global feature importance
  • Understanding overall model behavior
  • Additive feature contributions
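
The additive-contributions property above can be illustrated with a self-contained toy sketch (plain TypeScript, no library calls — this is the general Shapley idea, not this package's implementation). For a two-feature model, the exact Shapley value of each feature is its marginal contribution averaged over both orderings, and the two values sum to the difference between the prediction and the baseline prediction:

```typescript
// Toy sketch (self-contained, no library calls): exact Shapley values for a
// two-feature model. Each value averages the feature's marginal contribution
// over both possible orderings in which features can be "switched on".
type Predict = (x: number[]) => number;

function shapley2(f: Predict, x: number[], baseline: number[]): [number, number] {
  const [b0, b1] = baseline;
  const [x0, x1] = x;
  const phi0 =
    0.5 * (f([x0, b1]) - f([b0, b1])) + // feature 0 switched on first
    0.5 * (f([x0, x1]) - f([b0, x1]));  // feature 0 switched on second
  const phi1 =
    0.5 * (f([b0, x1]) - f([b0, b1])) +
    0.5 * (f([x0, x1]) - f([x0, b1]));
  return [phi0, phi1];
}

const f = (z: number[]) => 3 * z[0] + 2 * z[1];
console.log(shapley2(f, [1, 1], [0, 0])); // → [ 3, 2 ]
// Additivity: phi0 + phi1 === f(x) - f(baseline) = 5 - 0
```

Because the toy model is linear, each feature's Shapley value equals its coefficient times its distance from the baseline; for nonlinear models the orderings matter, which is why real implementations sample many of them.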

LIME (Local Interpretable Model)

import { explainWithLime } from 'explainai-core';

const explanation = await explainWithLime(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', ...]
});

Best for:

  • Local explanations (individual predictions)
  • Understanding specific decisions
  • Fast approximations
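
The core idea behind LIME can also be sketched in a few lines of plain TypeScript (an illustration of the general technique, not this package's internals): evaluate the model on perturbed samples near the input, then fit a small weighted linear surrogate where nearer samples count more. The surrogate's coefficients are the local explanation:

```typescript
// Toy sketch of the idea behind LIME (self-contained; not this package's
// internals): fit a weighted linear surrogate to the model's behaviour on
// perturbed samples near the input, weighting nearby samples more.
type Predict = (x: number[]) => number;

// Solve a small linear system A w = b by Gauss–Jordan elimination.
function solve(A: number[][], b: number[]): number[] {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < n; col++) {
    let p = col; // partial pivoting for numerical stability
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[p][col])) p = r;
    [M[col], M[p]] = [M[p], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const k = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= k * M[col][c];
    }
  }
  return M.map((row, i) => row[n] / row[i]);
}

// Fit y ≈ w0 + w1*z0 + w2*z1 by weighted least squares around x.
function limeSurrogate(f: Predict, x: number[], samples: number[][]): number[] {
  const design = (z: number[]) => [1, z[0], z[1]];
  const weight = (z: number[]) => // proximity kernel: nearer samples count more
    Math.exp(-z.reduce((s, zi, i) => s + (zi - x[i]) ** 2, 0));
  const A = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  const b = [0, 0, 0];
  for (const z of samples) {
    const row = design(z), wz = weight(z), y = f(z);
    for (let i = 0; i < 3; i++) {
      b[i] += wz * row[i] * y;
      for (let j = 0; j < 3; j++) A[i][j] += wz * row[i] * row[j];
    }
  }
  return solve(A, b); // [intercept, coefficient0, coefficient1]
}

const coef = limeSurrogate((z: number[]) => 3 * z[0] + 2 * z[1], [1, 1], [
  [0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]
]);
// For an already-linear model the surrogate recovers it exactly:
// coef[1] ≈ 3, coef[2] ≈ 2
```

For a nonlinear model the surrogate only matches locally, which is exactly what "local explanations" means in the list above.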

Model Types

Classification Models

const model = createApiModel(apiConfig, {
  modelType: 'classification',
  inputShape: [784], // e.g., a 28x28 image, flattened
  outputShape: [10], // 10 class probabilities
  provider: 'api'
});

Regression Models

const model = createApiModel(apiConfig, {
  modelType: 'regression',
  inputShape: [13], // e.g., housing features
  outputShape: [1], // single value prediction
  provider: 'api'
});

Advanced Usage

Custom Prediction Function

import * as tf from '@tensorflow/tfjs';
import { createCustomModel, explain } from 'explainai-core';

// Wrap a TensorFlow.js model
const tfModel = await tf.loadLayersModel('model.json');
const model = createCustomModel(
  async (input: number[]) => {
    const tensor = tf.tensor2d([input]);
    const prediction = tfModel.predict(tensor) as tf.Tensor;
    return prediction.dataSync()[0];
  },
  metadata // a ModelMetadata object, as in the examples above
);

const explanation = await explain(model, input, { method: 'shap' });

Batch Predictions

import { batchPredict } from 'explainai-core';

const inputs = [
  [1, 2, 3, 4, 5],
  [6, 7, 8, 9, 10],
  [11, 12, 13, 14, 15]
];

const predictions = await batchPredict(model, inputs);

TypeScript Support

Full TypeScript definitions included:

import type {
  Model,
  Explanation,
  ExplainabilityMethod,
  FeatureImportance,
  ModelMetadata,
  InputData,
  PredictionResult
} from 'explainai-core';

Performance Tips

  1. Sample Size: More samples = more accurate, but slower
     • SHAP: 100-500 samples for most cases
     • LIME: 50-200 samples usually sufficient
  2. Batch Processing: Use batchPredict for multiple inputs
  3. Caching: Cache model predictions when possible
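
To make the samples-vs-accuracy tradeoff concrete, here is a self-contained Monte Carlo sketch (the general estimation technique, not this package's internals): Shapley values are estimated by averaging each feature's marginal contribution over random feature orderings, so more orderings mean lower variance at the cost of more model calls.

```typescript
// Sketch of why the `samples` knob matters (general Monte Carlo technique,
// not this package's internals): estimate Shapley values by averaging each
// feature's marginal contribution over random feature orderings. More
// orderings → lower variance, at the cost of more model calls.
type Predict = (x: number[]) => number;

function monteCarloShap(
  f: Predict, x: number[], baseline: number[], samples: number
): number[] {
  const n = x.length;
  const phi = new Array(n).fill(0);
  let seed = 42; // tiny Park–Miller LCG so runs are reproducible
  const rand = () => (seed = (seed * 48271) % 2147483647) / 2147483647;
  for (let s = 0; s < samples; s++) {
    const order = [...Array(n).keys()]; // random permutation (Fisher–Yates)
    for (let i = n - 1; i > 0; i--) {
      const j = Math.floor(rand() * (i + 1));
      [order[i], order[j]] = [order[j], order[i]];
    }
    const z = [...baseline];
    let prev = f(z);
    for (const i of order) {
      z[i] = x[i]; // switch feature i from baseline to actual value
      const cur = f(z);
      phi[i] += (cur - prev) / samples;
      prev = cur;
    }
  }
  return phi;
}

const phi = monteCarloShap(z => 3 * z[0] + 2 * z[1] + z[2], [1, 1, 1], [0, 0, 0], 50);
// For an additive model every ordering gives the same marginals, so even a
// small sample count recovers ≈ [3, 2, 1]; nonlinear models need more samples.
```

Each sampled ordering costs one model call per feature, which is why caching predictions and batching inputs (tips 2 and 3) pay off quickly.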

Error Handling

import { ExplainAIError } from 'explainai-core';

try {
  const explanation = await explain(model, input, options);
} catch (error) {
  if (error instanceof ExplainAIError) {
    console.error('ExplainAI Error:', error.message);
    console.error('Details:', error.details);
  }
}

Requirements

  • Node.js ≥18.0.0
  • TypeScript ≥5.0.0 (for TypeScript projects)

License

MIT - see LICENSE

Contributing

Contributions welcome! See the Contributing Guide.

Author

Yash Gupta (@gyash1512)

Repository

github.com/gyash1512/ExplainAI