
explainai-node v1.0.2

Node.js CLI tools and server-side utilities for ExplainAI.

Installation

# Global installation (for CLI usage)
npm install -g explainai-node

# Local installation (for programmatic usage)
npm install explainai-node

Features

  • 🖥️ CLI Tools - Command-line interface for model explanations
  • 📁 File I/O - Read/write explanations from files
  • 🔧 Node.js Utilities - Server-side helper functions
  • 📊 Batch Processing - Explain multiple inputs at once
  • 🚀 Fast - Optimized for server environments
  • 📦 Re-exports Core - All core functionality included

CLI Usage

Generate Explanations

# Basic SHAP explanation
explainai explain \
  --method shap \
  --input data.json \
  --endpoint http://localhost:3000/predict \
  --output results.json

# LIME explanation with custom samples
explainai explain \
  --method lime \
  --input features.json \
  --endpoint https://api.example.com/predict \
  --samples 200 \
  --output lime-results.json

Validate Model Configuration

explainai validate \
  --endpoint http://localhost:3000/predict \
  --type classification \
  --input-shape 1,784 \
  --output-shape 1,10

CLI Options

explain command

  • --method <method> - Explainability method: shap or lime (default: shap)
  • --input <file> - Input data JSON file (required)
  • --endpoint <url> - Model API endpoint (required)
  • --samples <number> - Number of samples (default: 100)
  • --output <file> - Output file for results (optional, prints to stdout if omitted)

validate command

  • --endpoint <url> - Model API endpoint (required)
  • --type <type> - Model type: classification or regression (default: classification)
  • --input-shape <shape> - Input shape as comma-separated values (default: 1,10)
  • --output-shape <shape> - Output shape as comma-separated values (default: 1)
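
The shape flags are plain comma-separated integers. A small parser (illustrative only, not part of the package API) shows how a string like `1,784` maps to a numeric shape array:

```typescript
// Hypothetical helper mirroring how --input-shape / --output-shape
// values could be parsed: "1,784" becomes [1, 784].
function parseShape(flag: string): number[] {
  const shape = flag.split(',').map((part) => Number(part.trim()));
  // Reject anything that is not a positive integer (e.g. "1,abc" or "1,,2").
  if (shape.some((n) => !Number.isInteger(n) || n <= 0)) {
    throw new Error(`Invalid shape: "${flag}"`);
  }
  return shape;
}

parseShape('1,784'); // [1, 784]
parseShape('1,10');  // [1, 10]
```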

Input File Format

Create a JSON file with your input data:

[1.5, 2.3, 4.1, 0.8, 3.2, 1.9, 2.7, 4.5, 0.6, 3.8]

Or for multiple features:

{
  "features": [1.5, 2.3, 4.1, 0.8, 3.2],
  "metadata": {
    "timestamp": "2025-10-27T00:00:00Z"
  }
}
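
Either format resolves to the same flat feature vector. A small helper (a sketch, not the package's actual API) shows one way to normalize both shapes:

```typescript
// Hypothetical normalizer for the two input formats documented above:
// a bare number array, or an object with a "features" array.
type InputFile = number[] | { features: number[]; metadata?: Record<string, unknown> };

function normalizeInput(data: InputFile): number[] {
  if (Array.isArray(data)) return data;
  if (Array.isArray(data.features)) return data.features;
  throw new Error('Input must be a number array or an object with a "features" array');
}

normalizeInput([1.5, 2.3, 4.1]);               // [1.5, 2.3, 4.1]
normalizeInput({ features: [1.5, 2.3, 4.1] }); // [1.5, 2.3, 4.1]
```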

Output Format

Results are saved as JSON:

{
  "method": "shap",
  "featureImportance": [
    {
      "feature": 0,
      "importance": 0.452,
      "name": "feature_0"
    },
    {
      "feature": 1,
      "importance": -0.234,
      "name": "feature_1"
    }
  ],
  "prediction": {
    "value": 42.5
  },
  "baseValue": 38.2
}
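
Once you have a results file, the featureImportance entries can be ranked by absolute magnitude to surface the most influential features (a negative importance pushes the prediction down, but still matters). A short sketch, with field names taken from the sample output above:

```typescript
// Types mirror the documented output format above.
interface FeatureImportance {
  feature: number;
  importance: number;
  name: string;
}
interface ExplainResult {
  method: string;
  featureImportance: FeatureImportance[];
}

// Sort by absolute importance, most influential feature first.
function rankFeatures(result: ExplainResult): FeatureImportance[] {
  return [...result.featureImportance].sort(
    (a, b) => Math.abs(b.importance) - Math.abs(a.importance)
  );
}
```

For example, with the sample output above, `rankFeatures` places `feature_0` (importance 0.452) ahead of `feature_1` (importance -0.234).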

Programmatic Usage

Node.js Server

import { explain, createApiModel } from 'explainai-node';
import express from 'express';
import express from 'express';

const app = express();
app.use(express.json());

app.post('/explain', async (req, res) => {
  const { input, modelEndpoint } = req.body;

  const model = createApiModel(
    { endpoint: modelEndpoint },
    {
      inputShape: [input.length],
      outputShape: [1],
      modelType: 'regression',
      provider: 'api'
    }
  );

  const explanation = await explain(model, input, {
    method: 'shap',
    config: { samples: 100 }
  });

  res.json(explanation);
});

app.listen(3000, () => {
  console.log('Explanation API running on port 3000');
});

Batch Processing

import { explain, createApiModel } from 'explainai-node';
import { readFile, writeFile } from 'fs/promises';

async function batchExplain() {
  // Read multiple inputs
  const inputs = JSON.parse(await readFile('inputs.json', 'utf-8'));
  
  const model = createApiModel(
    { endpoint: 'http://localhost:3000/predict' },
    {
      // Model metadata (shapes are illustrative; match your model)
      inputShape: [10],
      outputShape: [1],
      modelType: 'regression',
      provider: 'api'
    }
  );

  // Process in parallel
  const explanations = await Promise.all(
    inputs.map(input => 
      explain(model, input, { method: 'shap' })
    )
  );

  // Save results
  await writeFile(
    'explanations.json',
    JSON.stringify(explanations, null, 2)
  );
}

batchExplain().catch(console.error);

File Processing

import { readFile, writeFile } from 'fs/promises';
import { explain, createCustomModel } from 'explainai-node';

async function processFile(inputPath: string, outputPath: string) {
  // Read input
  const inputData = JSON.parse(await readFile(inputPath, 'utf-8'));

  // Create model
  const model = createCustomModel(
    async (input: number[]) => {
      // Your prediction logic
      return input.reduce((a, b) => a + b, 0);
    },
    {
      // Model metadata (shapes are illustrative; match your model)
      inputShape: [inputData.length],
      outputShape: [1],
      modelType: 'regression'
    }
  );

  // Generate explanation
  const result = await explain(model, inputData, {
    method: 'lime',
    config: { samples: 200 }
  });

  // Write output
  await writeFile(outputPath, JSON.stringify(result, null, 2));
  console.log(`✅ Results saved to ${outputPath}`);
}

Integration with Build Tools

npm Scripts

{
  "scripts": {
    "explain": "explainai explain --input data.json --endpoint http://localhost:3000/predict",
    "validate": "explainai validate --endpoint http://localhost:3000/predict"
  }
}

CI/CD Pipeline

# .github/workflows/explain.yml
name: Model Explanation

on: [push]

jobs:
  explain:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      
      - name: Install explainai-node
        run: npm install -g explainai-node
      
      - name: Generate explanations
        run: |
          explainai explain \
            --input test-data.json \
            --endpoint ${{ secrets.MODEL_ENDPOINT }} \
            --output explanations.json
      
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: explanations
          path: explanations.json

TypeScript Support

Full TypeScript definitions included:

import type { 
  CommandOptions,
  ExplainOptions,
  ValidateOptions
} from 'explainai-node';

// Re-exports all core types
import type {
  Model,
  Explanation,
  ExplainabilityMethod
} from 'explainai-node';

Error Handling

import { explain, ExplainAIError } from 'explainai-node';

try {
  const explanation = await explain(model, input, options);
} catch (error) {
  if (error instanceof ExplainAIError) {
    console.error('ExplainAI Error:', error.message);
    process.exit(1);
  }
  throw error;
}

Environment Variables

# Set default model endpoint
export EXPLAINAI_ENDPOINT=http://localhost:3000/predict

# Set default samples
export EXPLAINAI_SAMPLES=200

# Use in CLI (endpoint not required if env var set)
explainai explain --input data.json

Docker Usage

FROM node:20-alpine

# Install explainai-node
RUN npm install -g explainai-node

# Copy data
COPY data.json /data/

# Run explanation
CMD ["explainai", "explain", \
     "--input", "/data/data.json", \
     "--endpoint", "http://model-api:3000/predict", \
     "--output", "/data/results.json"]

Performance Tips

  1. Parallel Processing: Use Promise.all() for batch operations
  2. Caching: Cache model responses when possible
  3. Sample Size: Balance accuracy vs. speed (100-200 samples is usually a good trade-off)
  4. Stream Large Files: Use streams for large datasets
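
Tip 2 can be sketched as a memoizing wrapper around an async predict function, keyed by the JSON-serialized input. This is an illustrative sketch, not a package API; the `withCache` name and the shape of `Predict` are assumptions:

```typescript
// Memoize an async predict function so repeated identical inputs
// hit the cache instead of the model endpoint.
type Predict = (input: number[]) => Promise<number>;

function withCache(predict: Predict): Predict {
  const cache = new Map<string, Promise<number>>();
  return (input) => {
    const key = JSON.stringify(input);
    let hit = cache.get(key);
    if (!hit) {
      // Store the promise itself, so concurrent calls with the
      // same input share one in-flight request.
      hit = predict(input);
      cache.set(key, hit);
    }
    return hit;
  };
}
```

A wrapped function can be passed to createCustomModel (as in the File Processing example above). Note that caching only pays off when inputs repeat exactly, e.g. when explaining the same records more than once.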


Requirements

  • Node.js ≥18.0.0
  • npm ≥9.0.0

License

MIT - see LICENSE

Contributing

Contributions welcome! See Contributing Guide

Author

Yash Gupta (@gyash1512)

Repository

github.com/gyash1512/ExplainAI