explainai-node
v1.0.2
Node.js CLI tools and server-side utilities for ExplainAI.
Installation
# Global installation (for CLI usage)
npm install -g explainai-node
# Local installation (for programmatic usage)
npm install explainai-node
Features
- 🖥️ CLI Tools - Command-line interface for model explanations
- 📁 File I/O - Read/write explanations from files
- 🔧 Node.js Utilities - Server-side helper functions
- 📊 Batch Processing - Explain multiple inputs at once
- 🚀 Fast - Optimized for server environments
- 📦 Re-exports Core - All core functionality included
CLI Usage
Generate Explanations
# Basic SHAP explanation
explainai explain \
--method shap \
--input data.json \
--endpoint http://localhost:3000/predict \
--output results.json
# LIME explanation with custom samples
explainai explain \
--method lime \
--input features.json \
--endpoint https://api.example.com/predict \
--samples 200 \
--output lime-results.json
Validate Model Configuration
explainai validate \
--endpoint http://localhost:3000/predict \
--type classification \
--input-shape 1,784 \
--output-shape 1,10
CLI Options
explain command
- `--method <method>` - Explainability method: `shap` or `lime` (default: shap)
- `--input <file>` - Input data JSON file (required)
- `--endpoint <url>` - Model API endpoint (required)
- `--samples <number>` - Number of samples (default: 100)
- `--output <file>` - Output file for results (optional; prints to stdout if omitted)
validate command
- `--endpoint <url>` - Model API endpoint (required)
- `--type <type>` - Model type: `classification` or `regression` (default: classification)
- `--input-shape <shape>` - Input shape as comma-separated values (default: 1,10)
- `--output-shape <shape>` - Output shape as comma-separated values (default: 1)
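The shape options take comma-separated dimension lists such as `1,784`. A parser for them might look like this sketch (hypothetical helper for illustration; the CLI's actual parsing logic may differ):

```javascript
// Parse a comma-separated shape string such as "1,784" into a
// numeric dimension array. Hypothetical helper, not the package's code.
function parseShape(shape) {
  const dims = shape.split(',').map((s) => Number(s.trim()));
  if (dims.some((d) => !Number.isInteger(d) || d <= 0)) {
    throw new Error(`Invalid shape: "${shape}"`);
  }
  return dims;
}

console.log(parseShape('1,784')); // [ 1, 784 ]
```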
Input File Format
Create a JSON file with your input data:
[1.5, 2.3, 4.1, 0.8, 3.2, 1.9, 2.7, 4.5, 0.6, 3.8]
Or for multiple features:
{
"features": [1.5, 2.3, 4.1, 0.8, 3.2],
"metadata": {
"timestamp": "2025-10-27T00:00:00Z"
}
}
Output Format
Results are saved as JSON:
{
"method": "shap",
"featureImportance": [
{
"feature": 0,
"importance": 0.452,
"name": "feature_0"
},
{
"feature": 1,
"importance": -0.234,
"name": "feature_1"
}
],
"prediction": {
"value": 42.5
},
"baseValue": 38.2
}
Programmatic Usage
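Results in this shape are easy to post-process with plain Node.js. For instance, this standalone sketch (independent of the package) ranks features by absolute importance:

```javascript
// Rank features from an explanation result by absolute importance.
// Works on the output format shown above; standalone sketch.
function rankFeatures(result) {
  return [...result.featureImportance].sort(
    (a, b) => Math.abs(b.importance) - Math.abs(a.importance)
  );
}

const result = {
  method: 'shap',
  featureImportance: [
    { feature: 0, importance: 0.452, name: 'feature_0' },
    { feature: 1, importance: -0.234, name: 'feature_1' }
  ]
};
console.log(rankFeatures(result).map((f) => f.name));
// [ 'feature_0', 'feature_1' ]
```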
Node.js Server
import { explain, createApiModel } from 'explainai-node';
import express from 'express';
const app = express();
app.use(express.json());
app.post('/explain', async (req, res) => {
const { input, modelEndpoint } = req.body;
const model = createApiModel(
{ endpoint: modelEndpoint },
{
inputShape: [input.length],
outputShape: [1],
modelType: 'regression',
provider: 'api'
}
);
const explanation = await explain(model, input, {
method: 'shap',
config: { samples: 100 }
});
res.json(explanation);
});
app.listen(3000, () => {
console.log('Explanation API running on port 3000');
});
Batch Processing
import { explain, createApiModel } from 'explainai-node';
import { readFile, writeFile } from 'fs/promises';
async function batchExplain() {
// Read multiple inputs
const inputs = JSON.parse(await readFile('inputs.json', 'utf-8'));
// Model metadata (shapes shown are illustrative)
const metadata = {
inputShape: [5],
outputShape: [1],
modelType: 'regression',
provider: 'api'
};
const model = createApiModel(
{ endpoint: 'http://localhost:3000/predict' },
metadata
);
// Process in parallel
const explanations = await Promise.all(
inputs.map(input =>
explain(model, input, { method: 'shap' })
)
);
// Save results
await writeFile(
'explanations.json',
JSON.stringify(explanations, null, 2)
);
}
batchExplain();
File Processing
import { readFile, writeFile } from 'fs/promises';
import { explain, createCustomModel } from 'explainai-node';
async function processFile(inputPath: string, outputPath: string) {
// Read input
const inputData = JSON.parse(await readFile(inputPath, 'utf-8'));
// Create model
// Model metadata (shapes shown are illustrative)
const metadata = {
inputShape: [inputData.length],
outputShape: [1],
modelType: 'regression'
};
const model = createCustomModel(
async (input: number[]) => {
// Your prediction logic
return input.reduce((a, b) => a + b, 0);
},
metadata
);
// Generate explanation
const result = await explain(model, inputData, {
method: 'lime',
config: { samples: 200 }
});
// Write output
await writeFile(outputPath, JSON.stringify(result, null, 2));
console.log(`✅ Results saved to ${outputPath}`);
}
Integration with Build Tools
npm Scripts
{
"scripts": {
"explain": "explainai explain --input data.json --endpoint http://localhost:3000/predict",
"validate": "explainai validate --endpoint http://localhost:3000/predict"
}
}
CI/CD Pipeline
# .github/workflows/explain.yml
name: Model Explanation
on: [push]
jobs:
explain:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install explainai-node
run: npm install -g explainai-node
- name: Generate explanations
run: |
explainai explain \
--input test-data.json \
--endpoint ${{ secrets.MODEL_ENDPOINT }} \
--output explanations.json
- name: Upload results
uses: actions/upload-artifact@v4
with:
name: explanations
path: explanations.json
TypeScript Support
Full TypeScript definitions included:
import type {
CommandOptions,
ExplainOptions,
ValidateOptions
} from 'explainai-node';
// Re-exports all core types
import type {
Model,
Explanation,
ExplainabilityMethod
} from 'explainai-node';
Error Handling
import { explain, ExplainAIError } from 'explainai-node';
try {
const explanation = await explain(model, input, options);
} catch (error) {
if (error instanceof ExplainAIError) {
console.error('ExplainAI Error:', error.message);
process.exit(1);
}
throw error;
}
Environment Variables
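Environment variables act as fallbacks for CLI flags. The precedence (explicit flag, then environment variable, then built-in default) might be resolved like this hypothetical sketch, not the package's actual code:

```javascript
// Resolve a CLI option: explicit flag value wins, then the named
// environment variable, then a built-in default. Illustrative only.
function resolveOption(flagValue, envName, fallback) {
  if (flagValue !== undefined) return flagValue;
  if (process.env[envName] !== undefined) return process.env[envName];
  return fallback;
}

process.env.EXPLAINAI_SAMPLES = '200';
console.log(resolveOption(undefined, 'EXPLAINAI_SAMPLES', '100')); // 200
```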
# Set default model endpoint
export EXPLAINAI_ENDPOINT=http://localhost:3000/predict
# Set default samples
export EXPLAINAI_SAMPLES=200
# Use in CLI (endpoint not required if env var set)
explainai explain --input data.json
Docker Usage
FROM node:20-alpine
# Install explainai-node
RUN npm install -g explainai-node
# Copy data
COPY data.json /data/
# Run explanation
CMD ["explainai", "explain", \
"--input", "/data/data.json", \
"--endpoint", "http://model-api:3000/predict", \
"--output", "/data/results.json"]
Performance Tips
- Parallel Processing: Use `Promise.all()` for batch operations
- Caching: Cache model responses when possible
- Sample Size: Balance accuracy vs. speed (100-200 samples usually optimal)
- Stream Large Files: Use streams for large datasets
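For very large batches, unbounded Promise.all() can overwhelm the model endpoint. A small concurrency limiter keeps parallelism in check; this is a generic sketch with no package dependency, and the limit value is illustrative:

```javascript
// Run async tasks over `items` with at most `limit` in flight at once.
// Generic utility sketch; pair the callback with explain() in practice.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim an index before awaiting
      results[i] = await fn(items[i], i);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}

// Example: square numbers with at most 2 concurrent tasks
mapWithLimit([1, 2, 3, 4], 2, async (x) => x * x).then(console.log);
// [ 1, 4, 9, 16 ]
```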
Related Packages
- explainai-core - Core algorithms (included)
- explainai-ui - React visualization components
- explainai-playground - Interactive demo
Requirements
- Node.js ≥18.0.0
- npm ≥9.0.0
License
MIT - see LICENSE
Contributing
Contributions welcome! See Contributing Guide
Author
Yash Gupta (@gyash1512)
