explainai-core
v1.0.2
Core explainability algorithms and model interfaces for ExplainAI
Installation
npm install explainai-core
Features
- 🔍 SHAP (SHapley Additive exPlanations) - Model-agnostic feature importance
- 🎯 LIME (Local Interpretable Model-agnostic Explanations) - Local explanations
- 🌐 Universal Model Support - Works with any prediction function
- ⚡ High Performance - Optimized sampling and computation
- 📦 Zero Dependencies - Lightweight and standalone
- 🔒 Privacy-First - All computation runs locally
Quick Start
import { explain, createApiModel } from 'explainai-core';
// Create a model that calls your API
const model = createApiModel(
  {
    endpoint: 'http://localhost:3000/predict',
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'api'
  }
);
// Generate SHAP explanation
const explanation = await explain(model, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], {
  method: 'shap',
  config: {
    samples: 100
  }
});
console.log(explanation);
// {
//   method: 'shap',
//   featureImportance: [
//     { feature: 0, importance: 0.45, ... },
//     { feature: 1, importance: -0.23, ... },
//     ...
//   ],
//   prediction: { value: 42.5 },
//   baseValue: 38.2
// }
API Overview
Main Functions
explain(model, input, options)
Generate explanations for model predictions.
const explanation = await explain(model, input, {
  method: 'shap' | 'lime',
  config: {
    samples: 100,
    featureNames?: string[]
  }
});
createApiModel(apiConfig, metadata)
Create a model wrapper for REST API endpoints.
const model = createApiModel(
  {
    endpoint: 'https://api.example.com/predict',
    method: 'POST',
    headers: { 'Authorization': 'Bearer token' }
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'classification',
    provider: 'api'
  }
);
createCustomModel(predictFn, metadata)
Wrap any prediction function.
const model = createCustomModel(
  async (input: number[]) => {
    // Your custom prediction logic
    return input.reduce((a, b) => a + b, 0);
  },
  {
    inputShape: [10],
    outputShape: [1],
    modelType: 'regression',
    provider: 'custom'
  }
);
Explainability Methods
SHAP (Shapley Values)
import { explainWithShap } from 'explainai-core';
const explanation = await explainWithShap(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', ...]
});
Best for:
- Global feature importance
- Understanding overall model behavior
- Additive feature contributions
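Because SHAP importances are additive and signed, a common next step is ranking features by absolute importance. Below is a small sketch of that step, using the explanation shape shown in the Quick Start output; the local `FeatureImportance` interface and `topFeatures` helper here are illustrative stand-ins, not exports of this package.

```typescript
// Local stand-in for the featureImportance entries shown in the Quick Start output.
interface FeatureImportance {
  feature: number;
  importance: number;
}

// Rank features by |importance| so strong negative contributions rank too.
function topFeatures(items: FeatureImportance[], k: number): FeatureImportance[] {
  return [...items]
    .sort((a, b) => Math.abs(b.importance) - Math.abs(a.importance))
    .slice(0, k);
}

const importances: FeatureImportance[] = [
  { feature: 0, importance: 0.45 },
  { feature: 1, importance: -0.23 },
  { feature: 2, importance: 0.05 }
];

console.log(topFeatures(importances, 2).map((f) => f.feature)); // → [ 0, 1 ]
```

Sorting a copy (`[...items]`) keeps the original explanation untouched, which matters if you later render the unranked importances in a chart.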
LIME (Local Interpretable Model)
import { explainWithLime } from 'explainai-core';
const explanation = await explainWithLime(model, input, {
  samples: 100,
  featureNames: ['feature1', 'feature2', ...]
});
Best for:
- Local explanations (individual predictions)
- Understanding specific decisions
- Fast approximations
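To build intuition for the `samples` option, LIME-style methods work by perturbing the input, scoring each perturbed neighbor with the model, and weighting neighbors by proximity before fitting a simple local surrogate. The sketch below illustrates only the sampling-and-weighting step with a synchronous toy model; it is a conceptual illustration under assumed names (`sampleNeighborhood`, a Gaussian-like proximity kernel), not this package's internal implementation.

```typescript
// Illustration of LIME-style neighborhood sampling (not the library's code):
// perturb the input, score each neighbor, and weight it by proximity.
type Neighbor = { point: number[]; prediction: number; weight: number };

function sampleNeighborhood(
  predict: (input: number[]) => number, // toy synchronous model for illustration
  input: number[],
  samples: number,
  scale = 0.1
): Neighbor[] {
  const neighbors: Neighbor[] = [];
  for (let i = 0; i < samples; i++) {
    // Add small uniform noise in [-scale, scale] to every feature.
    const point = input.map((x) => x + (Math.random() - 0.5) * 2 * scale);
    // Proximity kernel: neighbors close to the input get weight near 1.
    const dist = Math.hypot(...point.map((x, j) => x - input[j]));
    neighbors.push({
      point,
      prediction: predict(point),
      weight: Math.exp(-dist * dist)
    });
  }
  return neighbors;
}

// Toy model: sum of features.
const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);
const neighborhood = sampleNeighborhood(sum, [1, 2, 3], 50);
console.log(neighborhood.length); // → 50
```

More samples give the surrogate a denser picture of the model's local behavior, which is why larger `samples` values are more accurate but slower.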
Model Types
Classification Models
const model = createApiModel(apiConfig, {
  modelType: 'classification',
  inputShape: [784], // e.g., 28x28 image flattened
  outputShape: [10], // 10 classes
  provider: 'api'
});
Regression Models
const model = createApiModel(apiConfig, {
  modelType: 'regression',
  inputShape: [13], // e.g., housing features
  outputShape: [1], // single value prediction
  provider: 'api'
});
Advanced Usage
Custom Prediction Function
import { createCustomModel, explain } from 'explainai-core';
// Wrap TensorFlow.js model
const tfModel = await tf.loadLayersModel('model.json');
const model = createCustomModel(
  async (input: number[]) => {
    const tensor = tf.tensor2d([input]);
    const prediction = tfModel.predict(tensor) as tf.Tensor;
    const value = prediction.dataSync()[0];
    // Dispose tensors so the many calls made during sampling don't leak memory
    tensor.dispose();
    prediction.dispose();
    return value;
  },
  metadata
);
const explanation = await explain(model, input, { method: 'shap' });
Batch Predictions
import { batchPredict } from 'explainai-core';
const inputs = [
  [1, 2, 3, 4, 5],
  [6, 7, 8, 9, 10],
  [11, 12, 13, 14, 15]
];
const predictions = await batchPredict(model, inputs);
TypeScript Support
Full TypeScript definitions included:
import type {
  Model,
  Explanation,
  ExplainabilityMethod,
  FeatureImportance,
  ModelMetadata,
  InputData,
  PredictionResult
} from 'explainai-core';
Performance Tips
Sample Size: More samples = more accurate but slower
- SHAP: 100-500 samples for most cases
- LIME: 50-200 samples usually sufficient
Batch Processing: Use batchPredict for multiple inputs
Caching: Cache model predictions when possible
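One way to apply the caching tip is to memoize the prediction function before wrapping it, so that repeated calls on identical inputs during sampling are served from a cache. This is a minimal sketch around a plain synchronous function; the `withCache` helper is illustrative (not a package API), and the same idea applies to the async predict function you pass to createCustomModel.

```typescript
// Minimal memoization wrapper for a prediction function (illustrative only).
function withCache(predict: (input: number[]) => number) {
  const cache = new Map<string, number>();
  let hits = 0;
  const cachedPredict = (input: number[]): number => {
    const key = JSON.stringify(input); // inputs are plain number arrays
    const hit = cache.get(key);
    if (hit !== undefined) {
      hits += 1;
      return hit;
    }
    const value = predict(input);
    cache.set(key, value);
    return value;
  };
  return { predict: cachedPredict, stats: () => ({ hits, size: cache.size }) };
}

const cachedModel = withCache((xs) => xs.reduce((a, b) => a + b, 0));
cachedModel.predict([1, 2, 3]); // computed: 6
cachedModel.predict([1, 2, 3]); // served from cache
console.log(cachedModel.stats()); // → { hits: 1, size: 1 }
```

Keying on the serialized input works because inputs here are plain number arrays; for very large inputs a hash of the array may be a cheaper key.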
Error Handling
import { ExplainAIError } from 'explainai-core';
try {
  const explanation = await explain(model, input, options);
} catch (error) {
  if (error instanceof ExplainAIError) {
    console.error('ExplainAI Error:', error.message);
    console.error('Details:', error.details);
  }
}
Related Packages
- explainai-ui - React visualization components
- explainai-node - Node.js CLI tools
- explainai-playground - Interactive demo
Documentation
Requirements
- Node.js ≥18.0.0
- TypeScript ≥5.0.0 (for TypeScript projects)
License
MIT - see LICENSE
Contributing
Contributions welcome! See Contributing Guide
Author
Yash Gupta (@gyash1512)
