AsterMind-Community
Complete ELM library with 21+ advanced variants, Pro features (RAG, reranking, summarization), and OmegaSynth synthetic data generation. Free and open-source.
AsterMind Community combines all features from AsterMind-ELM, AsterMind-Pro, AsterMind-Premium, and AsterMind-Synth into one unified, free, and open-source package under the MIT license.
🚀 What you can build — and why this is groundbreaking
AsterMind brings instant, tiny, on-device ML to the web. It lets you ship models that train in milliseconds, predict with microsecond latency, and run entirely in the browser — no GPU, no server, no tracking. With Kernel ELMs, Online ELM, DeepELM, and Web Worker offloading, you can create:
- Private, on-device classifiers (language, intent, toxicity, spam) that retrain on user feedback
- Real-time retrieval & reranking with compact embeddings (ELM, KernelELM, Nyström whitening) for search and RAG
- Interactive creative tools (music/drum generators, autocompletes) that respond instantly
- Edge analytics: regressors/classifiers from data that never leaves the page
- Deep ELM chains: stack encoders → embedders → classifiers for powerful pipelines, still tiny and transparent
Why it matters: ELMs give you closed-form training (no heavy SGD), interpretable structure, and tiny memory footprints.
AsterMind modernizes ELM with kernels, online learning, workerized training, robust preprocessing, and deep chaining — making seriously fast ML practical for every web app.
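For readers new to ELMs, the closed-form training mentioned above is the standard ridge solve for the output weights: with fixed random input weights, the hidden activations are $H = g(X W_{\mathrm{in}} + b)$, and the output weights follow in a single step as $\beta = (H^\top H + \lambda I)^{-1} H^\top Y$, where $\lambda$ corresponds to the ridgeLambda option. No iterative gradient descent is needed, which is why training finishes in milliseconds.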
🆕 New in v3.0.0 - Unified Community Edition
Major Release: All features from Elm, Pro, Premium, and Synth are now free and open-source!
21 Advanced ELM Variants — Previously Premium-only, now free:
- Adaptive Online ELM, Forgetting Online ELM, Hierarchical ELM
- Attention-Enhanced ELM, Variational ELM, Time-Series ELM
- Transfer Learning ELM, Graph ELM, Graph Kernel ELM
- Adaptive Kernel ELM, Sparse Kernel ELM, Ensemble Kernel ELM
- Deep Kernel ELM, Robust Kernel ELM, ELM-KELM Cascade
- String Kernel ELM, Convolutional ELM, Recurrent ELM
- Fuzzy ELM, Quantum-Inspired ELM, Tensor Kernel ELM
Pro Features — RAG, Reranking, Summarization, Information Flow Analysis (now free!)
OmegaSynth — Label-conditioned synthetic data generation (now free!)
All Core Features — Kernel ELMs, Online ELM, DeepELM, Web Workers, and more
MIT License — Fully open-source, no license key or subscription required!
See Releases for full changelog.
📑 Table of Contents
- Introduction
- Features
- Kernel ELMs (KELM)
- Online ELM (OS-ELM)
- DeepELM
- Web Worker Adapter
- Installation
- Usage Examples
- Suggested Experiments
- Why Use AsterMind
- Core API Documentation
- Method Options Reference
- ELMConfig Options
- Prebuilt Modules
- Text Encoding Modules
- UI Binding Utility
- Data Augmentation Utilities
- IO Utilities (Experimental)
- Embedding Store
- Utilities: Matrix & Activations
- Adapters & Chains
- Workers: ELMWorker & ELMWorkerClient
- Example Demos and Scripts
- Experiments and Results
- Documentation
- Releases
- License
🌟 AsterMind: Decentralized ELM Framework Inspired by Nature
Welcome to AsterMind, a modular, decentralized ML framework built around cooperating Extreme Learning Machines (ELMs) that self-train, self-evaluate, and self-repair — like the nervous system of a starfish.
How This ELM Library Differs from a Traditional ELM
This library preserves the core Extreme Learning Machine idea — random hidden layer, nonlinear activation, closed-form output solve — but extends it with:
- Multiple activations (ReLU, LeakyReLU, Sigmoid, Linear, GELU)
- Xavier/Uniform/He initialization
- Dropout on hidden activations
- Sample weighting
- Metrics gate (RMSE, MAE, Accuracy, F1, Cross-Entropy, R²)
- JSON export/import
- Model lifecycle management
- UniversalEncoder for text (char/token)
- Data augmentation utilities
- Chaining (ELMChain) for stacked embeddings
- Weight reuse (simulated fine-tuning)
- Logging utilities
AsterMind is designed for:
- Lightweight, in-browser ML pipelines
- Transparent, interpretable predictions
- Continuous, incremental learning
- Resilient systems with no single point of failure
✨ Features
Core Features
- ✅ Modular Architecture
- ✅ Closed-form training (ridge / pseudoinverse)
- ✅ Activations: relu, leakyrelu, sigmoid, tanh, linear, gelu
- ✅ Initializers: uniform, xavier, he
- ✅ Numeric + Text configs
- ✅ Kernel ELM with Nyström + whitening
- ✅ Online ELM (RLS) with forgetting factor
- ✅ DeepELM (stacked layers)
- ✅ Web Worker adapter
- ✅ Embeddings & Chains for retrieval and deep pipelines
- ✅ JSON import/export
- ✅ Self-governing training
- ✅ Flexible preprocessing
- ✅ Lightweight deployment (ESM + UMD)
- ✅ Retrieval and classification utilities
- ✅ Zero server/GPU — private, on-device ML
Advanced ELM Variants (21 variants, now free!)
- ✅ Adaptive Online ELM, Forgetting Online ELM
- ✅ Hierarchical ELM, Attention-Enhanced ELM, Variational ELM
- ✅ Time-Series ELM, Transfer Learning ELM
- ✅ Graph ELM, Graph Kernel ELM
- ✅ Adaptive/Sparse/Ensemble/Deep/Robust Kernel ELM
- ✅ ELM-KELM Cascade, String Kernel ELM
- ✅ Convolutional ELM, Recurrent ELM
- ✅ Fuzzy ELM, Quantum-Inspired ELM, Tensor Kernel ELM
Pro Features (now free!)
- ✅ RAG Pipeline (Retrieval-Augmented Generation)
- ✅ Reranking
- ✅ Summarization
- ✅ Information Flow Analysis
- ✅ Transfer Entropy
Synthetic Data Generation (now free!)
- ✅ OmegaSynth - Label-conditioned synthetic data generation
- ✅ Multiple generation modes (retrieval, ELM, hybrid, exact, perfect)
- ✅ Pattern correction and sequence context
- ✅ Character embeddings
🧠 Kernel ELMs (KELM)
Supports Exact and Nyström modes with RBF/Linear/Poly/Laplacian/Custom kernels.
Includes whitened Nyström (persisted whitener for inference parity).
```ts
import { KernelELM, KernelRegistry } from '@astermind/astermind-community';

const kelm = new KernelELM({
  outputDim: Y[0].length,
  kernel: { type: 'rbf', gamma: 1 / X[0].length },
  mode: 'nystrom',
  nystrom: { m: 256, strategy: 'kmeans++', whiten: true },
  ridgeLambda: 1e-2,
});
kelm.fit(X, Y);
```
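After fitting, inference and embedding extraction use the methods listed in the Core API section below; this short follow-up is a sketch, and the return shapes noted in the comments are assumptions:

```ts
// Hedged follow-up: Xq is number[][] of query rows.
const probs = kelm.predictProbaFromVectors(Xq); // per-row class probabilities
const emb = kelm.getEmbedding(Xq);              // kernel-space embeddings, e.g. for retrieval or chaining
```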
🔁 Online ELM (OS-ELM)
Stream updates via Recursive Least Squares (RLS) with optional forgetting factor. Supports He/Xavier/Uniform initializers.
```ts
import { OnlineELM } from '@astermind/astermind-community';

const ol = new OnlineELM({ inputDim: D, outputDim: K, hiddenUnits: 256 });
ol.init(X0, Y0);
ol.update(Xt, Yt);
ol.predictProbaFromVectors(Xq);
```
Notes
- forgettingFactor controls how fast older observations decay (default 1.0).
- Two natural embedding modes: hidden (activations) or logits (pre-softmax). Use with ELMAdapter (see below).
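A minimal streaming sketch, assuming batches arrive over time and using the forgettingFactor option documented in the Core API section; the batch source here is hypothetical:

```ts
import { OnlineELM } from '@astermind/astermind-community';

// forgettingFactor < 1 down-weights old observations so the model tracks drift.
const stream = new OnlineELM({
  inputDim: D,
  outputDim: K,
  hiddenUnits: 256,
  forgettingFactor: 0.98,
});

stream.init(X0, Y0);                        // seed with an initial batch
for (const { Xt, Yt } of incomingBatches) { // incomingBatches is a hypothetical iterable of batches
  stream.update(Xt, Yt);                    // RLS update, no full retrain
}
```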
🌊 DeepELM
Stack multiple ELM layers for deep nonlinear embeddings and an optional top ELM classifier.
```ts
import { DeepELM } from '@astermind/astermind-community';

const deep = new DeepELM({
  inputDim: D,
  layers: [{ hiddenUnits: 128 }, { hiddenUnits: 64 }],
  numClasses: K
});

// 1) Unsupervised layer-wise training (autoencoders Y=X)
const X_L = deep.fitAutoencoders(X);

// 2) Supervised head (ELM) on last layer features
deep.fitClassifier(X_L, Y);

// 3) Predict
const probs = deep.predictProbaFromVectors(Xq);
```
JSON I/O: toJSON() and fromJSON() persist the full stack (AEs + classifier).
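A hedged persistence sketch for the JSON I/O mentioned above; whether fromJSON() is a static factory (as written here) or an instance method is an assumption:

```ts
// Persist the trained stack (autoencoders + classifier).
const saved = JSON.stringify(deep.toJSON());
localStorage.setItem('deep-elm', saved);

// Restore it later. Calling fromJSON() as a static factory is an assumption;
// adjust if the library expects it on an instance instead.
const restored = DeepELM.fromJSON(JSON.parse(localStorage.getItem('deep-elm')!));
const probsAgain = restored.predictProbaFromVectors(Xq);
```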
🧵 Web Worker Adapter
Move heavy ops off the main thread. Provides ELMWorker + ELMWorkerClient for RPC-style training/prediction with progress events.
- Initialize with initELM(config) or initOnlineELM(config)
- Train via train / trainFromData / fit / update
- Predict via predict, predictFromVector, or predictLogits
- Subscribe to progress callbacks per call
See Workers for full API.
🚀 Installation
NPM (scoped package):
```bash
npm install @astermind/astermind-community
# or
pnpm add @astermind/astermind-community
# or
yarn add @astermind/astermind-community
```
CDN / <script> (UMD global astermind):
```html
<!-- jsDelivr -->
<script src="https://cdn.jsdelivr.net/npm/@astermind/astermind-community/dist/astermind.umd.js"></script>
<!-- or unpkg -->
<script src="https://unpkg.com/@astermind/astermind-community/dist/astermind.umd.js"></script>
<script>
  const { ELM, KernelELM, AdaptiveOnlineELM, OmegaSynth } = window.astermind;
</script>
```
Repository:
- GitHub: https://github.com/infiniteCrank/AsterMind-Community
- NPM: https://www.npmjs.com/package/@astermind/astermind-community
Migration from old packages:
- See Migration Guide for details
- All old packages (@astermind/astermind-elm, @astermind/astermind-pro, @astermind/astermind-premium, @astermind/astermind-synthetic-data) are deprecated
- Simply install @astermind/astermind-community and update your imports; no license key is required!
🛠️ Usage Examples
Basic ELM Classifier
```ts
import { ELM } from "@astermind/astermind-community";

const config = { categories: ['English', 'French'], hiddenUnits: 128 };
const elm = new ELM(config);

// Load or train logic here
const results = elm.predict("bonjour");
console.log(results);
```
Advanced ELM Variants (Now Free!):
```ts
import { AdaptiveOnlineELM, HierarchicalELM, TimeSeriesELM } from "@astermind/astermind-community";

// Adaptive Online ELM - dynamically adjusts hidden units
const adaptive = new AdaptiveOnlineELM({
  categories: ['class1', 'class2'],
  initialHiddenUnits: 128
});

// Hierarchical ELM - multi-level classification
const hierarchical = new HierarchicalELM({
  hierarchy: { 'root': ['animal', 'plant'], 'animal': ['mammal', 'bird'] },
  rootCategories: ['root']
});

// Time-Series ELM - specialized for sequential data
const timeSeries = new TimeSeriesELM({
  categories: ['trend_up', 'trend_down', 'stable'],
  sequenceLength: 10
});
```
Synthetic Data Generation (Now Free!):
```ts
import { OmegaSynth } from "@astermind/astermind-community";

const synth = new OmegaSynth({
  mode: 'hybrid', // or 'elm', 'exact', 'retrieval', 'perfect'
  maxLength: 32
});

await synth.train(dataset);
const generated = await synth.generate('label', 10);
```
CommonJS / Node:
```js
const { ELM, AdaptiveOnlineELM, OmegaSynth } = require("@astermind/astermind-community");
```
Kernel ELM / DeepELM: see above examples.
🧪 Suggested Experiments
- Compare retrieval performance with Sentence-BERT and TF-IDF (a TF-IDF baseline sketch follows this list).
- Experiment with activations and token vs char encoding.
- Deploy in-browser retraining workflows.
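As a starting point for the retrieval comparison above, here is a hedged TF-IDF baseline using the TFIDFVectorizer and KNN utilities documented in the Core API section; the corpus, the constructor and dataset record shapes, and treating find() as a static helper are assumptions:

```ts
import { TFIDFVectorizer, KNN } from "@astermind/astermind-community";

// Hypothetical three-document corpus; use your own documents in practice.
const docs = ["elm trains fast", "kernel methods for text", "online learning in the browser"];

// Constructor and argument shapes are assumptions; vectorize/vectorizeAll are the documented methods.
const tfidf = new TFIDFVectorizer(docs);
const vectors = tfidf.vectorizeAll(docs);

// Pair each vector with its document (record shape is an assumption).
const dataset = vectors.map((vector, i) => ({ vector, doc: docs[i] }));

// Top-3 cosine neighbours for a query, via the documented find(queryVec, dataset, k, topX, metric).
const queryVec = tfidf.vectorize("fast training");
const hits = KNN.find(queryVec, dataset, 3, 3, "cosine");
console.log(hits);
```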
🌿 Why Use AsterMind?
Because you can build AI systems that:
- Are decentralized.
- Self-heal and retrain independently.
- Run in the browser.
- Are transparent and interpretable.
📚 Core API Documentation
ELM
- train, trainFromData, predict, predictFromVector, getEmbedding, predictLogitsFromVectors, JSON I/O, metrics
- loadModelFromJSON, saveModelAsJSONFile
- Evaluation: RMSE, MAE, Accuracy, F1, Cross-Entropy, R²
- Config highlights: ridgeLambda, weightInit (uniform | xavier | he), seed
OnlineELM
- init, update, fit, predictLogitsFromVectors, predictProbaFromVectors, embeddings (hidden/logits), JSON I/O
- Config highlights: inputDim, outputDim, hiddenUnits, activation, ridgeLambda, forgettingFactor
KernelELM
- fit, predictProbaFromVectors, getEmbedding, JSON I/O
- mode: 'exact' | 'nystrom', kernels: rbf | linear | poly | laplacian | custom
DeepELM
- fitAutoencoders(X), transform(X), fitClassifier(X_L, Y), predictProbaFromVectors(X)
- toJSON(), fromJSON() for full-pipeline persistence
ELMChain
- sequential embeddings through multiple encoders
TFIDFVectorizer
- vectorize, vectorizeAll
KNN
- find(queryVec, dataset, k, topX, metric)
📘 Method Options Reference
train(augmentationOptions?, weights?)
- augmentationOptions: { suffixes, prefixes, includeNoise }
- weights: sample weights
trainFromData(X, Y, options?)
- X: input matrix
- Y: label matrix or one-hot
- options: { reuseWeights, weights }
predict(text, topK)
- text: string
- topK: number of predictions
predictFromVector(vector, topK)
- vector: numeric
- topK: number of predictions
saveModelAsJSONFile(filename?)
- filename: optional file name
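A short sketch combining these options; the sample weights and augmentation values are hypothetical, and the assumption is that training examples are loaded as in the basic classifier example above:

```ts
import { ELM } from "@astermind/astermind-community";

const elm = new ELM({ categories: ['greeting', 'farewell'], hiddenUnits: 64 });

// Train with lightweight augmentation and per-sample weights (values are hypothetical).
elm.train(
  { suffixes: ['!', ' please'], prefixes: ['hey '], includeNoise: true }, // augmentationOptions
  [1, 1, 0.5]                                                             // per-sample weights
);

// Or train directly from numeric matrices, optionally reusing learned weights.
// X and Y are number[][] matrices prepared elsewhere.
elm.trainFromData(X, Y, { reuseWeights: true });
```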
⚙️ ELMConfig Options Reference
| Option | Type | Description |
| -------------------- | ---------- | ------------------------------------------------------------- |
| categories | string[] | List of labels the model should classify. (Required) |
| hiddenUnits | number | Number of hidden layer units (default: 50). |
| maxLen | number | Max length of input sequences (default: 30). |
| activation | string | Activation function (relu, tanh, etc.). |
| encoder | any | Custom UniversalEncoder instance (optional). |
| charSet | string | Character set used for encoding. |
| useTokenizer | boolean | Use token-level encoding. |
| tokenizerDelimiter | RegExp | Tokenizer regex. |
| exportFileName | string | Filename to export JSON. |
| metrics | object | Thresholds (rmse, mae, accuracy, etc.). |
| log | object | Logging config. |
| dropout | number | Dropout rate. |
| weightInit | string | Initializer: uniform, xavier, or he. |
| ridgeLambda | number | Ridge penalty for closed-form solve. |
| seed | number | PRNG seed for reproducibility. |
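A combined configuration sketch using options from the table above; every value shown is illustrative rather than a recommended setting, and the shape of the metrics object is assumed from its description:

```ts
import { ELM } from "@astermind/astermind-community";

const elm = new ELM({
  categories: ['English', 'French', 'Spanish'], // required labels
  hiddenUnits: 128,
  maxLen: 30,
  activation: 'relu',
  useTokenizer: false,        // char-level encoding
  dropout: 0.1,
  weightInit: 'xavier',
  ridgeLambda: 1e-2,
  seed: 42,
  metrics: { accuracy: 0.9 }, // training gate thresholds (shape assumed)
  exportFileName: 'lang-model.json'
});
```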
🧩 Prebuilt Modules and Custom Modules
Includes: AutoComplete, EncoderELM, CharacterLangEncoderELM, FeatureCombinerELM, ConfidenceClassifierELM, IntentClassifier, LanguageClassifier, VotingClassifierELM, RefinerELM.
Each exposes .train(), .predict(), .loadModelFromJSON(), .saveModelAsJSONFile(), .encode().
Custom modules can be built on top.
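A hedged sketch using one of the prebuilt modules listed above; the .train()/.predict()/.saveModelAsJSONFile() surface is the one described here, while the constructor options and training-record shape are assumptions:

```ts
import { LanguageClassifier } from "@astermind/astermind-community";

// Constructor options and training-record shape are assumptions for illustration.
const lang = new LanguageClassifier({ categories: ['English', 'French'], hiddenUnits: 128 });

lang.train([
  { text: 'good morning', label: 'English' },
  { text: 'bonjour tout le monde', label: 'French' },
]);

console.log(lang.predict('bonsoir'));
lang.saveModelAsJSONFile('language-classifier.json');
```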
✨ Text Encoding Modules
Includes TextEncoder, Tokenizer, UniversalEncoder.
Supports char-level & token-level, normalization, n-grams.
🖥️ UI Binding Utility
bindAutocompleteUI(model, inputElement, outputElement, topK) helper.
Binds model predictions to live HTML input.
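A small usage sketch for the helper above; only the signature bindAutocompleteUI(model, inputElement, outputElement, topK) comes from the docs, while the AutoComplete constructor options and the element IDs are placeholders:

```ts
import { AutoComplete, bindAutocompleteUI } from "@astermind/astermind-community";

// Hypothetical model; constructor options are an assumption.
const model = new AutoComplete({ categories: ['hello world', 'hello there', 'help me'] });

// Hypothetical DOM elements the suggestions are bound to.
const input = document.querySelector<HTMLInputElement>('#query')!;
const output = document.querySelector<HTMLElement>('#suggestions')!;

bindAutocompleteUI(model, input, output, 5);
```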
✨ Data Augmentation Utilities
Augment with prefixes, suffixes, noise.
Example: Augment.generateVariants("hello", "abc", { suffixes:["world"], includeNoise:true }).
⚠️ IO Utilities (Experimental)
JSON/CSV/TSV import/export, schema inference.
Experimental and may be unstable.
🧰 Embedding Store
Lightweight vector store with cosine/dot/euclidean KNN, unit-norm storage, ring buffer capacity.
Usage
```ts
import { EmbeddingStore } from '@astermind/astermind-community';

const store = new EmbeddingStore({ capacity: 5000, normalize: true });
store.add({ id: 'doc1', vector: [/* ... */], meta: { title: 'Hello' } });
const hits = store.query({ vector: q, k: 10, metric: 'cosine' });
```
🔧 Utilities: Matrix & Activations
Matrix – internal linear algebra utilities (multiply, transpose, addRegularization, solveCholesky, etc.).
Activations – relu, leakyrelu, sigmoid, tanh, linear, gelu, plus softmax, derivatives, and helpers (get, getDerivative, getPair).
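A minimal sketch of the activation helpers named above; treating get() and getDerivative() as returning plain scalar functions, and softmax() as operating on a number array, are assumptions:

```ts
import { Activations } from "@astermind/astermind-community";

// Assumption: get()/getDerivative() return scalar functions keyed by activation name.
const relu = Activations.get('relu');
const dRelu = Activations.getDerivative('relu');

console.log(relu(-2), relu(3)); // 0 3
console.log(dRelu(3));          // 1
console.log(Activations.softmax([1, 2, 3])); // probabilities summing to 1
```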
🔗 Adapters & Chains
ELMAdapter wraps an ELM or OnlineELM to behave like an encoder for ELMChain:
```ts
import { ELMAdapter, wrapELM, wrapOnlineELM } from '@astermind/astermind-community';

const enc1 = wrapELM(elm);                               // uses elm.getEmbedding(X)
const enc2 = wrapOnlineELM(online, { mode: 'logits' });  // 'hidden' or 'logits'

const chain = new ELMChain([enc1, enc2], { normalizeFinal: true });
const Z = chain.getEmbedding(X); // stacked embeddings
```
🧱 Workers: ELMWorker & ELMWorkerClient
ELMWorker (inside a Web Worker) exposes a tolerant RPC surface:
- lifecycle: initELM, initOnlineELM, dispose, getKind, setVerbose
- training: train, fit, update, trainFromData (all routed appropriately)
- prediction: predict, predictFromVector, predictLogits
- progress events: { type: 'progress', phase, pct } during training
ELMWorkerClient (on the main thread) is a thin promise-based RPC client:
```ts
import { ELMWorkerClient } from '@astermind/astermind-community/worker';

const client = new ELMWorkerClient(new Worker(new URL('./ELMWorker.js', import.meta.url)));

await client.initELM({ categories: ['A', 'B'], hiddenUnits: 128 });
await client.elmTrain({}, (p) => console.log(p.phase, p.pct));
const preds = await client.elmPredict('bonjour', 5);
```
🧪 Example Demos and Scripts
Run with npm run dev:* (autocomplete, lang, chain, news).
Fully in-browser.
🧪 Experiments and Results
Includes dropout tuning, hybrid retrieval, ensemble distillation, multi-level pipelines.
Results reported (Recall@1, Recall@5, MRR).
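For readers reproducing these numbers, below is a small, library-agnostic helper computing Recall@k and MRR from ranked result lists; it implements the standard definitions and is not code taken from this package:

```ts
// ranked: for each query, retrieved ids ordered best-first.
// relevant: for each query, the id of the correct document.
function recallAtK(ranked: string[][], relevant: string[], k: number): number {
  const hits = ranked.filter((ids, i) => ids.slice(0, k).includes(relevant[i])).length;
  return hits / ranked.length;
}

function meanReciprocalRank(ranked: string[][], relevant: string[]): number {
  const rr = ranked.map((ids, i) => {
    const rank = ids.indexOf(relevant[i]);
    return rank === -1 ? 0 : 1 / (rank + 1);
  });
  return rr.reduce((a, b) => a + b, 0) / rr.length;
}
```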
📚 Documentation
AsterMind ELM includes comprehensive documentation to help you get started and master the library:
Getting Started
Quick Start Tutorial — Complete step-by-step guide covering all major features with practical examples
- Basic ELM, Kernel ELM, Online ELM, DeepELM
- Embeddings, ELM Chains, Web Workers
- Pre-built modules, model persistence
- Advanced features and troubleshooting
AsterMind ELM Overview — High-level overview of what AsterMind ELM is and why tiny neural networks matter
- Core capabilities (classification, regression, embeddings, online learning)
- The AsterMind ecosystem
- Technical architecture overview
Implementation & Integration
Implementation Models — Guide to different ways of implementing AsterMind
- SDK/Library Implementation: Integrating AsterMind into your applications
- Standalone Applications: Using pre-built example applications
- Service Engagement: Professional services for custom implementation
- How to choose the right approach for your needs
Technical Requirements — System requirements for different platforms
- Windows, Linux, and macOS requirements
- Browser compatibility
- Development and runtime requirements
- Troubleshooting common issues
Developer Resources
Code Walkthrough — Detailed code walkthrough for presentations and deep dives
- Entry points and exports
- Core architecture and configuration system
- Main ELM class implementation
- Training and prediction flows
- Key code snippets with line numbers
Data Requirements — Guide to data requirements for training models
- Minimum viable data sizes
- Recommendations for better generalization
- Data collection strategies
- ELM-specific considerations
Additional Resources
Examples Directory — Working demo applications
- Language classification
- Autocomplete chains
- News classification
- Music genre detection
- And more...
Node Examples — Advanced Node.js examples
- Two-stage retrieval systems
- TF-IDF integration
- DeepELM + KernelELM retrieval
- Experimental architectures
Legal Information — Licensing, patents, and legal notices
Documentation Quick Links
| Document | Purpose | Audience |
|----------|---------|----------|
| Quick Start Tutorial | Learn how to use all features | Beginners |
| Overview | Understand what AsterMind is | Everyone |
| Implementation Models | Choose integration approach | Decision makers, developers |
| Technical Requirements | System setup and requirements | DevOps, developers |
| Code Walkthrough | Deep dive into code structure | Developers, presenters |
| Data Requirements | Training data guidelines | ML practitioners |
📦 Releases
v2.1.0 — 2026-09-19
New features: Kernel ELM, Nyström whitening, OnlineELM, DeepELM, Worker adapter, EmbeddingStore 2.0, activations linear/gelu, config split.
Fixes: Xavier init, encoder guards, dropout scaling.
Breaking: Config now NumericConfig|TextConfig.
📄 License
MIT License
All features are now free and open-source! This package combines all features from the previous AsterMind packages (ELM, Pro, Premium, Synth) into one unified community edition under the MIT license. No license tokens or subscriptions required.
“AsterMind doesn’t just mimic a brain—it functions more like a starfish: fully decentralized, self-evaluating, and self-repairing.”
