# mashtishk

v2.0.0
A complete deep learning library for JavaScript. Dense networks, CNN, RNN/LSTM/GRU, Attention, GPU acceleration, and ESP32/IoT deployment — all in one zero-dependency package.
```bash
npm install mashtishk
```

## Architecture

```
Your Code
  │
  ▼
NeuralNetworkV2 / SequentialModel       ← src/nn/
  │
  ▼
Layer (Dense, Conv2D, LSTM, GRU, …)     ← src/nn/ + src/layers/
  │
  ├── Activations (relu, sigmoid, …)    ← src/activations/
  ├── Losses (MSE, BCE, CCE, …)         ← src/losses/
  └── Optimizers (Adam, SGD, …)         ← src/optimizers/
  │
  ▼
Tensor / Matrix engine                  ← src/core/
  │
  ▼
CPU ──or── GPU backend (gpu.js)         ← src/gpu/
  │
  ▼
Node.js ──or── Browser (dist/mashtishk.min.js)
  │
  ▼
ESP32 / MCU (via nn.toCCode())          ← src/nn/NeuralNetwork.js
```

## Quick Start
```js
const { NeuralNetworkV2 } = require('mashtishk');

// 1. Build
const nn = new NeuralNetworkV2({
  layers: [
    { inputSize: 2, units: 4, activation: 'relu' },
    { units: 1, activation: 'sigmoid' }
  ],
  learningRate: 0.01,
  epochs: 500,
  verbose: false
});

// 2. Compile
nn.compile({ optimizer: 'adam', loss: 'binaryCrossEntropy' });

// 3. Fit
const data = [
  { input: [0, 0], output: [0] },
  { input: [1, 0], output: [1] },
  { input: [0, 1], output: [1] },
  { input: [1, 1], output: [0] } // XOR
];
const history = nn.fit(data);

// 4. Predict
console.log(nn.predict([0, 1])); // → [0.9978]
console.log(nn.predict([1, 1])); // → [0.0021]
```

## Running Examples
```bash
npm run example:hello    # learn to double a number
npm run example:or       # OR gate — single neuron
npm run example:xor      # XOR — hidden layer required
npm run example:iris     # 3-class iris dataset
npm run example:spam     # binary spam classifier
npm run example:houses   # house price prediction
npm run example:sensor   # time-series sensor forecasting
npm run example:iot      # anomaly detector + ESP32 C export
```

## API Reference
### `new NeuralNetworkV2(config)`
| Option | Type | Default | Description |
|---|---|---|---|
| layers | Array | required | Layer definitions |
| learningRate | number | 0.01 | Step size for weight updates |
| epochs | number | 100 | Maximum training iterations |
| batchSize | number | 32 | Samples per gradient update |
| validationSplit | number | 0.1 | Fraction held out for validation |
| earlyStoppingPatience | number | 10 | Stop after N epochs without improvement |
| gradientClip | number | 1.0 | Maximum gradient norm |
| verbose | boolean | true | Print epoch logs |
Layer definition:

```js
{ inputSize: 4, units: 16, activation: 'relu', init: 'he', dropout: 0.2, batchNorm: true }
```

`inputSize` is required only on the first layer.
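The `gradientClip` option above caps the gradient norm before each weight update, which keeps deep or recurrent stacks from diverging on a bad batch. A minimal sketch of norm clipping in plain JavaScript (a textbook illustration, not mashtishk's internal code):

```javascript
// Rescale a gradient vector so its L2 norm never exceeds maxNorm.
function clipByNorm(grad, maxNorm) {
  const norm = Math.sqrt(grad.reduce((s, g) => s + g * g, 0));
  if (norm <= maxNorm) return grad;   // small gradients pass through untouched
  const scale = maxNorm / norm;       // shrink every component uniformly
  return grad.map(g => g * scale);
}

const clipped = clipByNorm([3, 4], 1.0); // norm 5 → rescaled to norm ≈ 1
```

Clipping by norm (rather than clamping each component) preserves the gradient's direction while bounding the step size.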
### Methods
| Method | Description |
|---|---|
| nn.compile({ optimizer, loss }) | Wire loss and optimizer |
| nn.fit(data) | Train — returns history { loss, valLoss, accuracy } |
| nn.predict(input) | Forward pass on one sample |
| nn.evaluate(data) | Returns { loss, accuracy } |
| nn.summary() | Print architecture and parameter count |
| nn.save(path) | Save weights to JSON |
| nn.load(path) | Load weights from JSON |
| nn.toJSON() | Serialize to plain object |
| NeuralNetworkV2.fromJSON(obj) | Restore from serialized object |
| nn.toCCode(fnName) | Export as C function for ESP32/MCU |
| nn.freeze(layerIndex) | Freeze layer weights (transfer learning) |
### Activations
relu · sigmoid · tanh · softmax · linear · leakyRelu · elu · swish
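The less common entries reduce to one-line formulas. For reference, their standard textbook definitions in plain JavaScript (not the library's source):

```javascript
const sigmoid   = x => 1 / (1 + Math.exp(-x));
const leakyRelu = (x, a = 0.01) => (x > 0 ? x : a * x);
const elu       = (x, a = 1.0)  => (x > 0 ? x : a * (Math.exp(x) - 1));
const swish     = x => x * sigmoid(x); // a.k.a. SiLU

// softmax is the odd one out: it maps a whole vector to probabilities.
function softmax(xs) {
  const max = Math.max(...xs);                 // subtract max for numerical stability
  const exps = xs.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

const probs = softmax([1, 2, 3]); // sums to 1; largest logit gets the highest probability
```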
### Losses
mse · mae · binaryCrossEntropy · categoricalCrossEntropy · huber · cosineProximity
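For intuition, the two losses used in the examples above reduce to short formulas; sketched in plain JavaScript (textbook definitions, not the library's source):

```javascript
// Mean squared error over one sample.
const mse = (yTrue, yPred) =>
  yTrue.reduce((s, y, i) => s + (y - yPred[i]) ** 2, 0) / yTrue.length;

// Binary cross-entropy; predictions are clipped so log(0) never occurs.
function binaryCrossEntropy(yTrue, yPred, eps = 1e-7) {
  let sum = 0;
  for (let i = 0; i < yTrue.length; i++) {
    const p = Math.min(Math.max(yPred[i], eps), 1 - eps);
    sum += yTrue[i] * Math.log(p) + (1 - yTrue[i]) * Math.log(1 - p);
  }
  return -sum / yTrue.length;
}

console.log(mse([0, 1], [0.1, 0.9]));         // ≈ 0.01 (close predictions, small loss)
console.log(binaryCrossEntropy([1], [0.99])); // ≈ 0.01 (confident and correct)
```

Note that `categoricalCrossEntropy` expects one-hot targets, which `DataUtils.oneHot` (below) produces.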
### Optimizers
adam · adamw · sgd · momentum · nag · rmsprop · adagrad · adadelta · adamax · nadam
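`adam`, the default optimizer in the examples above, tracks per-parameter running averages of the gradient and its square, then takes a bias-corrected step. One update for a single scalar parameter, sketched in plain JavaScript (the standard Adam formulation, not mashtishk's internals):

```javascript
// One Adam update for a single parameter; state persists across calls.
function adamStep(param, grad, state, lr = 0.001, b1 = 0.9, b2 = 0.999, eps = 1e-8) {
  state.t += 1;
  state.m = b1 * state.m + (1 - b1) * grad;        // 1st moment: running mean
  state.v = b2 * state.v + (1 - b2) * grad * grad; // 2nd moment: running square
  const mHat = state.m / (1 - b1 ** state.t);      // bias correction for the
  const vHat = state.v / (1 - b2 ** state.t);      // zero-initialised moments
  return param - lr * mHat / (Math.sqrt(vHat) + eps);
}

let w = 1.0;
const state = { m: 0, v: 0, t: 0 };
w = adamStep(w, 2.0, state); // first step moves by ≈ lr, whatever the gradient's scale
```

The division by the root of the second moment is what makes the step size roughly scale-invariant, which is why Adam tolerates a wide range of `learningRate` values.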
## Advanced Layers
```js
const {
  Conv2D, MaxPool2D, AvgPool2D, Flatten,   // CNN
  RNN, LSTM, GRU, Bidirectional,           // Recurrent
  Attention, MultiHeadAttention,           // Transformer
  PositionalEncoding, Embedding,           // NLP
  LayerNorm, SequentialModel
} = require('mashtishk');
```

## DataUtils
```js
DataUtils.normalise(array)
DataUtils.normaliseBatch(data)
DataUtils.trainTestSplit(data, 0.8) // → { train, test }
DataUtils.oneHot(labels)            // → { data, classes }
DataUtils.kFold(data, k)            // → array of k folds
```

## Browser Usage
```bash
npm install --save-dev browserify terser
npm run build
# → dist/mashtishk.js     (633 KB)
# → dist/mashtishk.min.js (305 KB)
```

```html
<script src="dist/mashtishk.min.js"></script>
<script>
  const nn = new NeuralNetworkV2({ ... });
  nn.compile({ optimizer: 'adam', loss: 'mse' });
  nn.fit(data);
</script>
```

All classes become globals after the script tag:
NeuralNetworkV2, DataUtils, Matrix, Tensor, Adam, LSTM, Conv2D …
## ESP32 / IoT
```js
nn.fit(sensorData);
const fs = require('fs');
fs.writeFileSync('model.h', nn.toCCode('predict'));
```

Embeds trained weights as C float arrays ready for an Arduino/ESP32 sketch. See examples/iot/esp32-export.js for the full workflow.
## GPU

```bash
npm install gpu.js   # optional
```

Detected and used automatically; falls back to CPU silently if unavailable.
## v1 Compatibility

```js
const { NeuralNetwork } = require('mashtishk');
const nn = NeuralNetwork({ inputs: 2, hiddenLayers: 8, outputs: 1 });
nn.trainNetwork(data);
nn.predict([1, 0]);
```

## License
MIT © CavyIoT Private Limited — see LICENSE

## Contributing

See CONTRIBUTING.md
