TinyGrad
A JavaScript/TypeScript autograd engine with operator overloading, inspired by micrograd.
Features
- 🔥 Automatic Differentiation: Full backpropagation support for scalar values
- ⚡ Operator Overloading: Natural mathematical syntax enabled by the unplugin-op-overloading build plugin
- 🧠 Neural Networks: Built-in neuron, layer, and MLP implementations
- 📦 Lightweight: Zero dependencies for the core library
- 🎯 TypeScript: Fully typed with excellent IDE support
- 🌐 Universal: Works in browsers and Node.js
Installation
npm install tinygrad
# or
pnpm add tinygrad
# or
yarn add tinygrad
# or
bun add tinygrad
Quick Start
"use operator overloading";
import { engine, nn } from "tinygrad";
const { Value } = engine;
const { MLP } = nn;
// Scalar operations with automatic differentiation
const a = new Value(2.0);
const b = new Value(-3.0);
const c = new Value(10.0);
const e = a * b;
const d = e + c;
const f = d.relu();
// Compute gradients
f.backward();
console.log(f.data); // 4.0
console.log(a.grad); // -3.0
console.log(b.grad); // 2.0
// Build a neural network
const model = new MLP(3, [4, 4, 1]); // 3 inputs, 2 hidden layers of 4 neurons, 1 output
const x = [
  new Value(2.0),
  new Value(3.0),
  new Value(-1.0)
];
const output = model.call(x);
console.log(output.data); // Forward pass result
API Reference
Value
The Value class represents a scalar value with gradient tracking.
Constructor:
new Value(data: number, children?: Value[], _op?: string)
Supported Operations:
- add(other) or +: Addition
- sub(other) or -: Subtraction
- mul(other) or *: Multiplication
- div(other) or /: Division
- pow(n) or **: Power
- neg() or unary -: Negation
- relu(): ReLU activation
Methods:
- backward(): Compute gradients via backpropagation
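The same computation from the Quick Start can be written without the operator-overloading directive by calling these methods directly; a minimal sketch:
import { engine } from "tinygrad";
const { Value } = engine;
// relu(2 * -3 + 10) built with explicit method calls instead of operators
const a = new Value(2.0);
const b = new Value(-3.0);
const c = new Value(10.0);
const f = a.mul(b).add(c).relu();
f.backward();
console.log(f.data); // 4.0
console.log(a.grad); // -3.0 (df/da = b)
console.log(b.grad); // 2.0 (df/db = a)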
Neural Network Modules
Neuron
new Neuron(nin: number, nonlin: boolean = true)
Layer
new Layer(nin: number, nout: number, nonlin: boolean = true)
MLP (Multi-Layer Perceptron)
new MLP(nin: number, nouts: number[])
Methods:
- call(x: Value[]): Forward pass
- parameters(): Get all trainable parameters
- zeroGrad(): Reset gradients to zero
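A rough sketch of how these modules compose, assuming nn also exports Layer alongside MLP and that Layer shares the parameters() method documented above (as in micrograd):
import { nn } from "tinygrad";
const { Layer, MLP } = nn;
// A layer of 4 ReLU neurons over 3 inputs
// (assuming each Neuron holds nin weights + 1 bias, as in micrograd)
const layer = new Layer(3, 4);
console.log(layer.parameters().length); // 16 = 4 neurons x (3 weights + 1 bias)
// MLP(3, [4, 4, 1]) stacks Layer(3, 4) -> Layer(4, 4) -> Layer(4, 1)
const model = new MLP(3, [4, 4, 1]);
console.log(model.parameters().length); // 41 = 16 + 20 + 5
model.zeroGrad(); // reset every parameter's grad before the next backward()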
Operator Overloading
TinyGrad uses the unplugin-op-overloading plugin to enable natural mathematical syntax. Add the following to the top of your file:
"use operator overloading";This enables:
const x = new Value(2);
const y = new Value(3);
const z = x * y + x ** 2; // Much cleaner than z = x.mul(y).add(x.pow(2))
Training Example
"use operator overloading";
import { engine, nn } from "tinygrad";
const { Value } = engine;
const { MLP } = nn;
// Dataset
const X = [[2, 3, -1], [3, -1, 0.5], [0.5, 1, 1], [1, 1, -1]];
const y = [1, -1, -1, 1]; // targets
const model = new MLP(3, [4, 4, 1]);
// Training loop
for (let i = 0; i < 100; i++) {
  // Forward pass
  const inputs = X.map(row => row.map(x => new Value(x)));
  const scores = inputs.map(x => model.call(x));
  // Loss (MSE)
  let loss = new Value(0);
  for (let j = 0; j < y.length; j++) {
    const diff = scores[j] - new Value(y[j]);
    loss = loss + diff * diff;
  }
  // Backward pass
  model.zeroGrad();
  loss.backward();
  // Update (SGD)
  const lr = 0.01;
  for (const p of model.parameters()) {
    p.data -= lr * p.grad;
  }
  if (i % 10 === 0) {
    console.log(`Step ${i}, Loss: ${loss.data}`);
  }
}
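After the loop finishes, the fit can be sanity-checked by running a forward pass over the same inputs; predictions should move toward the ±1 targets:
for (let j = 0; j < X.length; j++) {
  const pred = model.call(X[j].map(v => new Value(v)));
  console.log(`target ${y[j]}, predicted ${pred.data.toFixed(3)}`);
}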
Demo
Check out the interactive demo to see TinyGrad in action with:
- Real-time visualization of training progress
- Decision boundary visualization
- Interactive controls for learning rate and training steps
Development
# Install dependencies
bun install
# Run development server (demo)
bun run dev
# Build library
bun run build:lib
# Type checking
bun run typecheck
License
MIT
Credits
Inspired by micrograd by Andrej Karpathy.
