
@mni-ml/framework

v0.3.2

A TypeScript ML framework with Rust native backends (CPU, CUDA, WebGPU) providing autograd, tensor operations, and neural network training at GPU speed.

Features

  • Automatic differentiation -- full backward pass through an autograd tape
  • GPU acceleration -- CUDA (NVIDIA) and WebGPU (Metal/Vulkan/DX12) backends
  • PyTorch-like API -- familiar Tensor, Module, Parameter, optimizer classes
  • Comprehensive ops -- elementwise, matmul, conv1d/conv2d, pooling, reductions, activations
  • Built-in modules -- Linear, Conv1d, Conv2d, Embedding, ReLU, Sigmoid, Tanh
  • Optimizers -- SGD and Adam (with AdamW-style weight decay) plus learning-rate scheduling

Installation

npm install @mni-ml/framework

Quick Start

import { Tensor, Linear, Adam, crossEntropyLoss } from '@mni-ml/framework';

// Inputs and class-index targets
const x = Tensor.rand([32, 10]);
const targets = [[0], [1], [2], /* ... */];

// Build a model
const layer1 = new Linear(10, 64);
const layer2 = new Linear(64, 3);

// Forward pass
let h = layer1.forward(x).relu();
let logits = layer2.forward(h);
let loss = crossEntropyLoss(logits, targets);

// Backward pass
loss.backward();

// Optimize
const params = [...layer1.parameters(), ...layer2.parameters()];
const optimizer = new Adam(params, 0.001);
optimizer.step();
optimizer.zeroGrad();

API Reference

Tensor

// Creation
Tensor.zeros([2, 3])           // zero-filled
Tensor.ones([2, 3])            // one-filled
Tensor.rand([2, 3])            // uniform [0, 1)
Tensor.randn([2, 3])           // normal distribution
Tensor.fromFloat32(data, shape) // from Float32Array

// Arithmetic (with autograd)
a.add(b)       a.add(2.0)     // addition
a.sub(b)                       // subtraction
a.mul(b)       a.mul(2.0)     // multiplication
a.div(b)       a.div(2.0)     // division
a.neg()                        // negation
a.exp()        a.log()        // exponentials
a.pow(2)                       // power

// Activations
a.relu()       a.sigmoid()

// Reductions
a.sum(dim)     a.sum()        // sum along dim or all
a.mean(dim)    a.mean()       // mean along dim or all
a.max(dim)                     // max along dim

// Comparisons (return a 0/1 tensor, no gradient)
a.lt(b)        a.gt(b)        a.eq(b)
a.isClose(b, tol)

// Layout
a.view(2, 3)                   // reshape
a.permute(1, 0)                // transpose
a.contiguous()                 // ensure contiguous memory

// Linear algebra
a.matmul(b)                    // matrix multiplication

// Convolution
a.conv1d(weight, stride, padding)
a.conv2d(weight, stride, padding)

// Utilities
a.clone()      a.detach()     // copy / detach from graph
a.toString()                   // debug string
a.backward()                   // run backward pass
a.setRequiresGrad(true)        // enable gradient tracking
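The layout ops above (`view`, `permute`, `contiguous`) can be understood in terms of row-major strides. Here is a minimal pure-TypeScript sketch of the idea -- illustrative only, not the framework's internals:

```typescript
// Row-major strides: stride[i] = product of all dimensions after i.
function strides(shape: number[]): number[] {
  const s = new Array(shape.length).fill(1);
  for (let i = shape.length - 2; i >= 0; i--) {
    s[i] = s[i + 1] * shape[i + 1];
  }
  return s;
}

// permute() just reorders shape and strides -- no data moves,
// which is why a later contiguous() may need to copy the buffer.
function permute(shape: number[], strd: number[], order: number[]) {
  return {
    shape: order.map((d) => shape[d]),
    strides: order.map((d) => strd[d]),
  };
}

console.log(strides([2, 3]));                          // [3, 1]
console.log(permute([2, 3], strides([2, 3]), [1, 0])); // shape [3, 2], strides [1, 3]
```

A `view` is valid only when the underlying buffer is already contiguous in the requested order, which is why `contiguous()` is exposed separately.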

Neural Network Modules

import { Linear, Conv1d, Conv2d, ReLU, Sigmoid, Embedding } from '@mni-ml/framework';

const linear = new Linear(inputSize, outputSize);
const conv1d = new Conv1d(inChannels, outChannels, kernelSize, stride, padding);
const conv2d = new Conv2d(inChannels, outChannels, kernelSize, stride, padding);
const embedding = new Embedding(vocabSize, embeddingDim);

// Use in forward pass
const out = linear.forward(input);

Functional Operations

import { softmax, gelu, layerNorm, crossEntropyLoss, dropout,
         avgpool2d, maxpool2d, tile } from '@mni-ml/framework';

const sm = softmax(logits, dim);
const g = gelu(x);
const ln = layerNorm(x, gamma, beta, eps);
const loss = crossEntropyLoss(logits, targets);
const dropped = dropout(x, rate, training);
const pooled = avgpool2d(x, kernelH, kernelW);
const maxPooled = maxpool2d(x, kernelH, kernelW);
const tiled = tile(x, [2, 1]);
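As a reference for what `softmax` and `crossEntropyLoss` compute, here is the standard numerically stable formulation in plain TypeScript -- an illustration of the math, not the framework's implementation:

```typescript
// Numerically stable softmax over a 1-D array: subtract the max
// before exponentiating so Math.exp() never overflows.
function softmaxRef(logits: number[]): number[] {
  const m = Math.max(...logits);
  const exps = logits.map((v) => Math.exp(v - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Cross-entropy for one sample: -log of the probability of the true class.
function crossEntropyRef(logits: number[], target: number): number {
  return -Math.log(softmaxRef(logits)[target]);
}

console.log(softmaxRef([1, 2, 3])); // probabilities summing to 1
```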

Optimizers

import { Adam, SGD } from '@mni-ml/framework';

const optimizer = new Adam(parameters, lr, beta1, beta2, eps, weightDecay);
// or
const optimizer = new SGD(parameters, lr);

optimizer.step();      // update parameters
optimizer.zeroGrad();  // clear gradients
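For reference, these are the update rules the two optimizers apply, sketched in plain TypeScript over a single parameter vector (the standard formulas; the framework's internals may differ):

```typescript
// Plain SGD: p <- p - lr * grad
function sgdStep(param: number[], grad: number[], lr: number): void {
  for (let i = 0; i < param.length; i++) param[i] -= lr * grad[i];
}

// One Adam step. m and v are per-parameter first/second moment buffers
// carried between steps; t is the 1-based step count used for bias correction.
function adamStep(
  param: number[], grad: number[], m: number[], v: number[], t: number,
  lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
): void {
  for (let i = 0; i < param.length; i++) {
    m[i] = beta1 * m[i] + (1 - beta1) * grad[i];
    v[i] = beta2 * v[i] + (1 - beta2) * grad[i] * grad[i];
    const mHat = m[i] / (1 - Math.pow(beta1, t)); // bias-corrected moments
    const vHat = v[i] / (1 - Math.pow(beta2, t));
    param[i] -= (lr * mHat) / (Math.sqrt(vHat) + eps);
  }
}
```

Because of the bias correction, Adam's very first step has magnitude close to `lr` regardless of gradient scale, which is why it is less sensitive to learning-rate choice than plain SGD.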

Backend Architecture

TypeScript API (tensor.ts, nn.ts, optimizer.ts)
    │
    └─→ N-API Bridge (lib.rs)
            │
            ├─→ CPU Backend (Vec<f32>, pure Rust)
            ├─→ CUDA Backend (cudarc + .cu kernels)
            └─→ WebGPU Backend (wgpu + .wgsl shaders)

All three backends share the same autograd tape and tensor store. Feature flags are mutually exclusive at compile time:

  • cpu -- default, no GPU required
  • cuda -- NVIDIA GPU via CUDA
  • webgpu -- any GPU via wgpu (Metal, Vulkan, DX12)
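The shared autograd tape records each op's inputs and a backward rule during the forward pass, then replays those rules in reverse. A minimal pure-TypeScript sketch of the idea, for scalars (illustrative names only, not the framework's internals):

```typescript
// One scalar node in the graph, recording its backward rule as a closure.
class Value {
  grad = 0;
  constructor(public data: number, private back: () => void = () => {},
              private parents: Value[] = []) {}

  add(other: Value): Value {
    const out = new Value(this.data + other.data, () => {
      this.grad += out.grad;   // d(a+b)/da = d(a+b)/db = 1
      other.grad += out.grad;
    }, [this, other]);
    return out;
  }

  mul(other: Value): Value {
    const out = new Value(this.data * other.data, () => {
      this.grad += other.data * out.grad; // d(a*b)/da = b
      other.grad += this.data * out.grad; // d(a*b)/db = a
    }, [this, other]);
    return out;
  }

  backward(): void {
    // Topologically sort the graph, then replay backward rules in reverse.
    const topo: Value[] = [];
    const seen = new Set<Value>();
    const visit = (v: Value) => {
      if (seen.has(v)) return;
      seen.add(v);
      v.parents.forEach(visit);
      topo.push(v);
    };
    visit(this);
    this.grad = 1;
    for (let i = topo.length - 1; i >= 0; i--) topo[i].back();
  }
}

const a = new Value(2), b = new Value(3);
const y = a.mul(b).add(a); // y = a*b + a
y.backward();              // a.grad = b + 1 = 4, b.grad = a = 2
```

The real tape works on whole tensors rather than scalars, but the shape is the same: forward ops append closures, `backward()` seeds the output gradient with 1 and walks the tape in reverse.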

Building from Source

Only needed if you are contributing or want a custom build. Requires a Rust toolchain.

# CPU (default)
npm run build:native

# CUDA (requires CUDA toolkit)
npm run build:native:cuda

# WebGPU
npm run build:native:webgpu

# Build TypeScript
npm run build

License

MIT