
scs-neural v0.3.0

A WebGPU-powered neural network library built on simple-compute-shaders.
scs-neural

A high-performance, memory-efficient neural network library powered by WebGPU.

🚀 Overview

scs-neural is built for speed and efficiency, leveraging the parallel processing power of WebGPU. It is designed with a "performance-first" philosophy, prioritizing low-level buffer management and optimized compute shaders over high-level developer experience (DX).

Key Features

  • WebGPU Accelerated: All computations (forward pass, backpropagation, genetic evaluation) run on the GPU.
  • CNN Support: Includes optimized kernels for 2D Convolution and Max Pooling.
  • Neuroevolution for CNNs: Highly parallelized Genetic Algorithm (GA) evaluation supporting Dense, Conv2D, and Flatten layers.
  • Memory Efficient: Uses manual buffer swapping (ping-ponging) instead of high-level tensor abstractions to minimize memory overhead and data transfers.
  • Flexible Training: Supports both standard backpropagation (SGD) and large-scale population evaluation.
  • Direct Shader Control: Built on top of simple-compute-shaders for direct WGSL control.

🛠 Core Philosophy: Performance over DX

This library does not use standard tensor libraries. Instead:

  1. Buffer Swapping: We use a "ping-pong" technique where two buffers are swapped between layers to avoid unnecessary allocations during inference and training.
  2. Direct Memory Access: Inputs and outputs are handled as flat Float32Array buffers, mapped directly to GPU memory.
  3. Minimized Host-Device Sync: Training loops and genetic evaluations are designed to keep data on the GPU as much as possible, reducing the bottleneck of CPU-GPU communication.
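The ping-pong technique above can be sketched on the CPU side. This is an illustrative model only (the layer sizes and the `applyLayer` stand-in are hypothetical; the real library does this with GPU `StorageBuffer` objects and compute-shader dispatches):

```typescript
// CPU-side sketch of the ping-pong technique: two reusable buffers are
// swapped between layers so no new allocations happen per forward pass.
// Layer sizes are illustrative, not taken from the library.
const layerSizes = [3, 12, 3];
const maxSize = Math.max(...layerSizes);

// Allocate the two buffers once, sized for the widest layer.
let src = new Float32Array(maxSize);
let dst = new Float32Array(maxSize);

function applyLayer(input: Float32Array, output: Float32Array, outSize: number): void {
  // Stand-in for a real compute-shader dispatch (e.g., matmul + activation).
  for (let i = 0; i < outSize; i++) output[i] = input[i % input.length] * 0.5;
}

function forward(input: Float32Array): Float32Array {
  src.set(input);
  for (let l = 1; l < layerSizes.length; l++) {
    applyLayer(src, dst, layerSizes[l]);
    [src, dst] = [dst, src]; // swap: this layer's output becomes the next layer's input
  }
  return src.subarray(0, layerSizes[layerSizes.length - 1]);
}
```

The point of the swap is that only two buffers ever exist, regardless of network depth; on the GPU this avoids both allocation churn and host-device copies between layers.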

📦 Installation

npm install scs-neural

📖 Usage

1. Initialization

Define your network architecture using the layers configuration.

Simple FFNN (Feed-Forward Neural Network)

import { NeuralNetwork, ActivationType, LayerType } from "scs-neural";

const nn = new NeuralNetwork({
    layers: [
        { type: LayerType.INPUT, shape: [3] },
        { type: LayerType.DENSE, size: 12 },
        { type: LayerType.DENSE, size: 3 },
    ],
    trainingBatchSize: 10,
    testingBatchSize: 1,
    hiddenActivationType: ActivationType.RELU,
    outputActivationType: ActivationType.LINEAR,
});

await nn.initialize("xavier");

CNN (Convolutional Neural Network)

const nn = new NeuralNetwork({
    layers: [
        { type: LayerType.INPUT, shape: [28, 28, 1] }, // [height, width, channels]
        { type: LayerType.CONV2D, kernelSize: 3, filters: 16, padding: 1, activation: ActivationType.RELU },
        { type: LayerType.MAXPOOL2D, poolSize: 2 },
        { type: LayerType.FLATTEN },
        { type: LayerType.DENSE, size: 10 },
    ],
    trainingBatchSize: 32,
});

await nn.initialize("he");

⚙️ Configuration Details

Layer Types

| Layer Type  | Properties | Description |
|-------------|------------|-------------|
| `INPUT`     | `shape: number[]` | Input dimensions, e.g., `[size]` or `[h, w, c]`. |
| `DENSE`     | `size: number`, `activation?` | Fully connected layer. |
| `CONV2D`    | `kernelSize`, `filters`, `stride?`, `padding?`, `activation?` | 2D Convolutional layer. |
| `MAXPOOL2D` | `poolSize`, `stride?` | 2D Max Pooling layer. |
| `FLATTEN`   | (none) | Flattens multi-dimensional input for Dense layers. |
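To see how these layer properties determine shapes, the standard output-size formulas for convolution and pooling can be traced through the CNN example above. These are textbook formulas, not part of the scs-neural API (the default strides assumed here — 1 for conv, `poolSize` for pooling — are the usual convention and should be verified against the library):

```typescript
// Standard output-size formulas for Conv2D and MaxPool2D (textbook math,
// not scs-neural API). Assumed defaults: conv stride 1, pool stride = poolSize.
function conv2dOut(size: number, kernelSize: number, stride = 1, padding = 0): number {
  return Math.floor((size + 2 * padding - kernelSize) / stride) + 1;
}

function maxPool2dOut(size: number, poolSize: number, stride = poolSize): number {
  return Math.floor((size - poolSize) / stride) + 1;
}

// Tracing the CNN example above, starting from a 28x28x1 input:
const afterConv = conv2dOut(28, 3, 1, 1);     // kernelSize 3, padding 1 preserves 28
const afterPool = maxPool2dOut(afterConv, 2); // poolSize 2 halves it to 14
const flattened = afterPool * afterPool * 16; // 14 * 14 * 16 filters = 3136
```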

Activation Types

  • RELU, SIGMOID, TANH, SOFTMAX, LINEAR

Initialization Methods

  • xavier: Good for Sigmoid/Tanh.
  • he: Recommended for ReLU.
  • uniform: Random values between -0.1 and 0.1.
  • zero: Initializes all parameters to 0.
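The scale factors behind `xavier` and `he` follow the usual Glorot and Kaiming formulations. The formulas below are the standard math, offered as background; they are an assumption about what the library computes internally, not its actual code:

```typescript
// Common formulations of the initialization scales (standard math, not
// necessarily the library's exact internals):
// - Xavier/Glorot uniform limit: sqrt(6 / (fanIn + fanOut))
// - He (Kaiming) normal std:     sqrt(2 / fanIn)
function xavierLimit(fanIn: number, fanOut: number): number {
  return Math.sqrt(6 / (fanIn + fanOut));
}

function heStd(fanIn: number): number {
  return Math.sqrt(2 / fanIn);
}

// Example: a DENSE layer mapping 3 inputs to 12 outputs.
const limit = xavierLimit(3, 12); // sqrt(6/15) ≈ 0.632
const std = heStd(3);             // sqrt(2/3)  ≈ 0.816
```

The He scale grows as fan-in shrinks, which is why it pairs well with ReLU: it compensates for the half of the units that ReLU zeroes out.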

2. Forward Pass (Inference)

Run a single inference pass. The library handles the internal ping-ponging of buffers.

const input = new Float32Array([...]); // Flat array matching input shape
const output = await nn.forwardPass(input);
console.log("Network Output:", output);
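If your data is multi-dimensional (e.g., an image for the CNN configuration), it must be flattened into that single `Float32Array` first. The sketch below assumes a row-major `[h, w, c]` layout to match the `INPUT` shape convention shown earlier; verify the ordering against your own data pipeline:

```typescript
// Flatten a [height][width][channels] image into a single Float32Array
// in row-major order. The HWC ordering is an assumption for illustration;
// check the shape convention your input data actually uses.
function flattenImage(image: number[][][]): Float32Array {
  const h = image.length, w = image[0].length, c = image[0][0].length;
  const flat = new Float32Array(h * w * c);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++)
      for (let ch = 0; ch < c; ch++)
        flat[(y * w + x) * c + ch] = image[y][x][ch];
  return flat;
}
```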

3. Training with Backpropagation

Train the network using supervised learning. The train method handles batching and shuffling.

await nn.train({
    inputActivations: [...], // Array of Float32Arrays
    targetActivations: [...],
    epochs: 10,
    learningRate: 0.001,
    progressCallback: (epoch, loss) => {
        console.log(`Epoch ${epoch} - Loss: ${loss}`);
    }
});
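As a concrete example of preparing `inputActivations` and `targetActivations`, here is a toy dataset builder for the color-inversion task from the examples section. This is plain data preparation under the assumption that both fields are arrays of `Float32Array`, as shown above:

```typescript
// Build a supervised dataset for RGB color inversion: for each random
// color, the target is simply 1 - value per channel.
function makeInversionDataset(count: number): {
  inputActivations: Float32Array[];
  targetActivations: Float32Array[];
} {
  const inputActivations: Float32Array[] = [];
  const targetActivations: Float32Array[] = [];
  for (let i = 0; i < count; i++) {
    const rgb = Float32Array.from({ length: 3 }, () => Math.random());
    inputActivations.push(rgb);
    targetActivations.push(rgb.map((v) => 1 - v));
  }
  return { inputActivations, targetActivations };
}
```

The resulting object can be spread directly into the `train` call shown above.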

4. Genetic Algorithms (Parallel Evaluation)

scs-neural excels at evaluating large populations in parallel, which is ideal for reinforcement learning or neuroevolution.

// populationWeights[layerIndex][genomeIndex] = Float32Array
const { activations } = await nn.evaluatePopulation({
    populationSize: 100,
    batchSize: 1,
    weights: populationWeights,
    biases: populationBiases,
    inputs: populationInputs, // Array of Float32Arrays [populationSize]
    returnActivations: true
});

GA Implementation Notes

  • Weight Structure: Weights and biases must be provided as a 2D array: [layerIndex][genomeIndex]. layerIndex corresponds to the layer position in your architecture (skipping the input layer).
  • Supported Layers: Currently supports DENSE, CONV2D, and FLATTEN for genetic evaluation.
  • Limitations: MAXPOOL2D is currently only supported for standard backpropagation training and is not yet available for genetic evaluation.
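To make the `[layerIndex][genomeIndex]` layout concrete, the sketch below allocates randomly initialized population parameters for the simple FFNN above (3 → 12 → 3). The per-layer weight count assumes a plain dense `fanIn * fanOut` matrix, which is the usual convention but should be checked against the library:

```typescript
// Allocate population weights/biases in the [layerIndex][genomeIndex]
// layout expected by evaluatePopulation. The input layer has no
// parameters, so index 0 corresponds to the first DENSE layer.
// Weight count per dense layer is assumed to be fanIn * fanOut.
const layerSizes = [3, 12, 3]; // INPUT, DENSE, DENSE
const populationSize = 100;

const populationWeights: Float32Array[][] = [];
const populationBiases: Float32Array[][] = [];

for (let l = 1; l < layerSizes.length; l++) {
  const weightCount = layerSizes[l - 1] * layerSizes[l];
  populationWeights.push(
    Array.from({ length: populationSize }, () =>
      Float32Array.from({ length: weightCount }, () => Math.random() * 0.2 - 0.1)
    )
  );
  populationBiases.push(
    Array.from({ length: populationSize }, () => new Float32Array(layerSizes[l]))
  );
}
```

After a generation is evaluated, a GA would select, crossover, and mutate these per-genome arrays in place before the next `evaluatePopulation` call.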

🧪 Examples

This repository contains several examples showcasing the library:

  • Ant Warfare (New): A complex simulation of two competing ant colonies evolving strategies via CNN-based neuroevolution.
  • Flappy Bird Genetic Algorithm: Evolving simple neural networks to play Flappy Bird.
  • Color Inversion: Standard supervised training to invert RGB colors.

To run the examples locally:

npm install
npm run dev

Then open the URL provided by Vite.

🏗 Architecture

The core library is located in src/.

  • Shaders: Located in src/shaders/, these WGSL files handle the core mathematical operations including Conv2D and MaxPool.
  • Buffer Management: The library maintains dedicated StorageBuffer objects for weights, biases, gradients, and activations for every layer to avoid runtime allocations and garbage collection overhead.

📄 License

ISC