

CatBrain

GPU-accelerated neural networks made simple for JavaScript, influenced by Brain.js.

Setup

Install through npm:

npm install catbrain

Note: Be sure to have Python (and the VS build tools if you are on Windows) already installed, or gpu.js and its dependencies cannot be built. If things break, try an older Python version and point npm to that version explicitly:

export npm_config_python=/path/to/executable/python

or, if you are on Windows:

set NODE_GYP_FORCE_PYTHON=/path/to/executable/python.exe

If the issue persists, try other Node/Python/VS combinations (Node 18, Python 3.9, and VS 2022 should work) and check the gpu.js installation requirements.

Tutorial

Here is how to create, train, and run a neural net with CatBrain. All options and configuration are explained in the comments.

const { CatBrain } = require("catbrain");

// Create a neural network
const neuralNetwork = new CatBrain({
    // Layer sizes; the first and last are the input and output layers
    layers: [2, 3, 1],

    // Optional

    // Training config
    learningRate: 0.02, // Learning rate, default is 0.01
    decayRate: 0.9999, // Learning decay rate for each iteration, default is 1
    shuffle: true, // Choose whether to shuffle the dataset, default is true

    // Momentum optimizer
    momentum: 0.2, // Momentum constant, default is 0.1
    dampening: 0.2, // Momentum dampening, default is 0.1
    nesterov: true, // Enable Nesterov Accelerated Gradient, default is false
    
    // Activation config
    activation: "relu", // sigmoid/tanh/relu/leakyRelu/swish/mish/softplus/linear, default is relu
    outputActivation: "sigmoid", // Activation at output layer, default is sigmoid
    leakyReluAlpha: 0.01, // Alpha for leakyRelu, if used; default is 0.01
    reluClip: 5, // ReLU clipping, applied to unbounded activation functions, default is 5
    // Weight init function, default depends on what activation is used (check ./src/rand.ts)
    // Options: xavierUniform, xavierNormal, heUniform, heNormal, lecunUniform, lecunNormal
    // There is also "basicUniform" which initializes with random numbers from 0 to 1
    weightInit: "heNormal",

    // Options for loading an existing model; if not provided, weights are randomly initialized based on the activation
    // Note that biases are initialized to 0
    // weights: ArrayLike<number>[][],
    // biases: ArrayLike<number>[],
    // deltas: ArrayLike<number>[][], // Load this if you want to resume training

    // enableGPU: false, // Whether to use GPU as default for all operations, default is false
    // gpuOptions: {} // gpu.js options, this will be passed to the GPU constructor
    // Note: GPU support is heavily in development and not yet recommended for use
});

// Train
neuralNetwork.train(
    // Number of iterations
    100000,
    // Dataset as an array
    [
        // Each data object pairs inputs with their expected outputs
        { inputs: [0, 0], outputs: [0] },
        { inputs: [0, 1], outputs: [1] },
        { inputs: [1, 0], outputs: [1] },
        { inputs: [1, 1], outputs: [0] }
    ]
    // You can also pass in an optional training config:
    // , {
    //     learningRate: 0.02, // Will use original learning rate if not provided
    //     decayRate: 0.9999, // Will use original decay rate if not provided
    //     momentum: 0.1, // Will use original momentum if not provided
    //     dampening: 0.1, // Will use original dampening if not provided
    //     nesterov: true, // Will use original nesterov if not provided
    //     shuffle: true, // Will use original shuffle option if not provided
    //     // A function called before every iteration
    //     callback: (status) => {
    //         console.log(status.iteration)
    //     },
    //     enableGPU: true // Default is false if not specified
    // }
);

// Run the neural net with our own input
console.log(neuralNetwork.feedForward([1, 0]));
// You can run it with GPU enabled:
// console.log(neuralNetwork.feedForward([1, 0], { enableGPU: true }));

// Export all configurations to JSON
// const netConfig = neuralNetwork.toJSON();
// Load the neural net again
// const net = new CatBrain(JSON.parse(netConfig));
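The learningRate/decayRate and momentum options in the config above follow common gradient-descent conventions. As a rough standalone sketch of what they mean (assumed formulas, using the classical SGD-with-momentum formulation; check CatBrain's source for its exact update rules):

```javascript
// Assumed exponential decay: the learning rate is multiplied by decayRate
// once per iteration (hypothetical formula; verify against CatBrain's source)
function decayedLearningRate(learningRate, decayRate, iteration) {
    return learningRate * Math.pow(decayRate, iteration);
}

// Assumed momentum update with dampening and optional Nesterov:
//   v    <- momentum * v + (1 - dampening) * grad
//   step <- nesterov ? grad + momentum * v : v
//   w    <- w - lr * step
function momentumStep(w, grad, v, { lr, momentum, dampening, nesterov }) {
    const vNext = momentum * v + (1 - dampening) * grad;
    const step = nesterov ? grad + momentum * vNext : vNext;
    return { w: w - lr * step, v: vNext };
}

console.log(decayedLearningRate(0.02, 0.9999, 0));      // 0.02
console.log(decayedLearningRate(0.02, 0.9999, 100000)); // far smaller than 0.02
```

With the tutorial's defaults (decayRate 0.9999 over 100,000 iterations), the effective learning rate shrinks by several orders of magnitude, which is why decayRate values are kept so close to 1.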

Examples

There are several demos available in ./examples.

Todos

Currently what I have in mind are:

  • Option to configure each layer independently to do more than just MLPs.
  • Proper GPU acceleration, possibly with CUDA/ROCm and Node C++ bindings.
  • More GD optimizers or different optimization algorithms.
  • More pre-built neural network architectures.
  • Code refactoring, test cases, and optimization.
  • Minor utilities for convenience.
  • More activation functions.

Copyright and License

Copyright © 2025 Nguyen Phu Minh.

This project is licensed under the Apache 2.0 License.