
recurrent-js

Badges: Build Status · js-google-style · dependency-free

Call For Volunteers: Due to my lack of time, I'm desperately looking for voluntary help. Should you be interested in the training of neural networks (even if you're a newbie) and willing to develop this educational project a little further, please contact me :) There are some points on the agenda that I'd still like to see implemented, to make this project a nice library for abstract educational purposes.

INACTIVE: Due to lack of time and help

The recurrent-js library – Various amazingly simple to build and train neural network architectures. This library is for educational purposes only. It is an object-oriented neural network approach (built with TypeScript), containing stateless and stateful neural network architectures. It is a redesigned and extended version of Andrej Karpathy's RecurrentJS library that implements the following:

  • Vanilla Feedforward Neural Network (Net)
  • Deep Recurrent Neural Networks (RNN)
  • Deep Long Short-Term Memory Networks (LSTM)
  • Bonus #1: Deep Feedforward Neural Networks (DNN)
  • Bonus #2: Deep Bayesian Neural Networks (BNN)
  • In fact, the library is more general because it has functionality to construct arbitrary expression graphs over which the library can perform automatic differentiation similar to what you may find in Theano for Python, or in Torch etc. Currently, the code uses this very general functionality to implement RNN/LSTM, but one can build arbitrary Neural Networks and do automatic backprop.

For Production Use

What does the library have to offer?

The following sections provide an overview of the available Classes and Interfaces. The class names are linked to more detailed descriptions of the specific classes.

Utility Classes:

  • Utils - Collection of Utility functions: Array creation & manipulation, Statistical evaluation methods etc.
  • Mat - Matrix Class holding weights and their derivatives for the neural networks.
  • RandMat - A convenient subclass of Mat. RandMat objects are automatically populated with random values on their creation.
  • MatOps - Class with matrix operations (add, multiply, sigmoid etc.) and their respective derivative functions.
  • Graph - Graph memorizing the sequences of matrix operations and matching their respective derivative functions for backpropagation.
  • NetOpts - Standardized Interface for the initial configuration of all Neural Networks.
  • InnerState - Standardized Interface for stateful networks memorizing the previous state of activations.
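
To make the interplay of Graph and Mat a bit more tangible, here is a purely hypothetical sketch. The constructor arguments and method names below (RandMat(...), graph.mul(...), graph.sigmoid(...), graph.backward()) are assumptions for illustration only and may differ from the actual API; only the w/dw fields follow the descriptions above.

import { Graph, Mat, RandMat } from 'recurrent-js';

// Hypothetical sketch -- constructor and method signatures are assumptions, not the verified API.
const weights: Mat = new RandMat(3, 2, 0, 0.08);  // assumed arguments: rows, cols, mean, std-deviation
const input: Mat = new Mat(2, 1);                 // a Mat holds values (w) and derivatives (dw)

// The Graph memorizes every matrix operation of the forward pass ...
const graph = new Graph();
const out: Mat = graph.mul(weights, input);       // assumed op names mirroring MatOps (add, mul, sigmoid, ...)
const activated: Mat = graph.sigmoid(out);

// ... so that, once a loss has been written into activated.dw, the memorized
// sequence can be replayed in reverse to accumulate all derivatives (dw).
graph.backward();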

Neural Network Classes:

  • stateless:
    • Net - shallow Vanilla Feedforward Neural Network (1 hidden layer).
    • DNN - Deep Feedforward Neural Network.
    • BNN - Deep Bayesian Neural Network.
  • stateful (Still old API!):
    • RNN - Deep Recurrent Neural Network.
    • LSTM - Long Short Term Memory Network.
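
For a quick orientation, here is a minimal sketch of instantiating one of the stateless networks (see the following sections for installation and import). The configuration shape and the forward call are taken from the training example further below; the concrete layer sizes are arbitrary illustration values.

import { DNN } from 'recurrent-js';

/* configuration shape as used in the training example below; layer sizes are arbitrary */
const opts = {
  architecture: { inputSize: 2, hiddenUnits: [4, 4], outputSize: 3 },
  training: { loss: 1e-11 }
};

/* instantiate a Deep Feedforward Neural Network */
const net = new DNN(opts);

/* a plain (untrained) forward pass: feed an input array, receive the output activations */
const output = net.forward([0, 1]);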

How to install as dependency

Download available @npm: recurrent-js

Install via command line:

npm install --save recurrent-js@latest

The project ships directly with the transpiled JavaScript code. For TypeScript development it also contains source map files and declaration files.

How to import?

The aforementioned classes can be imported from this npm module, e.g.:

import { NetOpts, DNN } from 'recurrent-js';

For JavaScript usage, require the classes from this npm module as follows:

// NetOpts is an interface (TypeScript only), but it gives clues about the required object properties (keys)
const DNN = require('recurrent-js').DNN;

How to train?

Training of neural networks is achieved by iteratively reinforcing wanted neural activations and suppressing unwanted activation paths through adjusting their respective slopes. The training works via an expression Graph, which memorizes the sequence of matrix operations executed during the forward pass of a neural network. The results of those matrix operations are held in Mat objects, which contain the resulting values (w) and their corresponding derivatives (dw). The Graph object can then calculate the resulting gradients and propagate a loss value back through the memorized sequence of matrix operations. Updating the weights of the neural connections accordingly reinforces wanted network activity and suppresses unwanted activation behavior. The described backpropagation can be performed as follows:

import { Graph, DNN } from 'recurrent-js';

/* define network structure configuration */
const netOpts = {
  architecture: { inputSize: 2, hiddenUnits: [2, 3], outputSize: 3 },
  training: { loss: 1e-11 }
};

/* instantiate network */
const net = new DNN(netOpts);

/* make it trainable */
net.setTrainability(true);

/**
 * Perform an iterative training by first forward-passing an input
 * and then backward-propagating the corresponding target output.
 * You'll receive the squared loss, which gives you a hint of the network's
 * approximation quality.
 * Repeat this until the quality of the forward-pass output
 * suits your needs, or the mean squared error is small enough, e.g. < 1.
 */
let squaredLoss: number;
do {
  const someInput = [0, 1]; /* an array of input values */
  const someExpectedOutput = [0, 1, 0]; /* an array of target output values */

  const someOutput = net.forward(someInput);

  net.backward(someExpectedOutput /* , alpha?: number */);
  squaredLoss = net.getSquaredLoss(someInput, someExpectedOutput);
} while (squaredLoss > 0.1);
/**
 * --> Keep in mind: you actually want a low MEAN squared loss; this is
 * left out of this example to keep the focus on the important parts.
 */

HINT #1: Providing an additional custom learning rate (alpha) for the backpropagation can accelerate the training. For further info please consult the respective test-examples.spec.ts file.
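
For illustration only, reusing net and someExpectedOutput from the training example above (the value 0.01 is an arbitrary pick, not a recommendation):

/* pass a custom learning rate (alpha) as the optional second argument of backward() */
net.backward(someExpectedOutput, 0.01);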

HINT #2: The Recurrent Neural Network Architectures (RNN, LSTM) are not yet updated to this new training API. Due to my current lack of time, this likely won't change for a while... (unless this repo gets some voluntary help). Please consult the README of the commit v.1.6.2 for the details of the former training style. Thanks!

Should you want to get some deeper insights into "how to train the network", it is recommended to have a look at the source of the DQN-Solver from the reinforce-js library (learnFromSarsaTuple-Method).

Example Applications

This project is an integral part of the reinforce-js library. As such it is vividly demonstrated in the learning-agents model.

Community Contribution

Everybody is more than welcome to contribute and extend the functionality!

Please feel free to contribute to this project as much as you wish to.

  1. clone from GitHub via git clone https://github.com/mvrahden/recurrent-js.git
  2. cd into the directory and npm install for initialization
  3. Try to npm run test. If everything is green, you're ready to go :sunglasses:

Before triggering a pull-request, please make sure that you've run all the tests via the testing command:

npm run test

This project relies on Visual Studio Code's built-in TypeScript linting facilities. It primarily follows the Google TypeScript Style Guide through the provided tslint-google.json configuration file.

License

As stated in the license file: MIT