
fit-tnt

v9.0.0

TNT - A Least Squares Iterative Solver.

Downloads: 22

Readme

TNT


Finds $x$ in $A\,x = b$ by least squares. Supports multiple right-hand sides.

The method is based on the TNT paper by J. M. Myre et al.

  • Speed: best when both of these apply:

    • $\large\frac{\mathrm{rows}}{\mathrm{cols}} \geq 1$.
    • Data columns $\geq 10$, though it is worth trying in any case.
  • Accuracy: frequently as accurate as QR or the pseudo-inverse, but with a larger (normally still acceptable) error on ill-conditioned matrices.

For speed, see comparison here.

For fits with a non-zero intercept, remember to push a $1$ onto each row of $A$; the intercept coefficient will be the last item in XBest.
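For example, a minimal sketch of fitting a line $y = m\,x + c$, using the same `TNT` constructor shown in Install and Use below (the data values here are made up for illustration):

```js
import { TNT } from 'fit-tnt';

// Each row gets a trailing 1 so the last coefficient acts as the intercept.
const A = [
  [1, 1],
  [2, 1],
  [3, 1],
]; // x values plus a column of ones
const b = [3.1, 4.9, 7.0]; // noisy y, roughly 2x + 1

const { XBest } = new TNT(A, b);
// XBest[0] is approximately the slope m, XBest[1] the intercept c (last item)
```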

A more thorough webpage to compare speed/accuracy will hopefully be included soon.

Install and Use

```bash
npm i fit-tnt
```

```js
import { TNT } from 'fit-tnt';

const A = [
  [1, 2, 3],
  [4, 5, 6],
]; // 2x3 design matrix
const b = [6, 12]; // or [[6], [12]]

try {
  const { XBest, metadata } = new TNT(A, b);
  console.log(XBest); // least-squares solution
} catch (e) {
  console.error(e);
}
```

A related method is Ridge Regression.
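For reference, ridge regression solves a regularized version of the same normal equations; the $\lambda$ term below is standard notation, not part of fit-tnt's API:

$$(A^T A + \lambda\,I)\,x = A^T b$$

The $\epsilon\,I$ bump used in step 3 of the algorithm below is the same trick with a tiny $\lambda$.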

Documentation

Comparison: TNT vs Pseudo-Inverse

  • Matrix shape: 500 rows × 200 columns.
  • Speed-up: 5.20× (TNT over pseudo-inverse).
  • With the shape transposed (200 rows × 500 columns), TNT is slower.
| Method        | Avg Exec Time | Avg Error |
| ------------- | ------------- | --------- |
| TNT           | 0.094709      | 0.049457  |
| PseudoInverse | 0.492720      | 0.049457  |

Misc.

Linear systems appear everywhere: $A\,x = b$. A unique solution $x$ rarely exists, and least squares is one way to select the best approximation:

$\min_x G(x), \qquad G(x) = \lVert A\,x - b \rVert_2^2$

Setting the gradient to zero, $\nabla G(x) = \vec{0}$, gives the normal equations: $A^T A\,x = A^T b$.
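Written out, that step is one line of calculus:

$$\nabla G(x) = 2\,A^T (A\,x - b) = \vec{0} \quad\Longrightarrow\quad A^T A\,x = A^T b$$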

This is again a linear system, $S\,x = y$, with $S = A^T A$ and $y = A^T b$. If the symmetric matrix $S$ is positive definite (equivalently, $A$ has linearly independent columns), it is invertible and can be solved via the Cholesky decomposition $S = L\,L^T$ followed by two triangular solves, which is fast and simple.
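As a small illustration in plain JavaScript (the helper names `forwardSub` and `backSub` are hypothetical, not fit-tnt exports), solving a factored system $L\,L^T x = y$ takes one forward and one back substitution:

```js
// Solve L w = y for w, with L lower-triangular (forward substitution).
function forwardSub(L, y) {
  const w = [];
  for (let i = 0; i < y.length; i++) {
    let s = y[i];
    for (let j = 0; j < i; j++) s -= L[i][j] * w[j];
    w.push(s / L[i][i]);
  }
  return w;
}

// Solve L^T x = w for x (back substitution); note (L^T)[i][j] = L[j][i].
function backSub(L, w) {
  const n = w.length;
  const x = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = w[i];
    for (let j = i + 1; j < n; j++) s -= L[j][i] * x[j];
    x[i] = s / L[i][i];
  }
  return x;
}

// Example: S = [[4, 2], [2, 3]] factors as L = [[2, 0], [1, sqrt(2)]].
const L = [[2, 0], [1, Math.SQRT2]];
const y = [8, 7];
console.log(backSub(L, forwardSub(L, y))); // [1.25, 1.5], the solution of S x = y
```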

When computed explicitly (as done here), $S = A^T A$ has condition number $\kappa(A^T A) = \kappa(A)^2$, so this approach fails for near-singular $S$ (e.g. $\kappa(A) = 10^4$ already gives $\kappa(S) = 10^8$). Preconditioning tries to reduce this problem; a larger condition number also tends to slow the convergence of iterative methods.

TNT

The Conjugate Gradient for Normal Residual (CGNR) is a popular method for solving Sparse Least-Squares problems, where the design matrix has many zeros.

For a wide $A$ ($m \times n$ with $\frac{n}{m} \gt 1$), computing and factoring $A^T A$ becomes computationally demanding, given its $n^2$ separate elements; here the pseudo-inverse will be faster. TNT tends to be faster when $m \geq n$.

TNT preconditions $A^T A$ so that it is invertible and has a smaller condition number, then solves iteratively using CGNR.

Positive definite means that $x^T M x \gt 0$ for all $x \neq 0$. In our case, $x^T (A^T A)\,x = (A\,x)^T (A\,x) \gt 0$.

The factor $A\,x$ is non-zero for every $x \neq 0$ exactly when the columns of $A$ are linearly independent. In that case $A^T A$ is positive definite and therefore invertible/non-singular.

So we want to precondition $A^T A$ so that it is invertible, and we also want to avoid tiny numbers on the diagonal of the decomposition. The steps are as follows (a runnable sketch follows the list):

  1. Carry out the product $N = A^T A$ ($N$ is symmetric).
  2. Attempt a Cholesky decomposition: $R,\ ok = \mathrm{Cho}(N)$, where $ok$ flags success.
  3. If it failed ($!ok$): set $N = N + \epsilon I$, with $\epsilon$ a tiny number, and factor again.
  4. Initial residual: $r_0 = A\,x_0 - b$.
  5. Gradient per coefficient: $g_0 = A^T r_0$.
  6. Error in the coefficients: $z_0 = R^{-1} g_0$; initial search direction $p_0 = -z_0$.
  7. Get $\alpha$: $\alpha_i = \frac{z_i \cdot g_i}{(A\,p_i) \cdot (A\,p_i)}$.
  8. Update $x$: $x_{i+1} = x_i + \alpha_i\,p_i$.
  9. Next residual: $r_{i+1} = r_i + \alpha_i\,A\,p_i$.
  10. New gradient: $g_{i+1} = A^T r_{i+1}$.
  11. New error in the coefficients: $z_{i+1} = R^{-1} g_{i+1}$.
  12. Get $\beta$: $\beta_i = \frac{z_{i+1} \cdot g_{i+1}}{z_i \cdot g_i}$, update the direction $p_{i+1} = -z_{i+1} + \beta_i\,p_i$, and repeat from step 7.
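Below is a hedged sketch of these steps in plain JavaScript. It is not fit-tnt's actual implementation: the names (`tntSketch`, `precondSolve`, `cholesky`) and the fixed $\epsilon$ are assumptions for illustration, and the direction bookkeeping follows standard preconditioned CGNR.

```js
// A sketch of the steps above in plain JavaScript; NOT fit-tnt's source.

function transpose(A) {
  return A[0].map((_, j) => A.map((row) => row[j]));
}

function matVec(A, v) {
  return A.map((row) => row.reduce((s, a, j) => s + a * v[j], 0));
}

function dot(u, v) {
  return u.reduce((s, ui, i) => s + ui * v[i], 0);
}

// Lower-triangular L with N = L L^T; throws if N is not positive definite.
function cholesky(N) {
  const n = N.length;
  const L = N.map(() => new Array(n).fill(0));
  for (let i = 0; i < n; i++) {
    for (let j = 0; j <= i; j++) {
      let s = N[i][j];
      for (let k = 0; k < j; k++) s -= L[i][k] * L[j][k];
      if (i === j) {
        if (s <= 0) throw new Error('not positive definite');
        L[i][i] = Math.sqrt(s);
      } else {
        L[i][j] = s / L[j][j];
      }
    }
  }
  return L;
}

// z = (L L^T)^{-1} g via one forward and one back substitution (steps 6, 11).
function precondSolve(L, g) {
  const n = g.length;
  const w = new Array(n).fill(0);
  for (let i = 0; i < n; i++) {
    let s = g[i];
    for (let j = 0; j < i; j++) s -= L[i][j] * w[j];
    w[i] = s / L[i][i];
  }
  const z = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = w[i];
    for (let j = i + 1; j < n; j++) s -= L[j][i] * z[j];
    z[i] = s / L[i][i];
  }
  return z;
}

function tntSketch(A, b, maxIter = 20, tol = 1e-12) {
  const At = transpose(A);
  let N = At.map((ci) => At.map((cj) => dot(ci, cj))); // step 1: N = A^T A
  let L;
  try {
    L = cholesky(N); // step 2
  } catch {
    N = N.map((row, i) => row.map((v, j) => (i === j ? v + 1e-10 : v)));
    L = cholesky(N); // step 3: retry with N + eps * I
  }
  let x = new Array(At.length).fill(0);
  let r = matVec(A, x).map((ri, i) => ri - b[i]); // step 4: r0 = A x0 - b
  let g = matVec(At, r); // step 5: gradient
  let z = precondSolve(L, g); // step 6
  let p = z.map((v) => -v); // initial descent direction
  let zg = dot(z, g);
  for (let it = 0; it < maxIter && dot(g, g) > tol; it++) {
    const Ap = matVec(A, p);
    const alpha = zg / dot(Ap, Ap); // step 7
    x = x.map((xi, i) => xi + alpha * p[i]); // step 8
    r = r.map((ri, i) => ri + alpha * Ap[i]); // step 9
    g = matVec(At, r); // step 10
    z = precondSolve(L, g); // step 11
    const zgNext = dot(z, g);
    const beta = zgNext / zg; // step 12
    p = z.map((v, i) => -v + beta * p[i]);
    zg = zgNext;
  }
  return x;
}

// Hypothetical usage: recover y = 2x + 1 from three points.
console.log(tntSketch([[1, 1], [2, 1], [3, 1]], [3, 5, 7])); // ~ [2, 1]
```

Because the Cholesky factor of $N$ is a near-exact preconditioner, the loop typically converges in very few iterations; in the toy example above it finishes in one.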

License

MIT