slrm-logos v5.12.0

The efficient alternative to Neural Networks. Implements SLRM (Segmented Linear Regression Model) for neural compression and non-linear data modeling, achieving high precision with a fraction of the parameters of a traditional ANN.

Segmented Linear Regression Model (SLRM)

This project implements the Segmented Linear Regression Model (SLRM), an alternative to traditional Artificial Neural Networks (ANNs). The SLRM models datasets with piecewise linear functions, using a neural compression process to reduce complexity without compromising precision beyond a user-defined tolerance.

The core of the solution is the compression algorithm, which transforms an unordered dataset (DataFrame / X, Y) into a final, highly optimized dictionary, ready for prediction.

Project Structure

  • slrm-logos.py: Contains the complete implementation of the training process (Creation, Optimization, Compression) and the prediction function (predict). This code generates the final SLRM dictionary consumed by the web application.
  • index.html: Implementation of the visualization using D3.js and Vanilla JavaScript, which shows the dataset and the SLRM prediction curve (the piecewise linear function).

SLRM Architecture: The Training Process (Compression)

SLRM training proceeds through four main stages, implemented sequentially in slrm-logos.py:

1. Base Dictionary Creation

The SLRM is a non-iterative model (Instant Training). "Training" begins by sorting the input dataset (X, Y) in ascending order of X. This sorting transforms the initial DataFrame into the fundamental structure of the SLRM: a dictionary in which each point (X, Y) is indexed by its X value.

Input Set Example:

To demonstrate the process, we use the following unordered dataset (Input $X$, Output $Y$):

[-6.00,-6.00]
[+2.00,+4.00]
[-8.00,-4.00]
[+0.00,+0.00]
[+4.00,+10.0]
[-4.00,-6.00]
[+6.00,+18.0]
[-5.00,-6.01]
[+3.00,+7.00]
[-2.00,-4.00]

Once sorted by $X$, this becomes the Base Dictionary:

// Base Dictionary (Sorted by X)
[-8.00,-4.00]
[-6.00,-6.00]
[-5.00,-6.01]
[-4.00,-6.00]
[-2.00,-4.00]
[+0.00,+0.00]
[+2.00,+4.00]
[+3.00,+7.00]
[+4.00,+10.0]
[+6.00,+18.0]
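
A minimal JavaScript sketch of this sorting step, assuming the "x,y" line format accepted by train_slrm (the helper name buildBaseDictionary is illustrative, not part of the package API):

// Parse "x,y" lines and sort ascending by X to form the base dictionary.
function buildBaseDictionary(text) {
  return text
    .trim()
    .split('\n')
    .map(line => line.split(',').map(Number)) // each line -> [x, y]
    .sort((a, b) => a[0] - b[0]);             // ascending by X
}

const base = buildBaseDictionary(
  '-6,-6\n2,4\n-8,-4\n0,0\n4,10\n-4,-6\n6,18\n-5,-6.01\n3,7\n-2,-4'
);
console.log(base[0]); // [-8, -4], the lowest X comes first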

2. Optimization

Based on the sorted base dictionary, the linear function connecting each pair of adjacent points $(x_1, y_1)$ and $(x_2, y_2)$ is calculated. This step transforms the data $(X, Y)$ into the segment parameters:

  • Slope (P): Represents the Weight (W) of the segment. $$P = \frac{y_2 - y_1}{x_2 - x_1}$$
  • Y-Intercept (O): Represents the Bias (B) of the segment. $$O = y_1 - P \cdot x_1$$

The result is an Optimized Dictionary where each key $X_n$ (the start of the segment) stores the tuple $(P, O)$. This is the explicit knowledge of the model.

Optimized Dictionary Example (Weights and Biases):

// Optimized Dictionary (Weights and Biases)
[-8.00] (-1.00,-12.0)
[-6.00] (-0.01,-6.06)
[-5.00] (+0.01,-5.96)
[-4.00] (+1.00,-2.00)
[-2.00] (+2.00,+0.00)
[+0.00] (+2.00,+0.00)
[+2.00] (+3.00,-2.00)
[+3.00] (+3.00,-2.00)
[+4.00] (+4.00,-6.00)
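
A short sketch of this step (illustrative helper, not the package API): each adjacent pair of sorted points yields one segment's Weight and Bias via the formulas above.

// Turn sorted [x, y] points into { xStart: [P, O] } segments.
function optimize(sorted) {
  const segments = {};
  for (let i = 0; i < sorted.length - 1; i++) {
    const [x1, y1] = sorted[i];
    const [x2, y2] = sorted[i + 1];
    const P = (y2 - y1) / (x2 - x1); // Slope = Weight (W)
    const O = y1 - P * x1;           // Y-Intercept = Bias (B)
    segments[x1] = [P, O];
  }
  return segments;
}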

3. Lossless Compression (Geometric Invariance)

This step eliminates the geometric redundancy of the model. If three consecutive points $(X_{n-1}, X_n, X_{n+1})$ lie on the same straight line, the intermediate point $X_n$ is considered redundant.

  • Criterion: If $\text{Slope}(X_{n-1}) \approx \text{Slope}(X_n)$, the point $X_n$ is removed from the dictionary.
  • Result: Intermediate "neurons" that do not contribute to a change in the curve's direction are eliminated, achieving lossless compression of the dictionary's geometric information.

Lossless Compression Example:

The keys [+0.00] and [+3.00] are removed due to slope redundancy (each repeats the slope of the preceding segment), resulting in:

// Optimized Dictionary (Lossless Compression)
[-8.00] (-1.00,-12.0)
[-6.00] (-0.01,-6.06)
[-5.00] (+0.01,-5.96)
[-4.00] (+1.00,-2.00)
[-2.00] (+2.00,+0.00)
[+2.00] (+3.00,-2.00)
[+4.00] (+4.00,-6.00)
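
A sketch of the lossless pass (illustrative, not the package API): walk the sorted keys and keep a point only when its slope differs from the last kept slope.

// Drop keys whose segment is collinear with the previously kept one.
function compressLossless(segments, tol = 1e-9) {
  const keys = Object.keys(segments).map(Number).sort((a, b) => a - b);
  const kept = {};
  let lastSlope = null;
  for (const x of keys) {
    const [P, O] = segments[x];
    if (lastSlope === null || Math.abs(P - lastSlope) > tol) {
      kept[x] = [P, O]; // direction change: this "neuron" carries information
      lastSlope = P;
    }
    // else: same slope as the previous segment, so the point is redundant
  }
  return kept;
}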

4. Lossy Compression (Human Criterion)

This is the step for maximum compression, where a human criterion (the tolerance $\epsilon$) is applied to eliminate points whose contribution to the global error is below a predefined threshold.

  • Tolerance ($\epsilon$): An acceptable maximum error value (e.g., $0.03$).
  • Permanence Criterion: The point $X_{\text{current}}$ is considered Relevant and is kept if the absolute error when interpolating between its neighbors is greater than $\epsilon$.

$$\text{Error} = | Y_{\text{true}} - Y_{\text{hat}} |$$

If $\text{Error} > \epsilon$, the point is kept. If $\text{Error} \le \epsilon$, it is removed (lossy compression).

Final Lossy Compression Example ($\epsilon=0.03$):

[-5.00] is removed as its error is $0.01 \le 0.03$ when interpolated between [-6.00] and [-4.00].

// Optimized Dictionary (Final Lossy Compression)
[-8.00] (-1.00,-12.0)
[-6.00] (+0.00,-6.00) // Adjusted parameters due to interpolation
[-4.00] (+1.00,-2.00)
[-2.00] (+2.00,+0.00)
[+2.00] (+3.00,-2.00)
[+4.00] (+4.00,-6.00)
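
A sketch of the lossy pass, under the assumption that it operates on the surviving (X, Y) points and that the (P, O) parameters are recomputed afterwards, as in the adjusted [-6.00] entry above (helper name illustrative):

// Remove interior points whose interpolation error is within epsilon.
function compressLossy(points, epsilon) {
  const kept = points.slice(); // sorted [x, y] pairs
  let i = 1;
  while (i < kept.length - 1) {
    const [x0, y0] = kept[i - 1];
    const [x1, y1] = kept[i];
    const [x2, y2] = kept[i + 1];
    const yHat = y0 + ((y2 - y0) / (x2 - x0)) * (x1 - x0); // neighbor interpolation
    if (Math.abs(y1 - yHat) <= epsilon) {
      kept.splice(i, 1); // Error <= epsilon: the point is removed
    } else {
      i++;               // Error > epsilon: the point is Relevant and kept
    }
  }
  return kept; // re-run the Optimization step to refresh each segment's (P, O)
}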

5. Prediction and Generalization

The predict(X) function uses the final, compressed SLRM dictionary to generate instant inferences through a "search and execute" architecture.

5.1 Inference Mechanism

  1. Search: For a new input $X$, the model finds the largest key $X_n$ in the optimized dictionary that satisfies $X_n \le X$.
  2. Execution: The linear formula is applied using the Weight ($P$) and Bias ($O$) parameters stored at that key through the Master Equation:

$$Y_{\text{predicted}} = X \cdot P + O$$
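
A minimal sketch of this search-and-execute loop (simplified; the published API wraps it as predict_slrm(x, model, originalData)):

// Find the largest key X_n <= x, then apply Y = X * P + O.
function predict(x, segments) {
  const keys = Object.keys(segments).map(Number).sort((a, b) => a - b);
  let key = keys[0]; // below the training range, the first segment extrapolates
  for (const k of keys) {
    if (k <= x) key = k;
    else break;
  }
  const [P, O] = segments[key]; // Weight and Bias of the matched segment
  return x * P + O;             // the Master Equation
}

console.log(predict(-5, { '-8': [-1, -12], '-6': [0, -6], '-4': [1, -2] })); // -6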

5.2 Generalization (Extrapolation)

The SLRM handles data outside the training limits in two ways:

  • Segmental Extrapolation: The boundary linear segment (the first or the last) is extended to infinity, maintaining its trajectory.
  • Zonal Projection: (Optional) Analysis of the progression of Weights ($P$) and Biases ($O$) near the limits to project the next segment based on the global network pattern.

6. SLRM Superiority: Efficiency vs. Standard Models

While SLRM is fundamentally an architecture for Knowledge Compression, its performance in modeling complex non-linear data surpasses standard parametric models and demonstrates structural efficiency against complex hierarchical models like Decision Trees.

A comparative test was conducted against scikit-learn models using a challenging 15-point non-linear dataset ($\epsilon=0.5$).

Performance and Complexity Metrics (Decision Sheet)

The results show that SLRM achieves near-perfect accuracy with the highest data compression, demonstrating its structural advantage in simplicity and interpretability.

| Model | $R^2$ (Coefficient of Determination) | Model Complexity | Compression Rate |
| :--- | :--- | :--- | :--- |
| SLRM (Segmented) | 0.9893 | 6 (Key Points/Segments) | 60.00% |
| Decision Tree (Depth 5) | 0.9964 | 9 (Leaf Nodes/Regions) | 0% |
| Polynomial (Degree 3) | 0.9328 | 4 (Coefficients) | 0% |
| SLR (Simple Linear) | 0.7399 | 2 (Parameters) | 0% |

Conclusion: SLRM achieves $R^2=0.9893$ with 60% data compression using only 5 linear segments (6 key points). The Decision Tree reaches a slightly higher $R^2=0.9964$ but requires 9 leaf regions to do so, confirming SLRM's superior geometric efficiency and inherent simplicity.
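
For reference, the coefficient of determination reported in the table is the standard

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$

where $\hat{y}_i$ is the model's prediction and $\bar{y}$ is the mean of the observed outputs.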


7. SLRM Extensions and Operational Properties

The modular nature of the SLRM segments provides operational properties that distinguish it from iterative neural network models:

7.1 Modularity and Hot Swapping

Since each segment is autonomous and does not interact with the weights of other segments, the SLRM allows Hot Swapping: a specific sector of the dictionary can be updated, optimized, or extended with new data in real time, without interrupting inference for the rest of the network.
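
Concretely, a segment swap is just a key write; a minimal sketch (hotSwapSegment is a hypothetical helper, not part of the published API):

// Replace (or add) the parameters of one segment in place.
// No other key is touched, so inference on the rest of the
// dictionary continues unaffected.
function hotSwapSegment(segments, xKey, P, O) {
  segments[xKey] = [P, O];
  return segments;
}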

7.2 Non-Linear Activation and Multimodal Compression

The compression process can be extended to locally replace a set of multiple linear segments with a single higher-order function (e.g., quadratic or exponential), provided the substitution error remains within tolerance ($\epsilon$). This generates Multimodal Compression and further compacts the architecture.

7.3 Transparent Box (Full Interpretability)

The SLRM is a "transparent box" model. It stores knowledge explicitly (Slope $P$ and Y-Intercept $O$ for each segment). This allows for full traceability of every prediction and is ideal for environments requiring high interpretability and auditing.


8. Installation and Usage

SLRM-LOGOS is designed to be extremely lightweight with zero external dependencies.

Installation via NPM

npm install slrm-logos

JavaScript Usage Example (Node.js)

const { train_slrm, predict_slrm } = require('slrm-logos');

// 1. Training data (Format: "x, y")
const data = "1,2\n2,4\n3,8\n4,16";

// 2. Train the model with a tolerance (Epsilon) of 0.5
const { model, originalData, maxError } = train_slrm(data, 0.5);

// 3. Perform a prediction
const inputX = 2.5;
const prediction = predict_slrm(inputX, model, originalData);

console.log(`Prediction for X=${inputX}: Y=${prediction.y_pred}`);
console.log(`Model Max Error: ${maxError}`);

9. Conceptual Bibliography

The following conceptual references inspire or contrast with the fundamental principles of the Segmented Linear Regression Model (SLRM):

  1. Segmented Regression and Curve Fitting: Works on approximating complex functions using piecewise defined regression models.
  2. Quantization and Model Compression: Techniques aimed at reducing the size of neural models for implementation on memory-constrained hardware.
  3. White Box Models (Interpretability): Studies on the traceability and understanding of a prediction model's decisions.
  4. Modularity and Decoupled Architectures: Software design principles that allow for local modification without collateral effects.
  5. Time Series Theory: Works on detecting progression patterns (Metaprogression) to perform more accurate long-range extrapolations.

Authors

  • Alex Kinetic
  • Logos

License

This project is licensed under the MIT License - see the LICENSE file for details.

"Simplicity is the ultimate sophistication." - Segmented Linear Regression Model (SLRM)