
@sparse-supernova/spike-qubo-solver

v2.0.0

A spike-driven QUBO and Max-Cut solver for medium-scale combinatorial optimization in pure JavaScript

Spike QUBO Solver

USL repo-saturation audit passed — no proprietary algorithms or high-dimensional signatures detected.

Status: experimental but tested (see TEST_SUMMARY.md).

A lightweight, spike-based solver for QUBO and Max-Cut problems, designed for medium-scale combinatorial optimisation in pure JavaScript.

This package gives you:

  • A spike-style QUBO / Max-Cut solver.
  • Baseline algorithms (Simulated Annealing, greedy hill-climbing).
  • Simple metrics and benchmark scripts.
  • A small CLI for running JSON instances from the command line.

It is intended as a public optimisation sandbox – suitable for experiments, benchmarks, and teaching – without exposing any private sparse encoders, anomaly detectors, or advanced diagnostics.


Security / IP Notice

This repository contains only public, non-proprietary code intended for experimentation, benchmarking, and education.

No private algorithms, internal research components, or proprietary logic are included in this package.

Specifically, this repository does not include:

  • any Sparse Supernova encoders or signature kernels
  • any USAD (Universal Sparse Anomaly Detector) components
  • any USL (Universal Saturation Law) or FRAI asymmetry metrics
  • any Quantum-HAL, hardware abstraction, or neuromorphic stack logic
  • any optimisation engines, kernels, or data structures used in private systems
  • any code copied from private or internal repositories

All optimisation routines provided here (spike, simulated annealing, greedy) are generic classical heuristics implemented solely for public use.

This project is published under an open-source license for transparency and community experimentation.

The maintainers make no commitment that it reflects, approximates, or reveals any functionality of the private Sparse/USAD/USL/Q-HAL systems.


Features

  • Spike-based solver for QUBO and Max-Cut.
  • Echo-enhanced confidence metrics (NEW in v2.0): know which solutions to trust.
  • Baselines included:
    • Simulated Annealing (simulatedAnnealingQubo)
    • Greedy QUBO / Max-Cut (greedyQubo, greedyMaxCut)
  • Pure ESM, no native dependencies.
  • Metrics helpers (summarizeRuns) for quick benchmarking.
  • Examples + test scripts for small and medium problems.
  • CLI (spike-qubo-solver) for running JSON instances.

This repo contains only the core spike solver, simple baselines, and generic metrics.
Advanced diagnostics (e.g. USL/FRAI, auto-tuning, custom sparse encoders, or neuromorphic backends) live in separate internal tools and are not part of this package.


Who is this for?

  • Developers working in JavaScript/TypeScript who want a lightweight QUBO / Max-Cut solver without pulling in C++ or Python stacks.

  • People building edge or serverless systems (for example on Cloudflare Workers) who need millisecond-scale combinatorial optimization.

  • Researchers and hackers who want a simple spike-style heuristic baseline alongside simulated annealing or greedy solvers.

When to use vs not use

Use this when:

  • You have small to medium problems (up to low thousands of variables) where getting good solutions fast matters more than provable optimality.

  • You want to integrate QUBO / Max-Cut solving directly into Node.js, Cloudflare Workers, or other JS runtimes.

  • You need a simple, inspectable implementation for experiments, benchmarking, teaching, or prototyping.

Do not use this when:

  • You need mathematically proven optimal solutions or tight optimality gaps on very large instances (use exact solvers / commercial MIP/QP solvers instead).

  • You need advanced modeling features (constraints beyond QUBO, large-scale MIP, etc.) provided by full optimization frameworks.

  • You require hardware-accelerated or quantum hardware backends; this project is a pure software heuristic.


USL Repo-Sat Audit

This repository has been checked using a USL repo-saturation audit, a safety scan designed to ensure that no proprietary high-dimensional algorithms, internal research kernels, or signature-based optimisation components are present in the public codebase.

The audit verifies that the repository contains:

  • no sparse signature encoders
  • no anomaly-detection kernels
  • no universal scaling or asymmetry modules
  • no neuromorphic, quantum, or hardware abstraction logic
  • no high-dimensional patterns characteristic of internal systems

The current version of this package passed the audit, indicating that it contains only the intended public, generic optimisation heuristics and no private or sensitive IP.


Installation

npm install @sparse-supernova/spike-qubo-solver

npm: https://www.npmjs.com/package/@sparse-supernova/spike-qubo-solver

Node.js >= 20 is recommended.

API Demo

Try the solver without installing anything! The package is deployed as a Cloudflare Worker with a simple REST API.

Live endpoint: https://spike-qubo-solver-worker.sparsesupernova.workers.dev/api/solve

⚠️ Important: The endpoint only accepts POST requests. Browsers will send GET requests by default, so use curl, fetch, or a REST client.

Example: Solve a QUBO

Using curl:

curl -X POST https://spike-qubo-solver-worker.sparsesupernova.workers.dev/api/solve \
  -H "Content-Type: application/json" \
  -d '{
    "problem": {
      "kind": "qubo",
      "payload": [
        [0, 0, -1],
        [1, 1, -1],
        [2, 2, -1],
        [0, 1, 2]
      ]
    },
    "options": {
      "maxIterations": 1000
    }
  }'

Using JavaScript fetch:

const response = await fetch('https://spike-qubo-solver-worker.sparsesupernova.workers.dev/api/solve', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    problem: {
      kind: 'qubo',
      payload: [[0, 0, -1], [1, 1, -1], [2, 2, -1], [0, 1, 2]]
    },
    options: { maxIterations: 1000 }
  })
});
const result = await response.json();
console.log(result);

Example: Solve Max-Cut

Using curl:

curl -X POST https://spike-qubo-solver-worker.sparsesupernova.workers.dev/api/solve \
  -H "Content-Type: application/json" \
  -d '{
    "problem": {
      "kind": "maxcut",
      "payload": {
        "n": 4,
        "edges": [
          [0, 1, 1],
          [1, 2, 1],
          [2, 3, 1],
          [3, 0, 1]
        ]
      }
    },
    "options": {
      "maxIterations": 1000
    }
  }'

Response format:

{
  "bestEnergy": -2.5,
  "state": [1, 0, 1],
  "iterations": 1000,
  "timeMs": 12.5
}

Basic usage

QUBO

import { solveQubo } from '@sparse-supernova/spike-qubo-solver';

const qubo = {
  n: 3,
  terms: [
    [0, 0, -1],
    [1, 1, -1],
    [2, 2, -1],
    [0, 1, 2],
    [1, 2, 2]
  ]
};

const result = await solveQubo(qubo, {
  maxSteps: 1200,
  trace: true
});

console.log('Best energy:', result.bestEnergy);
console.log('Iterations:', result.iterations);
console.log('Time (ms):', result.timeMs);

Max-Cut

import { solveMaxCut } from '@sparse-supernova/spike-qubo-solver';

const graph = {
  n: 4,
  edges: [
    [0, 1, 1],
    [1, 2, 1],
    [2, 3, 1],
    [3, 0, 1]
  ]
};

const result = await solveMaxCut(graph, {
  maxSteps: 1500,
  trace: false
});

console.log('Cut value:', result.cutValue);
console.log('Iterations:', result.iterations);
console.log('Time (ms):', result.timeMs);

Echo-Enhanced Confidence Metrics (NEW in v2.0)

The echo protocol provides confidence metrics for solutions, allowing you to know which solutions to trust for production deployment.

How It Works

The echo protocol uses a forward-backward optimization technique:

  1. Forward phase: Standard annealing (hot → cold) finds a solution
  2. Checkpoint: Save state at midpoint
  3. Backward phase: Reverse annealing (cold → hot) tests stability
  4. Measure: How well did we recover the initial state?

High echo fidelity = deep minimum = trustworthy solution
Low echo fidelity = shallow minimum = fragile solution
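The recovery measurement in step 4 can be sketched as a simple bitwise fidelity between the checkpoint state and the state reached after the backward phase. This is an illustrative sketch of the idea, not the package's internal implementation; the threshold values mirror the Confidence Levels table below.

```javascript
// Illustrative sketch only — not the package's internal echo implementation.
// Fidelity = fraction of bits recovered after the backward (reverse) phase.
function echoFidelity(checkpointState, recoveredState) {
  if (checkpointState.length !== recoveredState.length) {
    throw new Error('state length mismatch');
  }
  let matches = 0;
  for (let i = 0; i < checkpointState.length; i++) {
    if (checkpointState[i] === recoveredState[i]) matches++;
  }
  return matches / checkpointState.length; // 1.0 = perfect recovery
}

// Map a fidelity score to the confidence levels used in this README.
function confidenceLevel(fidelity) {
  if (fidelity > 0.95) return 'VERY_HIGH';
  if (fidelity >= 0.85) return 'HIGH';
  if (fidelity >= 0.7) return 'MEDIUM';
  if (fidelity >= 0.5) return 'LOW';
  return 'VERY_LOW';
}

console.log(echoFidelity([1, 0, 1, 1], [1, 0, 1, 0])); // 0.75
console.log(confidenceLevel(0.92)); // 'HIGH'
```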

Basic Usage with Echo

import { solveQubo } from '@sparse-supernova/spike-qubo-solver';

const result = await solveQubo(qubo, {
  maxSteps: 1500,
  echo: true  // Enable echo protocol
});

console.log('Energy:', result.bestEnergy);
console.log('Confidence:', result.confidence);        // 0.0 - 1.0
console.log('Confidence Level:', result.confidenceLevel); // VERY_HIGH, HIGH, MEDIUM, LOW, VERY_LOW
console.log('Robustness:', result.robustness);        // VERY_ROBUST, ROBUST, MODERATE, FRAGILE, UNSTABLE
console.log('Is Stable:', result.isStable);           // true/false

Confidence Levels

| Fidelity | Level | Meaning | Use Case |
|----------|-------|---------|----------|
| > 0.95 | VERY_HIGH | Deep global minimum | Production ready ✅ |
| 0.85-0.95 | HIGH | Good local minimum | Safe to use ✅ |
| 0.70-0.85 | MEDIUM | Moderate | Verify first ⚠️ |
| 0.50-0.70 | LOW | Fragile | Retry needed ⚠️ |
| < 0.50 | VERY_LOW | Unstable | Not trustworthy ❌ |

Convenience Functions

import { 
  solveQuboWithConfidence, 
  solveQuboForProduction 
} from '@sparse-supernova/spike-qubo-solver';

// Always use echo protocol
const result = await solveQuboWithConfidence(qubo, {
  maxSteps: 1500
});

// Production validation (throws if not approved)
try {
  const productionResult = await solveQuboForProduction(qubo, {
    maxSteps: 1500,
    minConfidence: 0.85  // Default: 0.85
  });
  deployToProduction(productionResult.solution);
} catch (error) {
  console.log('Solution not production-ready:', error.message);
}

Performance Trade-offs

| Mode | Speed | Confidence | Use When |
|------|-------|-----------|----------|
| Standard | ✅ Fast (~50ms) | ❌ None | Prototyping, well-known problems |
| Echo | ⚠️ 2x slower (~100ms) | ✅ Yes | Production, critical applications |

Example: Echo vs Standard

// Standard (fast, no confidence)
const standard = await solveQubo(qubo, { maxSteps: 1000 });
console.log('Energy:', standard.bestEnergy);
// No confidence metric available

// Echo (2x time, with confidence)
const echo = await solveQubo(qubo, { maxSteps: 1000, echo: true });
console.log('Energy:', echo.bestEnergy);
console.log('Confidence:', echo.confidence);  // 0.92
console.log('Level:', echo.confidenceLevel);  // 'HIGH'
console.log('Stable:', echo.isStable);        // true

if (echo.isStable && echo.confidence > 0.9) {
  console.log('✅ High confidence - solution is trustworthy!');
}

See examples/example_echo.mjs for complete examples.

Baselines

You can compare the spike solver against the built-in baselines.

Simulated Annealing (QUBO)

import { simulatedAnnealingQubo } from '@sparse-supernova/spike-qubo-solver';

const res = await simulatedAnnealingQubo(qubo, {
  maxSteps: 3000,
  T0: 5.0,
  alpha: 0.995,
  recordTrace: false
});

console.log('SA best energy:', res.bestEnergy);
console.log('SA time (ms):', res.timeMs);

Greedy hill-climbing

import { greedyQubo, greedyMaxCut } from '@sparse-supernova/spike-qubo-solver';

const quboGreedy = await greedyQubo(qubo, { maxPasses: 10 });
const maxCutGreedy = await greedyMaxCut(graph, { maxPasses: 10 });

console.log('Greedy QUBO energy:', quboGreedy.bestEnergy);
console.log('Greedy Max-Cut value:', maxCutGreedy.bestCut);

CLI

After installation, you can use the CLI:

# QUBO: JSON file with { n, terms }
spike-qubo-solver solve-qubo examples/qubo_example.json

# Max-Cut: JSON file with { n, edges }
spike-qubo-solver solve-maxcut examples/graph_example.json

The CLI prints a JSON result (energy, iterations, time, etc.) to stdout.

Public API (Supported Surface)

This section defines the complete supported API surface for spike-qubo-solver.

No other functions, modules, or behaviours are considered public or stable.

Anything not listed here is internal and may change without notice.


1. QUBO Solver

solveQubo(qubo, options?) → Promise<ResultQubo>

Solve a Quadratic Unconstrained Binary Optimisation problem.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| qubo | { n: number, terms: [i,j,q][] } | QUBO instance in sparse triplet format |
| options.maxSteps | number | Max iterations (default: 2000) |
| options.seed | number | Optional seed for reproducibility |
| options.trace | boolean | If true, return energy trace |

Returns:

interface ResultQubo {
  bestEnergy: number;
  state: number[];        // 0/1 assignment
  iterations: number;
  timeMs: number;
  trace?: { step: number; energy: number }[];
}

2. Max-Cut Solver

solveMaxCut(graph, options?) → Promise<ResultMaxCut>

Solve Max-Cut using the same spike optimiser.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| graph | { n: number, edges: [i,j,w][] } | Undirected weighted graph |
| options.maxSteps | number | Max steps (default: 2000) |
| options.seed | number | Optional seed |
| options.trace | boolean | Trace toggle |

Returns:

interface ResultMaxCut {
  cutValue: number;
  state: number[];        // 0/1 cut membership
  iterations: number;
  timeMs: number;
  trace?: { step: number; energy: number }[];
}

3. Encoders

encodeMaxCutToQubo(graph) → { n, terms }

Convert a Max-Cut instance to an equivalent QUBO.

evaluateMaxCut(graph, state) → number

Compute the cut value of a given 0/1 state.
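As an illustration of what such an encoding involves, here is the textbook Max-Cut → QUBO reduction in plain JavaScript. This is a sketch for intuition; the package's encodeMaxCutToQubo may be implemented differently, but any correct encoding satisfies the same identity (minimum energy = negated maximum cut).

```javascript
// Sketch of the standard Max-Cut → QUBO encoding (illustrative; the
// package's encodeMaxCutToQubo may differ internally).
// Per edge (i, j, w): cut contribution = w * (x[i] + x[j] - 2*x[i]*x[j]),
// so minimizing E = -cut uses the triplets [i,i,-w], [j,j,-w], [i,j,2w].
function maxCutToQubo(graph) {
  const terms = [];
  for (const [i, j, w] of graph.edges) {
    terms.push([i, i, -w], [j, j, -w], [i, j, 2 * w]);
  }
  return { n: graph.n, terms };
}

// Energy of a 0/1 state under sparse-triplet QUBO terms.
function quboEnergy(terms, x) {
  let e = 0;
  for (const [i, j, q] of terms) e += q * x[i] * x[j];
  return e;
}

const square = { n: 4, edges: [[0, 1, 1], [1, 2, 1], [2, 3, 1], [3, 0, 1]] };
const qubo = maxCutToQubo(square);
// The alternating partition cuts all 4 edges: energy = -cutValue = -4.
console.log(quboEnergy(qubo.terms, [1, 0, 1, 0])); // -4
```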


4. Baselines (Included for Comparison)

All baselines are generic classical heuristics — safe to expose publicly and not related to private algorithms.

simulatedAnnealingQubo(qubo, options?) → Promise<ResultSA>

Parameters:

  • maxSteps (default: 3000)
  • T0 (initial temperature, default: 5.0)
  • alpha (cooling factor, default: 0.995)
  • recordTrace (boolean)

Returns:

{
  bestEnergy: number;
  bestState: number[];
  iterations: number;
  timeMs: number;
  acceptedMoves: number;
  trace?: { step: number; energy: number }[];
}

greedyQubo(qubo, options?) → Promise<{ bestEnergy: number, bestState: number[], passes: number }>

Simple hill-climber.

Parameters:

  • maxPasses (default: 10)

greedyMaxCut(graph, options?) → Promise<{ bestCut: number, bestState: number[], passes: number }>

Greedy Max-Cut improvement.

Parameters:

  • maxPasses (default: 10)

5. Metrics

summarizeRuns(results[]) → Summary

Aggregate statistics over multiple runs:

  • min / max / mean / std best energy
  • min / max / mean / std timeMs
  • min / max / mean / std iterations

Safe, generic statistics only.
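For intuition, the kind of aggregation summarizeRuns performs can be sketched in a few lines of plain JavaScript (illustrative only; the actual return shape of summarizeRuns may differ):

```javascript
// Illustrative sketch of min/max/mean/std aggregation over repeated runs
// (not the package's summarizeRuns itself).
function summarize(values) {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const variance = values.reduce((a, v) => a + (v - mean) ** 2, 0) / n;
  return {
    min: Math.min(...values),
    max: Math.max(...values),
    mean,
    std: Math.sqrt(variance),
  };
}

// e.g. aggregate bestEnergy collected across repeated solver runs:
const energies = [2, 4, 4, 4, 5, 5, 7, 9];
console.log(summarize(energies)); // { min: 2, max: 9, mean: 5, std: 2 }
```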


6. CLI

spike-qubo-solver solve-qubo <path | url>

Run solver on a QUBO JSON file.

spike-qubo-solver solve-maxcut <path | url>

Run solver on a Max-Cut JSON file.


❌ Explicitly Out of Scope (Not Supported, Never Exported)

This table states the project's key safety guarantee.

| Area | Status | Reason |
|------|--------|--------|
| Sparse encoders / signature kernels | ❌ Not public | Proprietary IP |
| USAD anomaly detection | ❌ Not public | Proprietary IP |
| Universal Saturation Law (USL) | ❌ Not public | Proprietary physics layer |
| FRAI / asymmetry metrics | ❌ Not public | Private research |
| Q-HAL / neuromorphic device abstraction | ❌ Not public | Internal only |
| Any file under private repos | ❌ Never exported | Safety/IP boundary |
| Quantum-driven kernels | ❌ Not included | Private research |


Benchmarks & metrics

There are simple benchmark scripts under tests/:

  • npm run bench – small, illustrative benchmark.
  • npm run bench:compare – compare spike vs SA vs greedy on random QUBO/Max-Cut instances.

The summarizeRuns helper in src/metrics.mjs lets you quickly compute min/mean/std summaries over multiple runs.

There is also a generator for ready-to-run benchmark instances:

npm run gen:bench-instances

This populates examples/benchmarks/ with QUBO and Max-Cut problems at different sizes (e.g. 50, 100, 300 variables). In our own experiments, the 300-variable regime is often a practical "sweet spot" for spike-style heuristics: rich dynamics at millisecond-scale runtimes.

Why ~300 variables?

In our internal experiments, problems around 300 variables often emerge as a practical sweet spot for this spike-style solver:

  • They are large enough to be interesting (non-trivial structure, real combinatorial difficulty).
  • They are still small enough to solve in milliseconds on a laptop or edge runtime.
  • The solver's search dynamics remain rich:
    • frequent improvements early on,
    • meaningful refinements later in the run,
    • without getting stuck too quickly.
  • The energy-per-millisecond efficiency tends to peak in this regime.

By contrast:

  • Much smaller instances (e.g. n=10–50) are usually "too easy" – everything works, but there is not much to learn about the algorithm's behaviour.
  • Much larger instances (n>300) are still solvable, but:
    • runtimes grow,
    • improvements become more front-loaded,
    • and the marginal benefit per unit of compute drops.

For that reason, the examples and generated benchmark instances focus on sizes up to around 300 variables as a good balance between realism and runtime/energy cost.

Carbon & efficiency note

This library is designed for medium-scale experiments.
The spike solver typically runs in milliseconds for problems in the tens–low thousands of variables on a laptop or edge runtime, which keeps both compute and energy use modest.

For very large or production-critical deployments, you should treat this package as a prototype / research tool, not as an energy-optimised production engine.

As a rough rule of thumb:

  • Instances up to ~300 variables are a good fit for everyday experiments and small-scale benchmarking (millisecond runtimes on typical hardware).
  • Instances significantly larger than this (e.g. n > 300) are better treated as research-only or heavy analysis cases:
    • they may consume noticeably more time and energy,
    • they are more sensitive to solver settings,
    • and the marginal improvement per unit of compute tends to diminish.

If you work with larger problems, measure runtime and energy use explicitly and treat those runs as occasional heavy jobs rather than default workloads.

JSON Formats

This library accepts simple JSON formats for QUBO and Max-Cut instances. These formats are intentionally minimal and easy to create by hand.


QUBO Format (qubo.json)

A QUBO is described as:

  • n: number of variables
  • terms: array of [i, j, q] entries representing the quadratic matrix Q
    (only non-zero terms need to be listed)

Example

{
  "n": 5,
  "terms": [
    [0, 0, -1.0],
    [1, 1, -1.0],
    [2, 2, -1.0],
    [0, 1, 2.0],
    [1, 2, -0.5],
    [3, 4, 1.2]
  ]
}

Meaning:

Minimise the quadratic form
E(x) = Σ Q[i][j] × x[i] × x[j]

Terms may be upper-triangular or full; the solver handles duplicates cleanly.

Values may be floats or integers.
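For example, the energy of a candidate assignment under the instance above can be computed directly from the triplets (plain JavaScript, independent of the solver):

```javascript
// Evaluate E(x) = Σ Q[i][j]·x[i]·x[j] for the 5-variable example above.
const terms = [
  [0, 0, -1.0], [1, 1, -1.0], [2, 2, -1.0],
  [0, 1, 2.0], [1, 2, -0.5], [3, 4, 1.2],
];
const x = [1, 0, 1, 0, 0]; // one candidate 0/1 assignment
const energy = terms.reduce((e, [i, j, q]) => e + q * x[i] * x[j], 0);
console.log(energy); // -2: only the two active diagonal terms contribute
```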


Max-Cut Format (graph.json)

A Max-Cut graph is described as:

  • n: number of nodes
  • edges: list of [i, j, weight]

Example

{
  "n": 6,
  "edges": [
    [0, 1, 1.0],
    [1, 2, 0.8],
    [2, 3, 1.1],
    [3, 4, 0.9],
    [4, 5, 1.0],
    [5, 0, 1.0]
  ]
}

Meaning:

Maximise Σ weight × 1_{x[i] ≠ x[j]}

Weights may be float or integer.

Edges are undirected; only one direction is needed.
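Evaluating a partition's cut value against this format is a one-liner (plain JavaScript, shown for the example graph above):

```javascript
// Cut value = Σ weight over edges whose endpoints fall on different sides.
const edges = [
  [0, 1, 1.0], [1, 2, 0.8], [2, 3, 1.1],
  [3, 4, 0.9], [4, 5, 1.0], [5, 0, 1.0],
];
const x = [1, 0, 1, 0, 1, 0]; // alternating partition of the 6-cycle
const cut = edges.reduce((c, [i, j, w]) => c + (x[i] !== x[j] ? w : 0), 0);
console.log(cut); // ≈ 5.8 — all six edges are cut
```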


Expected CLI Usage

After installing:

spike-qubo-solver solve-qubo examples/qubo.json
spike-qubo-solver solve-maxcut examples/graph.json

The CLI prints:

  • best energy / cut value
  • iterations
  • time in ms
  • (optional) trace data

Status and roadmap

Status: experimental, API may evolve.

Near-term roadmap:

  • Additional public problem instances (QUBO / Max-Cut).
  • Optional TypeScript type definitions.
  • More benchmarking tools (JSON/CSV output for plotting).

Contributing

See CONTRIBUTING.md for guidelines on contributing to this project.

License

This project is licensed under the Apache License 2.0 – see the LICENSE file for details.


Keywords: QUBO, Max-Cut, simulated annealing, spike-based annealer, combinatorial optimization, Cloudflare Workers, Node.js.