@novadynamics/robograph-node v2.0.1
# robograph-node
Pure JavaScript graph optimization library. Solves nonlinear least-squares problems formulated as factor graphs using the Levenberg–Marquardt algorithm with Nielsen's adaptive damping and gain-ratio acceptance.
## Overview
A factor graph represents an optimization problem as a set of nodes (unknown states) connected by constraints (cost factors). Each constraint contributes a quadratic cost:
$$C = \mathbf{e}(\mathbf{x})^T \, \Omega \, \mathbf{e}(\mathbf{x})$$
where $\mathbf{e}$ is an error function, $\Omega$ is an information (inverse covariance) matrix, and $\mathbf{x}$ is the state being optimized.
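To make this concrete, here is a small plain-JavaScript sketch (illustrative, not part of the library API) that evaluates $C$ for a single constraint:

```js
// Evaluate C = e^T * Omega * e for one constraint.
// `e` is the error vector, `omega` the information matrix.
function constraintCost(e, omega) {
  let cost = 0;
  for (let i = 0; i < e.length; i++) {
    for (let j = 0; j < e.length; j++) {
      cost += e[i] * omega[i][j] * e[j];
    }
  }
  return cost;
}

// A 1D error of 2 with information 3 contributes 2 * 3 * 2 = 12
console.log(constraintCost([2], [[3]])); // 12
```

The total graph cost is simply the sum of this quantity over all constraints; the solver minimises that sum over all node states.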
Two graph topologies are supported:
| Class | Structure | Solver |
|---|---|---|
| TDMonoGraph | Chain / tridiagonal | $O(N)$ block Thomas algorithm |
| PDGraph | General sparse | Block sparse Cholesky |
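For intuition about the chain case, here is a scalar (non-block) sketch of the Thomas algorithm; the library's solver operates on blocks, but the $O(N)$ structure is the same: one forward elimination sweep followed by back substitution.

```js
// Solve a tridiagonal system A x = d in O(N), where A has
// sub-diagonal `a` (a[0] unused), diagonal `b`, and
// super-diagonal `c` (c[n-1] unused).
function thomasSolve(a, b, c, d) {
  const n = b.length;
  const cp = new Array(n), dp = new Array(n);
  cp[0] = c[0] / b[0];
  dp[0] = d[0] / b[0];
  // Forward elimination: reduce to an upper bidiagonal system
  for (let i = 1; i < n; i++) {
    const m = b[i] - a[i] * cp[i - 1];
    cp[i] = c[i] / m;
    dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
  }
  // Back substitution
  const x = new Array(n);
  x[n - 1] = dp[n - 1];
  for (let i = n - 2; i >= 0; i--) x[i] = dp[i] - cp[i] * x[i + 1];
  return x;
}
```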
## Installation

```sh
npm install @novadynamics/robograph-node
```

## Toy example — fitting a 1D chain
Suppose you have three scalar positions $x_0, x_1, x_2$ with:
- A noisy GPS measurement pinning $x_0 \approx 0$
- Relative odometry measurements saying each node is 1 m ahead of the previous one
```js
const { GNode, GSelfConstraint, GPairConstraint, TDMonoGraph } =
  require('@novadynamics/robograph-node').graph;

// --- Define state type ---
class ScalarNode extends GNode {
  get free_parameters() { return 1; }
  calculate_new_state(dp) { return { x: this.state.x + dp[0] }; }
}

// --- Define constraint types ---

// Absolute prior: penalises deviation from a target value
class AbsolutePrior extends GSelfConstraint {
  constructor(target, sigma) {
    super([[1 / (sigma * sigma)]]); // information = 1/variance
    this.target = target;
  }
  calculate_error(state) { return [state.x - this.target]; }
  calculate_jacobian(state) { return [[1]]; }
}

// Relative odometry: penalises deviation from a target spacing
class OdometryConstraint extends GPairConstraint {
  constructor(delta, sigma) {
    super([[1 / (sigma * sigma)]]);
    this.delta = delta;
  }
  calculate_error(s1, s2) { return [s2.x - s1.x - this.delta]; }
  calculate_jacobians(s1, s2) { return { jac_1: [[-1]], jac_2: [[1]] }; }
}
```
```js
// --- Build graph ---
const graph = new TDMonoGraph();

// Poor initial guesses (all at 0)
graph.add_node(new ScalarNode({ x: 0 }));
graph.add_node(new ScalarNode({ x: 0 }));
graph.add_node(new ScalarNode({ x: 0 }));

// GPS pins x0 near 0 with σ = 0.1 m
graph.add_self_constraint(0, new AbsolutePrior(0, 0.1));

// Odometry says each step is 1 m with σ = 0.05 m
graph.add_pair_constraint(0, 1, new OdometryConstraint(1, 0.05));
graph.add_pair_constraint(1, 2, new OdometryConstraint(1, 0.05));

graph.compile();

// --- Solve ---
const { nodes, converged, iter } = graph.solve();
console.log(`Converged: ${converged} in ${iter} iterations`);
nodes.forEach((n, i) => console.log(`x${i} = ${n.state.x.toFixed(6)}`));
// x0 = 0.000000
// x1 = 1.000000
// x2 = 2.000000
```

## API
### `graph.solve(options?, callback?)`
Runs the LM solver synchronously. Returns `{ nodes, converged, iter, total_step_norm, max_step_norm, grad_norm, rho }`.
| Option | Default | Description |
|---|---|---|
| iter_max | 200 | Maximum iterations |
| gtol | 1e-8 | Gradient norm convergence tolerance (scale-invariant) |
| mtol | 1e-10 | Step norm convergence tolerance |
| ml | auto | Initial damping $\lambda_0 = 10^{-3} \cdot \max(\text{diag}(J^T\Omega J))$ |
The optional callback is invoked once per iteration with `{ iter, ml, rho, grad_norm, total_step_norm, max_step_norm, updated, dp, graph }`.

`solve_async` is identical but yields to the event loop between iterations and returns a Promise.
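As an illustration of the callback shape, a per-iteration progress logger might look like the following (the formatter itself is hypothetical, not part of the library):

```js
// Formats one iteration report from the per-iteration callback payload
// documented above.
function formatIteration({ iter, ml, rho, grad_norm, updated }) {
  const status = updated ? 'accepted' : 'rejected';
  return `iter ${iter}: lambda=${ml.toExponential(2)} ` +
         `rho=${rho.toFixed(3)} |grad|=${grad_norm.toExponential(2)} (${status})`;
}

// Could be wired up as:
//   graph.solve({}, (info) => console.log(formatIteration(info)));
console.log(formatIteration({ iter: 3, ml: 1e-4, rho: 0.91, grad_norm: 2.5e-6, updated: true }));
```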
## Implementing `GNode`
| Member | Required | Description |
|---|---|---|
| get free_parameters() | ✓ | Dimension of the update vector dp |
| calculate_new_state(dp) | ✓ | Returns the new state after applying step dp |
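For example, a 2D point node needs only these two members. It is shown here as a plain class with the same interface so the sketch stands alone; in actual use it would extend `GNode`:

```js
// A node holding a 2D position { x, y }; dp is a length-2 step vector.
class Point2Node /* extends GNode */ {
  constructor(state) { this.state = state; }
  get free_parameters() { return 2; }
  calculate_new_state(dp) {
    return { x: this.state.x + dp[0], y: this.state.y + dp[1] };
  }
}
```

Note that `calculate_new_state` returns a new state object rather than mutating in place, matching the `ScalarNode` example above.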
## Implementing `GSelfConstraint` / `GPairConstraint`
| Member | Required | Description |
|---|---|---|
| get information() | ✓ | Information matrix $\Omega$ (use base constructor or override) |
| calculate_error(state, node) | ✓ | Returns error vector $\mathbf{e}$ |
| calculate_jacobian(state, node) | ✓ | Returns $\partial\mathbf{e}/\partial\mathbf{dp}$ at $\mathbf{dp}=0$ |
| calculate_jacobians(s1, s2, n1, n2) | pair only | Returns { jac_1, jac_2 } |
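As a sketch with a vector-valued error (again a plain class so it stands alone; a real implementation would extend `GPairConstraint` and pass $\Omega$ to the base constructor), a 2D relative-displacement constraint could look like:

```js
// Penalises deviation of (s2 - s1) from a measured 2D displacement.
class Displacement2Constraint /* extends GPairConstraint */ {
  constructor(dx, dy, sigma) {
    const w = 1 / (sigma * sigma);
    this.information = [[w, 0], [0, w]]; // isotropic Omega
    this.dx = dx;
    this.dy = dy;
  }
  calculate_error(s1, s2) {
    return [s2.x - s1.x - this.dx, s2.y - s1.y - this.dy];
  }
  calculate_jacobians(s1, s2) {
    return {
      jac_1: [[-1, 0], [0, -1]], // d e / d p1
      jac_2: [[1, 0], [0, 1]],   // d e / d p2
    };
  }
}
```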
## Algorithm
The solver uses Levenberg–Marquardt with:
- Gain ratio acceptance: $\rho = \Delta F_\text{actual} / \Delta F_\text{predicted}$; step accepted if $\rho > 0.25$
- Nielsen adaptive damping: $\lambda \leftarrow \lambda \cdot \max\!\left(\tfrac{1}{3},\, 1-(2\rho-1)^3\right)$ on accept; $\lambda \leftarrow \lambda\nu$, $\nu \leftarrow 2\nu$ on reject
- Scale-invariant $\lambda_0$: initialised from the Hessian diagonal, not a fixed constant
- Cost-scaled gradient norm: $\|\nabla F\|_\infty / (1 + F)$ for topology-agnostic convergence
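The damping schedule can be written down directly; this is a self-contained sketch of the update rule described above, not the library's internal code:

```js
// One Nielsen damping update. Per the gain-ratio rule above, a step is
// accepted when rho > 0.25. Returns the new { lambda, nu }.
function nielsenUpdate(lambda, nu, rho) {
  if (rho > 0.25) {
    // Accepted: shrink lambda based on how well the quadratic model
    // predicted the cost reduction, and reset the growth factor.
    const s = Math.max(1 / 3, 1 - Math.pow(2 * rho - 1, 3));
    return { lambda: lambda * s, nu: 2 };
  }
  // Rejected: grow lambda geometrically, doubling the growth factor.
  return { lambda: lambda * nu, nu: 2 * nu };
}
```

At $\rho = 0.5$ the multiplier is exactly 1 (the model was half right, so $\lambda$ is left alone); as $\rho \to 1$ it approaches the floor of $\tfrac{1}{3}$, so $\lambda$ shrinks quickly when the model is accurate.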
