numopt-js
A flexible numerical optimization library for JavaScript/TypeScript that works smoothly in browsers. This library addresses the lack of flexible continuous optimization libraries for JavaScript that work well in browser environments.
Documentation
- API Reference (GitHub Pages): https://takuto-na.github.io/numopt-js/
- Source Repository: https://github.com/takuto-NA/numopt-js
Features
- Gradient Descent: Simple, robust optimization algorithm with line search support
- Line Search: Backtracking line search with Armijo condition for optimal step sizes (following Nocedal & Wright, Numerical Optimization (2nd ed.), Algorithm 3.1)
- Gauss-Newton Method: Efficient method for nonlinear least squares problems
- Levenberg-Marquardt Algorithm: Robust algorithm combining Gauss-Newton with damping
- Constrained Gauss-Newton: Efficient constrained nonlinear least squares using effective Jacobian
- Constrained Levenberg-Marquardt: Robust constrained nonlinear least squares with damping
- Adjoint Method: Efficient constrained optimization using adjoint variables (solves only one linear system per iteration instead of parameterCount systems)
- Numerical Differentiation: Automatic gradient and Jacobian computation via finite differences
- Browser-Compatible: Works seamlessly in modern browsers
- TypeScript-First: Full TypeScript support with comprehensive type definitions
- Debug-Friendly: Progress callbacks, verbose logging, and detailed diagnostics
Requirements
- Node.js >= 18.0.0
- Modern browsers with ES2020 support (for browser builds)
Installation
npm install numopt-js
Browser Usage
numopt-js is designed to work seamlessly in browser environments. The library automatically provides a browser-optimized bundle that includes all dependencies.
Option 1: Automatic Detection (Recommended)
Modern bundlers and browsers with import maps support will automatically use the browser bundle (dist/index.browser.js) when importing numopt-js in a browser environment. No additional configuration is needed.
<script type="module">
import { gradientDescent } from './node_modules/numopt-js/dist/index.browser.js';
// Your code here
</script>
Option 2: Using Import Maps
If you're using import maps, you can explicitly specify the browser bundle:
<script type="importmap">
{
"imports": {
"numopt-js": "./node_modules/numopt-js/dist/index.browser.js"
}
}
</script>
<script type="module">
import { gradientDescent } from 'numopt-js';
// Your code here
</script>
Option 3: Using a Bundler
If you're using a bundler like Vite, Webpack, or Rollup, the bundler will automatically resolve the browser bundle based on the package.json exports configuration:
// In your TypeScript/JavaScript file
import { gradientDescent } from 'numopt-js';
// The bundler automatically uses dist/index.browser.js in browser builds
Example with Vite:
// vite.config.ts
import { defineConfig } from 'vite';
export default defineConfig({
// Vite automatically handles browser bundle resolution
});
Troubleshooting
Problem: ReferenceError: exports is not defined when using in browser
Solution: Make sure you're using dist/index.browser.js instead of dist/index.js. The browser bundle includes all dependencies and is pre-configured for browser environments.
Problem: Module not found errors
Solution:
- Ensure you're using a modern bundler that supports package.json exports
- For direct browser usage, use import maps or explicitly import from dist/index.browser.js
- Check that your build tool supports ES modules
Examples
After installing dependencies with npm install, you can run the example scripts with npm run <script>:
- npm run example:gradient – Runs a basic gradient-descent optimization example.
- npm run example:rosenbrock – Optimizes the Rosenbrock function to show robust convergence behavior.
- npm run example:lm – Demonstrates Levenberg-Marquardt for nonlinear curve fitting.
- npm run example:gauss-newton – Shows Gauss-Newton applied to a nonlinear least-squares problem.
- npm run example:adjoint – Introduces the adjoint method for constrained optimization.
- npm run example:adjoint-advanced – Explores a more advanced adjoint-based constrained problem.
- npm run example:constrained-gauss-newton – Solves constrained nonlinear least squares via the effective Jacobian.
- npm run example:constrained-lm – Uses constrained Levenberg-Marquardt for robust constrained least squares.
Quick Start
- Ensure Node.js 18+ is installed.
- Install the library with npm install numopt-js.
- Run the minimal example below to verify your setup:
import { gradientDescent } from 'numopt-js';
const cost = (params: Float64Array) => params[0] * params[0] + params[1] * params[1];
const grad = (params: Float64Array) => new Float64Array([2 * params[0], 2 * params[1]]);
const result = gradientDescent(new Float64Array([5, -3]), cost, grad, {
maxIterations: 200,
tolerance: 1e-6,
useLineSearch: true,
});
console.log(result.parameters);
Pick an algorithm:
- Gradient Descent — stable first choice for smooth problems (see below)
- Gauss-Newton — efficient for nonlinear least squares when residuals are available
- Levenberg–Marquardt — robust least-squares solver with damping
- Constrained methods & Adjoint — enforce constraints with effective Jacobians or adjoint variables
Examples
After npm install, you can try the bundled scripts:
- npm run example:gradient — basic gradient descent on a quadratic bowl
- npm run example:rosenbrock — Rosenbrock optimization with line search
- npm run example:gauss-newton — nonlinear least squares with Gauss-Newton
- npm run example:lm — Levenberg–Marquardt curve fitting
- npm run example:adjoint — simple adjoint-based constrained optimization
- npm run example:adjoint-advanced — adjoint method with custom Jacobians
- npm run example:constrained-gauss-newton — constrained least squares via effective Jacobian
- npm run example:constrained-lm — constrained Levenberg–Marquardt
Gradient Descent
Based on standard steepest-descent with backtracking line search (Nocedal & Wright, "Numerical Optimization" 2/e, Ch. 2; Boyd & Vandenberghe, "Convex Optimization", Sec. 9.3).
import { gradientDescent } from 'numopt-js';
// Define cost function and gradient
const costFunction = (params: Float64Array) => {
return params[0] * params[0] + params[1] * params[1];
};
const gradientFunction = (params: Float64Array) => {
return new Float64Array([2 * params[0], 2 * params[1]]);
};
// Optimize
const initialParams = new Float64Array([5.0, -3.0]);
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6,
useLineSearch: true
});
console.log('Optimized parameters:', result.parameters);
console.log('Final cost:', result.finalCost);
console.log('Converged:', result.converged);
Using Result Formatter: For better formatted output, use the built-in result formatter:
import { gradientDescent, printGradientDescentResult } from 'numopt-js';
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6,
useLineSearch: true
});
// Automatically formats and prints the result
printGradientDescentResult(result);
Levenberg-Marquardt (Nonlinear Least Squares)
import { levenbergMarquardt } from 'numopt-js';
// Example data to fit with a line y ≈ a·x + b (replace with your own samples)
const xData = [0, 1, 2, 3, 4];
const yData = [1.0, 3.1, 4.9, 7.2, 9.1];
// Define residual function
const residualFunction = (params: Float64Array) => {
const [a, b] = params;
const residuals = new Float64Array(xData.length);
for (let i = 0; i < xData.length; i++) {
const predicted = a * xData[i] + b;
residuals[i] = predicted - yData[i];
}
return residuals;
};
// Optimize (with automatic numerical Jacobian)
const initialParams = new Float64Array([0, 0]);
const result = levenbergMarquardt(initialParams, residualFunction, {
useNumericJacobian: true,
maxIterations: 100,
tolGradient: 1e-6
});
console.log('Optimized parameters:', result.parameters);
console.log('Final residual norm:', result.finalResidualNorm);
Using Result Formatter:
import { levenbergMarquardt, printLevenbergMarquardtResult } from 'numopt-js';
const result = levenbergMarquardt(initialParams, residualFunction, {
useNumericJacobian: true,
maxIterations: 100,
tolGradient: 1e-6
});
printLevenbergMarquardtResult(result);
With User-Provided Jacobian
import { levenbergMarquardt } from 'numopt-js';
import { Matrix } from 'ml-matrix';
const jacobianFunction = (params: Float64Array) => {
// Compute analytical Jacobian
return new Matrix(/* ... */);
};
const result = levenbergMarquardt(initialParams, residualFunction, {
jacobian: jacobianFunction, // User-provided Jacobian in options
maxIterations: 100
});
Numerical Differentiation
If you don't have analytical gradients or Jacobians, you can use numerical differentiation:
Option 1: Helper Functions (Recommended)
The easiest way to use numerical differentiation is with the helper functions:
import { gradientDescent, createFiniteDiffGradient } from 'numopt-js';
const costFn = (params: Float64Array) => {
return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};
// Create a gradient function automatically
const gradientFn = createFiniteDiffGradient(costFn);
const result = gradientDescent(
new Float64Array([0, 0]),
costFn,
gradientFn, // No parameter order confusion!
{ maxIterations: 100, tolerance: 1e-6 }
);
Option 2: Direct Usage
You can also use finiteDiffGradient directly:
import { gradientDescent, finiteDiffGradient } from 'numopt-js';
const costFn = (params: Float64Array) => {
return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};
const result = gradientDescent(
new Float64Array([0, 0]),
costFn,
(params) => finiteDiffGradient(params, costFn), // ⚠️ Note: params first!
{ maxIterations: 100, tolerance: 1e-6 }
);
Important: When using finiteDiffGradient directly, note the parameter order:
- ✅ Correct: finiteDiffGradient(params, costFn)
- ❌ Wrong: finiteDiffGradient(costFn, params)
Custom Step Size
Both approaches support custom step sizes for the finite difference approximation:
// With helper function
const gradientFn = createFiniteDiffGradient(costFn, { stepSize: 1e-8 });
// Direct usage
const gradient = finiteDiffGradient(params, costFn, { stepSize: 1e-8 });
Adjoint Method (Constrained Optimization)
The adjoint method efficiently solves constrained optimization problems by solving for an adjoint variable λ instead of explicitly inverting matrices. This requires solving only one linear system per iteration, making it much more efficient than naive approaches.
Mathematical background: For constraint c(p, x) = 0, the method computes df/dp = ∂f/∂p - λ^T ∂c/∂p where λ solves (∂c/∂x)^T λ = (∂f/∂x)^T.
Constrained Least Squares: For residual functions r(p, x) with constraints c(p, x) = 0, the library provides constrained Gauss-Newton and Levenberg-Marquardt methods. These use the effective Jacobian J_eff = r_p - r_x C_x^+ C_p to capture constraint effects, enabling quadratic convergence near the solution while maintaining constraint satisfaction.
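To make the adjoint computation concrete for the example below (f = p² + x², c = p + x − 1), here is a sketch of a single adjoint gradient evaluation done by hand with ml-matrix. It is only an illustration of the single linear solve described above, not the library's internal code:
import { Matrix, solve } from 'ml-matrix';
// A point on the constraint: p = 2, x = -1 satisfies p + x - 1 = 0
const p = 2.0, x = -1.0;
const dfdp = Matrix.columnVector([2 * p]); // partial ∂f/∂p = 2p
const dfdx = Matrix.columnVector([2 * x]); // partial ∂f/∂x = 2x
const dcdp = new Matrix([[1]]);            // partial ∂c/∂p
const dcdx = new Matrix([[1]]);            // partial ∂c/∂x
// Adjoint equation (the one linear solve per iteration): (∂c/∂x)^T λ = (∂f/∂x)^T
const lambda = solve(dcdx.transpose(), dfdx);
// Total derivative: df/dp = ∂f/∂p - λ^T ∂c/∂p
const totalGradient = dfdp.clone().sub(dcdp.transpose().mmul(lambda));
console.log(totalGradient.get(0, 0)); // 4 - (-2) = 6, matching d/dp [p² + (1 - p)²] at p = 2
The adjointGradientDescent solver below carries out this kind of evaluation at each iteration and feeds the resulting gradient to the descent update: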
import { adjointGradientDescent } from 'numopt-js';
// Define cost function: f(p, x) = p² + x²
const costFunction = (p: Float64Array, x: Float64Array) => {
return p[0] * p[0] + x[0] * x[0];
};
// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] + x[0] - 1.0]);
};
// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0
// Optimize
const result = adjointGradientDescent(
initialP,
initialX,
costFunction,
constraintFunction,
{
maxIterations: 100,
tolerance: 1e-6,
useLineSearch: true,
logLevel: 'DEBUG' // Enable detailed iteration logging
}
);
console.log('Optimized parameters:', result.parameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
With Residual Functions: The method also supports residual functions r(p, x) where f = 1/2 r^T r:
// Residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};
const result = adjointGradientDescent(
initialP,
initialX,
residualFunction, // Can use residual function directly
constraintFunction,
{ maxIterations: 100, tolerance: 1e-6 }
);
With Analytical Derivatives: For better performance, you can provide analytical partial derivatives:
import { Matrix } from 'ml-matrix';
const result = adjointGradientDescent(
initialP,
initialX,
costFunction,
constraintFunction,
{
dfdp: (p: Float64Array, x: Float64Array) => new Float64Array([2 * p[0]]),
dfdx: (p: Float64Array, x: Float64Array) => new Float64Array([2 * x[0]]),
dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
maxIterations: 100
}
);
Constrained Gauss-Newton (Constrained Nonlinear Least Squares)
For constrained nonlinear least squares problems, use the constrained Gauss-Newton method:
import { constrainedGaussNewton } from 'numopt-js';
// Define residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};
// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
return new Float64Array([p[0] + x[0] - 1.0]);
};
// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0
// Optimize
const result = constrainedGaussNewton(
initialP,
initialX,
residualFunction,
constraintFunction,
{
maxIterations: 100,
tolerance: 1e-6
}
);
console.log('Optimized parameters:', result.parameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
Constrained Levenberg-Marquardt (Robust Constrained Least Squares)
For more robust constrained optimization, use the constrained Levenberg-Marquardt method:
import { constrainedLevenbergMarquardt } from 'numopt-js';
const result = constrainedLevenbergMarquardt(
initialP,
initialX,
residualFunction,
constraintFunction,
{
maxIterations: 100,
tolGradient: 1e-6,
tolStep: 1e-6,
tolResidual: 1e-6,
lambdaInitial: 1e-3,
lambdaFactor: 10.0
}
);
With Analytical Derivatives: For better performance, provide analytical partial derivatives:
import { Matrix } from 'ml-matrix';
const result = constrainedGaussNewton(
initialP,
initialX,
residualFunction,
constraintFunction,
{
drdp: (p: Float64Array, x: Float64Array) => new Matrix([[1], [0]]),
drdx: (p: Float64Array, x: Float64Array) => new Matrix([[0], [1]]),
dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
maxIterations: 100
}
);
API
Gradient Descent
function gradientDescent(
initialParameters: Float64Array,
costFunction: CostFn,
gradientFunction: GradientFn,
options?: GradientDescentOptions
): GradientDescentResult
Levenberg-Marquardt
function levenbergMarquardt(
initialParameters: Float64Array,
residualFunction: ResidualFn,
options?: LevenbergMarquardtOptions
): LevenbergMarquardtResult
Adjoint Gradient Descent
function adjointGradientDescent(
initialParameters: Float64Array,
initialStates: Float64Array,
costFunction: ConstrainedCostFn | ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: AdjointGradientDescentOptions
): AdjointGradientDescentResult
Constrained Gauss-Newton
function constrainedGaussNewton(
initialParameters: Float64Array,
initialStates: Float64Array,
residualFunction: ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: ConstrainedGaussNewtonOptions
): ConstrainedGaussNewtonResult
Constrained Levenberg-Marquardt
function constrainedLevenbergMarquardt(
initialParameters: Float64Array,
initialStates: Float64Array,
residualFunction: ConstrainedResidualFn,
constraintFunction: ConstraintFn,
options?: ConstrainedLevenbergMarquardtOptions
): ConstrainedLevenbergMarquardtResult
Options
All algorithms support common options:
- maxIterations?: number - Maximum number of iterations (default: 1000)
- tolerance?: number - Convergence tolerance (default: 1e-6)
- onIteration?: (iteration: number, cost: number, params: Float64Array) => void - Progress callback
- verbose?: boolean - Enable verbose logging (default: false)
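For example, the common options can be combined on any solver call; here they are shown with gradientDescent, reusing the cost and grad functions from the Quick Start example:
const result = gradientDescent(new Float64Array([5, -3]), cost, grad, {
  maxIterations: 500,
  tolerance: 1e-8,
  verbose: true,
  onIteration: (iteration, currentCost) => {
    console.log(`iteration ${iteration}: cost = ${currentCost.toExponential(3)}`);
  },
});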
Gradient Descent Options
- stepSize?: number - Fixed step size (learning rate). If not provided, line search is used (default: undefined, uses line search)
- useLineSearch?: boolean - Use line search to determine optimal step size (default: true)
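When useLineSearch is enabled, the step size is chosen by backtracking until the Armijo sufficient-decrease condition holds (Nocedal & Wright, Algorithm 3.1). The following is a minimal sketch of that rule for intuition only; it is not the library's internal implementation:
function backtrackingLineSearch(
  f: (x: Float64Array) => number,  // cost function
  x: Float64Array,                 // current point
  gradient: Float64Array,          // gradient at x
  direction: Float64Array,         // descent direction, e.g. the negated gradient
  alpha0 = 1.0,                    // initial step
  rho = 0.5,                       // backtracking shrink factor
  c = 1e-4                         // Armijo constant
): number {
  const fx = f(x);
  // Directional derivative g^T d (negative for a descent direction)
  let slope = 0;
  for (let i = 0; i < x.length; i++) slope += gradient[i] * direction[i];
  let alpha = alpha0;
  for (let k = 0; k < 50; k++) {
    const trial = new Float64Array(x.length);
    for (let i = 0; i < x.length; i++) trial[i] = x[i] + alpha * direction[i];
    // Accept once f(x + alpha*d) <= f(x) + c*alpha*g^T d (sufficient decrease)
    if (f(trial) <= fx + c * alpha * slope) break;
    alpha *= rho;
  }
  return alpha;
}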
Levenberg-Marquardt Options
- jacobian?: JacobianFn - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- useNumericJacobian?: boolean - Use numerical differentiation for Jacobian (default: true)
- jacobianStep?: number - Step size for numerical Jacobian computation (default: 1e-6)
- lambdaInitial?: number - Initial damping parameter (default: 1e-3)
- lambdaFactor?: number - Factor for updating lambda (default: 10.0)
- tolGradient?: number - Tolerance for gradient norm convergence (default: 1e-6)
- tolStep?: number - Tolerance for step size convergence (default: 1e-6)
- tolResidual?: number - Tolerance for residual norm convergence (default: 1e-6)
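For intuition about lambdaInitial and lambdaFactor, a single damped step of the kind Levenberg-Marquardt takes can be sketched with ml-matrix as follows. This is an illustration of the standard update rule, not the library's internal code, and it omits the convergence checks controlled by the tolerances above:
import { Matrix, solve } from 'ml-matrix';

// One illustrative LM step: solve (J^T J + lambda*I) delta = -J^T r, accept the
// step only if it reduces the residual norm, and adjust lambda by lambdaFactor.
function lmStep(
  params: Float64Array,
  residuals: (p: Float64Array) => Float64Array,
  J: Matrix,                // Jacobian of the residuals at params (m×n)
  lambda: number,
  lambdaFactor = 10.0
): { params: Float64Array; lambda: number } {
  const r = Matrix.columnVector(Array.from(residuals(params)));
  const damped = J.transpose().mmul(J).add(Matrix.eye(params.length, params.length).mul(lambda));
  const rhs = J.transpose().mmul(r).mul(-1);
  const delta = solve(damped, rhs);
  const trial = params.map((v, i) => v + delta.get(i, 0));
  const sumSq = (arr: Float64Array) => arr.reduce((s, v) => s + v * v, 0);
  return sumSq(residuals(trial)) < sumSq(residuals(params))
    ? { params: trial, lambda: lambda / lambdaFactor }  // accepted: relax damping
    : { params, lambda: lambda * lambdaFactor };        // rejected: increase damping and retry
}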
Levenberg-Marquardt References
- Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," in Numerical Analysis, Lecture Notes in Mathematics 630, 1978. DOI: https://doi.org/10.1007/BFb0067700
- Lourakis, M. I. A., "A Brief Description of the Levenberg-Marquardt Algorithm," 2005 tutorial. PDF: https://users.ics.forth.gr/lourakis/levmar/levmar.pdf
Gauss-Newton Options
- jacobian?: JacobianFn - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- useNumericJacobian?: boolean - Use numerical differentiation for Jacobian (default: true)
- jacobianStep?: number - Step size for numerical Jacobian computation (default: 1e-6)
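For reference, a central-difference Jacobian of the kind built when useNumericJacobian is enabled (with spacing controlled by jacobianStep) can be sketched as follows; this is an illustration, not the library's internal routine:
import { Matrix } from 'ml-matrix';

// Central-difference Jacobian of a residual function r: R^n -> R^m.
function centralDiffJacobian(
  residualFn: (p: Float64Array) => Float64Array,
  params: Float64Array,
  step = 1e-6              // plays the role of jacobianStep
): Matrix {
  const m = residualFn(params).length;
  const n = params.length;
  const J = Matrix.zeros(m, n);
  for (let j = 0; j < n; j++) {
    const plus = Float64Array.from(params);
    const minus = Float64Array.from(params);
    plus[j] += step;
    minus[j] -= step;
    const rPlus = residualFn(plus);
    const rMinus = residualFn(minus);
    for (let i = 0; i < m; i++) {
      // Central difference: (r(p + step*e_j) - r(p - step*e_j)) / (2*step)
      J.set(i, j, (rPlus[i] - rMinus[i]) / (2 * step));
    }
  }
  return J;
}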
Adjoint Gradient Descent Options
- dfdp?: (p: Float64Array, x: Float64Array) => Float64Array - Analytical partial derivative ∂f/∂p (optional)
- dfdx?: (p: Float64Array, x: Float64Array) => Float64Array - Analytical partial derivative ∂f/∂x (optional)
- dcdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂p (optional)
- dcdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂x (optional)
- stepSizeP?: number - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- stepSizeX?: number - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- constraintTolerance?: number - Tolerance for constraint satisfaction check (default: 1e-6)
Constrained Gauss-Newton Options
- drdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂r/∂p (optional)
- drdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂r/∂x (optional)
- dcdp?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂p (optional)
- dcdx?: (p: Float64Array, x: Float64Array) => Matrix - Analytical partial derivative ∂c/∂x (optional)
- stepSizeP?: number - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- stepSizeX?: number - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- constraintTolerance?: number - Tolerance for constraint satisfaction check (default: 1e-6)
Constrained Levenberg-Marquardt Options
Extends ConstrainedGaussNewtonOptions with:
- lambdaInitial?: number - Initial damping parameter (default: 1e-3)
- lambdaFactor?: number - Factor for updating lambda (default: 10.0)
- tolGradient?: number - Tolerance for gradient norm convergence (default: 1e-6)
- tolStep?: number - Tolerance for step size convergence (default: 1e-6)
- tolResidual?: number - Tolerance for residual norm convergence (default: 1e-6)
Note: The constraint function c(p, x) does not need to return a vector with the same length as the state vector x. The adjoint method supports both square and non-square constraint Jacobians (overdetermined and underdetermined systems). For non-square matrices, the method uses QR decomposition or pseudo-inverse to solve the adjoint equation.
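For illustration, a least-squares solve of the adjoint equation with a non-square ∂c/∂x, using ml-matrix's pseudo-inverse, might look like this sketch (not the library's internal code):
import { Matrix, pseudoInverse } from 'ml-matrix';

// Example shapes: 2 constraints and 3 states, so ∂c/∂x is 2×3.
const dcdx = new Matrix([[1, 0, 1], [0, 1, 1]]);
const dfdx = Matrix.columnVector([1, 2, 3]);                // (∂f/∂x)^T as a 3×1 column

// Least-squares solution of (∂c/∂x)^T λ = (∂f/∂x)^T
const lambda = pseudoInverse(dcdx.transpose()).mmul(dfdx);  // 2×1 adjoint vector
console.log(lambda.to2DArray());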
Numerical Differentiation Options
- stepSize?: number - Step size for finite difference approximation (default: 1e-6)
Result Formatting
The library provides helper functions for formatting and displaying optimization results in a consistent, user-friendly manner. These functions replace repetitive console.log statements and provide better readability.
Basic Usage
import { gradientDescent, printGradientDescentResult } from 'numopt-js';
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
maxIterations: 1000,
tolerance: 1e-6
});
// Print formatted result
printGradientDescentResult(result);
Available Formatters
- printOptimizationResult() - For basic OptimizationResult
- printGradientDescentResult() - For GradientDescentResult (includes line search info)
- printLevenbergMarquardtResult() - For LevenbergMarquardtResult (includes lambda)
- printConstrainedGaussNewtonResult() - For constrained optimization results
- printConstrainedLevenbergMarquardtResult() - For constrained LM results
- printAdjointGradientDescentResult() - For adjoint method results
- printResult() - Type-safe overloaded function that works with any result type
Customization Options
All formatters accept an optional ResultFormatterOptions object:
import { printOptimizationResult } from 'numopt-js';
const startTime = performance.now();
const result = /* ... optimization ... */;
const elapsedTime = performance.now() - startTime;
printOptimizationResult(result, {
showSectionHeaders: true, // Show "=== Optimization Results ===" header
showExecutionTime: true, // Include execution time
elapsedTimeMs: elapsedTime, // Execution time in milliseconds
maxParametersToShow: 10, // Max parameters to display before truncating
parameterPrecision: 6, // Decimal places for parameters
costPrecision: 8, // Decimal places for cost/norms
constraintPrecision: 10 // Decimal places for constraint violations
});
Formatting Strings Instead of Printing
If you need the formatted string instead of printing to console:
import { formatOptimizationResult } from 'numopt-js';
const formattedString = formatOptimizationResult(result);
// Use formattedString as needed (e.g., save to file, send to API, etc.)
Automatic Parameter Formatting
The formatters automatically handle parameter arrays:
- Small arrays (≤3 elements): Displayed individually with labels (p = 1.0, x = 2.0)
- Medium arrays (4-10 elements): Displayed as an array ([1.0, 2.0, 3.0, ...])
- Large arrays (>10 elements): Truncated with "... and N more" ([1.0, 2.0, ..., ... and 15 more])
Examples
See the examples/ directory for complete working examples:
- Gradient descent with Rosenbrock function
- Curve fitting with Levenberg-Marquardt
- Linear and nonlinear regression
- Constrained optimization with adjoint method
- Constrained Gauss-Newton method
- Constrained Levenberg-Marquardt method
To run the examples:
# Using npm scripts (recommended)
npm run example:gradient
npm run example:rosenbrock
npm run example:lm
npm run example:gauss-newton
# Or directly with tsx
npx tsx examples/gradient-descent-example.ts
npx tsx examples/curve-fitting-lm.ts
npx tsx examples/rosenbrock-optimization.ts
npx tsx examples/adjoint-example.ts
npx tsx examples/adjoint-advanced-example.ts
npx tsx examples/constrained-gauss-newton-example.ts
npx tsx examples/constrained-levenberg-marquardt-example.ts
References
- Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," in Numerical Analysis, Lecture Notes in Mathematics 630, 1978. DOI: https://doi.org/10.1007/BFb0067700
- Lourakis, M. I. A., "A Brief Description of the Levenberg-Marquardt Algorithm," 2005 tutorial. PDF: http://www.ics.forth.gr/~lourakis/publ/2005/LM.pdf
- Nocedal, J. & Wright, S. J., "Numerical Optimization" (2nd ed.), Chapter 12 (constrained optimization), 2006
MVP Scope
Included
- Gradient descent with line search
- Gauss-Newton method
- Levenberg-Marquardt algorithm
- Constrained Gauss-Newton method (nonlinear least squares with equality constraints)
- Constrained Levenberg-Marquardt method (robust constrained nonlinear least squares)
- Adjoint method for constrained optimization (equality constraints)
- Numerical differentiation (central difference)
- Browser compatibility
- TypeScript support
Not Included (Future Work)
- Automatic differentiation
- Inequality constraint handling (only equality constraints are supported)
- Global optimization guarantees
- Evolutionary algorithms (CMA-ES, etc.)
- Other optimization algorithms (BFGS, etc.)
- Sparse matrix support
- Parallel computation
Type Definitions
Why Float64Array?
This library uses Float64Array instead of regular JavaScript arrays for:
- Performance: Float64Array provides better performance for numerical computations
- Memory efficiency: More memory-efficient storage for large parameter vectors
- Type safety: Ensures all values are 64-bit floating-point numbers
To convert from regular arrays:
const regularArray = [1.0, 2.0, 3.0];
const float64Array = new Float64Array(regularArray);
Why Matrix from ml-matrix?
The library uses Matrix from the ml-matrix package for Jacobian matrices because:
- Efficient matrix operations: Provides optimized matrix multiplication and linear algebra operations
- Well-tested: Mature library with comprehensive matrix operations
- Browser-compatible: Works seamlessly in browser environments
To create a Matrix from a 2D array:
import { Matrix } from 'ml-matrix';
const matrix = new Matrix([[1, 2], [3, 4]]);
Troubleshooting
Common Errors and Solutions
Error: "Jacobian computation is required but not provided"
Problem: You're using levenbergMarquardt or gaussNewton without providing a Jacobian function and numerical Jacobian is disabled.
Solutions:
Enable numerical Jacobian (default behavior):
levenbergMarquardt(params, residualFn, { useNumericJacobian: true });
Provide an analytical Jacobian function:
const jacobianFn = (params: Float64Array) => {
  // Your Jacobian computation
  return new Matrix(/* ... */);
};
levenbergMarquardt(params, residualFn, { jacobian: jacobianFn, ...options });
Algorithm doesn't converge
Possible causes:
- Initial parameters are too far from the solution
- Tolerance is too strict
- Maximum iterations too low
- Step size (for gradient descent) is inappropriate
Solutions:
- Try different initial parameters
- Increase maxIterations
- Adjust tolerance values (tolerance, tolGradient, tolStep, tolResidual)
- For gradient descent, enable line search (useLineSearch: true) or adjust stepSize
- Enable verbose logging (verbose: true) to see what's happening
Singular matrix error (Gauss-Newton)
Problem: The Jacobian matrix is singular or ill-conditioned, making the normal equations unsolvable.
Solutions:
- Use Levenberg-Marquardt instead (handles singular matrices better)
- Check your residual function for numerical issues
- Try different initial parameters
- Increase numerical Jacobian step size (jacobianStep)
Singular matrix error (Constrained Gauss-Newton)
Problem: The effective Jacobian J_eff^T J_eff is singular or ill-conditioned.
Solutions:
- Use Constrained Levenberg-Marquardt instead (handles singular matrices better with damping)
- Check that the constraint Jacobian ∂c/∂x is well-conditioned
- Verify initial states satisfy the constraints approximately
- Try different initial parameters and states
Singular matrix error (Adjoint Method)
Problem: The constraint Jacobian ∂c/∂x is singular or ill-conditioned, making the adjoint equation unsolvable.
Solutions:
- Check that ∂c/∂x is well-conditioned (if square) or has full rank (if non-square)
- Verify initial states satisfy the constraint approximately (c(p₀, x₀) ≈ 0)
- Try different initial values that don't make ∂c/∂x singular
- For nonlinear constraints, ensure initial values are on the constraint manifold
Results don't match expectations
Check:
- Verify your cost/residual function is correct
- Check that gradient/Jacobian functions are correct (if provided)
- Try enabling verbose: true or logLevel: 'DEBUG' to see iteration details
- Use the onIteration callback to monitor progress
- Verify initial parameters are reasonable
- For the adjoint method, ensure initial states satisfy constraints approximately
Debugging Tips
- Enable verbose logging: Set verbose: true to see detailed iteration information
- Use progress callbacks: Use onIteration to monitor convergence:
const result = gradientDescent(params, costFn, gradFn, {
  onIteration: (iter, cost, params) => {
    console.log(`Iteration ${iter}: cost = ${cost}`);
  }
});
- Check convergence status: Always check result.converged to see if optimization succeeded
- Monitor gradient/residual norms: Check finalGradientNorm or finalResidualNorm to understand convergence quality
Requirements
- Node.js >= 18.0.0
- Modern browsers with ES2020 support (required for running in-browser examples)
License
MIT
Contributing
Contributions are welcome! Please read CODING_RULES.md before submitting pull requests.
