Class LevenbergMarquardt_F64<S extends DMatrix, HM extends HessianMath>

java.lang.Object
org.ddogleg.optimization.GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM>
org.ddogleg.optimization.lm.LevenbergMarquardt_F64<S, HM>
Direct Known Subclasses:
UnconLeastSqLevenbergMarquardt_F64, UnconLeastSqLevenbergMarquardtSchur_F64

public abstract class LevenbergMarquardt_F64<S extends DMatrix, HM extends HessianMath>
extends GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM>

Implementation of Levenberg-Marquardt non-linear least squares optimization. At each iteration the following is solved for:
(G[k] + v*I)x[k] = -g[k] with v ≥ 0
where G[k] = F[k]ᵀF[k] is an approximation to the Hessian and is positive semi-definite, F[k] is the Jacobian, v is a tuning parameter that is either a scalar or a vector, x[k] is the step being estimated, and g[k] is the gradient.
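The damped system above can be solved with any linear solver; DDogleg delegates this to its math and Hessian abstractions. Below is a standalone sketch for a 2-parameter problem using a closed-form 2×2 solve — the class and method names are illustrative, not DDogleg's API:

```java
// Sketch of one damped step (G + v*I) x = -g for a tiny 2-parameter problem.
// Names and the closed-form 2x2 solve are illustrative, not DDogleg's API.
public class DampedStepSketch {
    // Solve (G + v*I) x = -g where G is a 2x2 symmetric matrix,
    // using the explicit 2x2 matrix inverse.
    static double[] solveDampedStep(double[][] G, double v, double[] g) {
        double a = G[0][0] + v, b = G[0][1];
        double c = G[1][0],     d = G[1][1] + v;
        double det = a * d - b * c;  // nonzero when v > 0 and G is positive semi-definite
        return new double[]{ (-d * g[0] + b * g[1]) / det,
                             ( c * g[0] - a * g[1]) / det };
    }

    public static void main(String[] args) {
        double[][] G = {{4, 1}, {1, 3}};  // G = F^T F for some Jacobian F
        double[] g = {2, 1};              // gradient at the current state
        double[] x = solveDampedStep(G, 0.5, g);
        System.out.printf("step = [%.4f, %.4f]%n", x[0], x[1]);
    }
}
```

As v grows, the v*I term dominates and x approaches a small step along -g; as v shrinks, x approaches the Gauss-Newton step.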

Levenberg-Marquardt is a trust-region method but was formulated before trust-region methods had been defined. At each iteration it adjusts the value of 'v'. For smaller values it is closer to a Gauss-Newton step and has superlinear convergence, while for large values it becomes a gradient step.

Levenberg's formulation is as follows:
(JᵀJ + λI)x = -Jᵀr
where λ is adjusted at each iteration.

Marquardt's formulation is as follows:
(JᵀJ + λ diag(JᵀJ))x = -Jᵀr
where λ is adjusted at each iteration.
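The two formulations differ only in how the diagonal of G = JᵀJ is augmented: Levenberg adds the same λ to every diagonal element, while Marquardt scales each diagonal element by (1 + λ), which makes the damping follow each parameter's own scale. A minimal sketch with plain arrays (not DDogleg's matrix types):

```java
// Contrast the two damping schemes applied to the same G = J^T J.
public class DampingSketch {
    // Levenberg: G + lambda * I  (adds lambda to each diagonal element)
    static void addLevenberg(double[][] G, double lambda) {
        for (int i = 0; i < G.length; i++) G[i][i] += lambda;
    }
    // Marquardt: G + lambda * diag(G)  (scales each diagonal element by 1 + lambda)
    static void addMarquardt(double[][] G, double lambda) {
        for (int i = 0; i < G.length; i++) G[i][i] += lambda * G[i][i];
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 100}};
        double[][] B = {{4, 1}, {1, 100}};
        addLevenberg(A, 0.5);  // diagonal becomes 4.5, 100.5
        addMarquardt(B, 0.5);  // diagonal becomes 6.0, 150.0 - scale aware
        System.out.println(A[0][0] + " " + A[1][1]);
        System.out.println(B[0][0] + " " + B[1][1]);
    }
}
```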

•  K. Madsen and H. B. Nielsen and O. Tingleff, "Methods for Non-Linear Least Squares Problems (2nd ed.)" Informatics and Mathematical Modelling, Technical University of Denmark
•  Peter Abeles, "DDogleg Technical Report: Nonlinear Optimization", Revision 1, August 2018
•  Levenberg, Kenneth (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". Quarterly of Applied Mathematics. 2: 164–168.
• Field Details

• MAX_LAMBDA

public static final double MAX_LAMBDA
Maximum allowed value of lambda
Constant Field Values
• math

public MatrixMath<S> math
• residuals

public DMatrixRMaj residuals
• diagOrig

public DMatrixRMaj diagOrig
• diagStep

public DMatrixRMaj diagStep
• lambda

protected double lambda
Damping parameter. Scalar that is adjusted at every step; smaller values produce a Gauss-Newton step and larger values a gradient step.
• nu

protected double nu
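In the reference above by Madsen, Nielsen & Tingleff, nu is the factor by which lambda grows when a trial step is rejected, and it is reset once a step is accepted. A hedged sketch of that standard update rule (DDogleg's exact rule may differ; rho is the ratio of actual to predicted cost reduction):

```java
// Sketch of a Nielsen-style update of lambda and nu after each trial step.
// This mirrors the rule in Madsen, Nielsen & Tingleff; it is an illustration,
// not a copy of DDogleg's implementation.
public class LambdaUpdateSketch {
    double lambda = 1e-3;  // damping: smaller => Gauss-Newton-like step
    double nu = 2;         // growth factor applied when a step is rejected

    boolean acceptStep(double rho) {
        if (rho > 0) {               // cost decreased: accept and relax damping
            double f = 2 * rho - 1;
            lambda *= Math.max(1.0 / 3.0, 1 - f * f * f);
            nu = 2;                  // reset the growth factor
            return true;
        } else {                     // cost did not decrease: reject, raise damping
            lambda *= nu;
            nu *= 2;                 // grow faster on repeated rejections
            return false;
        }
    }
}
```

Because nu doubles on every rejection, a long run of failed steps drives lambda up geometrically, which is why the class caps both values (see maximumLambdaNu below).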
• Constructor Details

• LevenbergMarquardt_F64

protected LevenbergMarquardt_F64(MatrixMath<S> math, HM hessian)
• Method Details

• initialize

public void initialize(double[] initial, int numberOfParameters, int numberOfFunctions)
Initializes the search.
Parameters:
initial - Initial state
numberOfParameters - number of parameters
numberOfFunctions - number of functions
• updateDerivates

protected boolean updateDerivates()
Computes derived quantities at the current state, such as the Jacobian, gradient, and Hessian approximation
Specified by:
updateDerivates in class GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM extends HessianMath>
Returns:
true if it has converged or false if it has not
• computeStep

protected boolean computeStep()
Compute a new possible state and determine if it should be accepted or not. If accepted update the state
Specified by:
computeStep in class GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM extends HessianMath>
Returns:
true if it has converged or false if it has not
• checkConvergenceFTest

protected boolean checkConvergenceFTest(double fx, double fx_prev)

Checks for convergence using the f-test. The f-test is defined differently for different problems

Returns:
true if converged or false if it hasn't converged
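Since the class notes the f-test is problem-dependent, the form below is only an assumption for illustration: a relative change test on the cost, where ftol is a user-supplied tolerance:

```java
// Hypothetical f-test sketch: converged when the relative drop in cost
// between iterations falls below ftol. Assumed form, not DDogleg's exact test.
public class FTestSketch {
    static boolean checkConvergenceFTest(double fx, double fxPrev, double ftol) {
        return Math.abs(fxPrev - fx) <= ftol * fxPrev;
    }
}
```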
• costFromResiduals

public double costFromResiduals(DMatrixRMaj residuals)
Given the residuals compute the cost
Parameters:
residuals - (Input) residuals/errors
Returns:
Least squares cost
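A minimal sketch of the least-squares cost, assuming the common convention cost = 0.5·||r||² (check the version you use for the exact scale factor):

```java
// Sum-of-squares cost over a residual vector, with the conventional 1/2 factor.
// Plain array instead of DMatrixRMaj; the 1/2 convention is an assumption.
public class CostSketch {
    static double costFromResiduals(double[] residuals) {
        double sum = 0;
        for (double r : residuals) sum += r * r;
        return 0.5 * sum;
    }

    public static void main(String[] args) {
        System.out.println(costFromResiduals(new double[]{3, 4}));  // 0.5 * 25 = 12.5
    }
}
```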
• computeStep

protected boolean computeStep(double lambda, DMatrixRMaj gradient, DMatrixRMaj step)
Adjusts the Hessian's diagonal elements and computes the next step
Parameters:
lambda - (Input) tuning parameter
gradient - (Input) gradient
step - (Output) step
Returns:
true if solver could compute the next step
• maximumLambdaNu

public boolean maximumLambdaNu()
Returns true if lambda or nu has grown so large that it has reached its limit and optimization should stop
• computeResiduals

protected abstract void computeResiduals(DMatrixRMaj x, DMatrixRMaj residuals)
Computes the residuals at state 'x'
Parameters:
x - (Input) state
residuals - (Output) residuals F(x) - Y
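A concrete subclass fills this in with its own model. For example, fitting y = a·exp(b·t) to sampled data produces one residual per sample; the model, names, and array layout below are assumptions for illustration only:

```java
// Hypothetical residual computation for fitting y = a*exp(b*t) to data.
// Plain arrays stand in for DMatrixRMaj; this is a sketch, not DDogleg code.
public class CurveFitResiduals {
    // x = {a, b} is the state; t and y are the observed samples.
    static void computeResiduals(double[] x, double[] t, double[] y, double[] residuals) {
        double a = x[0], b = x[1];
        for (int i = 0; i < t.length; i++)
            residuals[i] = a * Math.exp(b * t[i]) - y[i];  // F(x) - Y
    }
}
```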
• configure

public void configure(ConfigLevenbergMarquardt config)
Changes the optimization's configuration
Parameters:
config - New configuration