Class LevenbergMarquardt_F64<S extends DMatrix,HM extends HessianMath>
- Direct Known Subclasses:
UnconLeastSqLevenbergMarquardt_F64,
UnconLeastSqLevenbergMarquardtSchur_F64
Implementation of Levenberg-Marquardt non-linear least squares optimization. At each iteration the following
is solved for:
(G[k] + v*I)x[k] = -g[k] with v ≥ 0
where G[k] = F[k]ᵀF[k] is an approximation to the Hessian and is positive semi-definite, F[k] is
the Jacobian, v is a tuning parameter that is either a scalar or a vector, x[k] is the step being estimated,
and g[k] is the gradient.
Levenberg-Marquardt is a trust-region method but was formulated before trust-region methods had been defined. At each iteration it adjusts the value of 'v'. For smaller values it is closer to a Gauss-Newton step and has superlinear convergence, while for larger values it becomes a gradient step.
Levenberg's formulation is as follows:
(JᵀJ + λI)x = -Jᵀr
where λ is adjusted at each iteration.
Marquardt's formulation is as follows:
(JᵀJ + λ diag(JᵀJ))x = -Jᵀr
where λ is adjusted at each iteration.
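The two formulations differ only in what is added to the diagonal of JᵀJ. The following is a minimal sketch in plain Java (not DDogleg's API; the one-parameter problem and method names are illustrative only) showing how λ trades off between a Gauss-Newton step and a shrunken gradient-direction step:

```java
// Hedged sketch: solves the damped normal equations for a single-parameter
// least-squares problem, contrasting Levenberg's identity damping (lambda*I)
// with Marquardt's diagonal scaling (lambda*diag(J^T J)).
public class DampedStepSketch {
    // Levenberg: (J^T J + lambda) x = -J^T r   (1-D case, identity damping)
    static double levenbergStep(double J, double r, double lambda) {
        return -(J * r) / (J * J + lambda);
    }

    // Marquardt: (J^T J + lambda*J^T J) x = -J^T r   (1-D case, diagonal scaling)
    static double marquardtStep(double J, double r, double lambda) {
        return -(J * r) / (J * J * (1.0 + lambda));
    }

    public static void main(String[] args) {
        double J = 2.0, r = 4.0;
        // lambda = 0 recovers the Gauss-Newton step x = -r/J = -2
        System.out.println(levenbergStep(J, r, 0.0));
        // a large lambda shrinks the step toward zero along the gradient direction
        System.out.println(levenbergStep(J, r, 1e6));
        System.out.println(marquardtStep(J, r, 1.0));
    }
}
```

Note that for Marquardt's form the damping is scale-invariant: rescaling a parameter rescales its diagonal entry of JᵀJ along with it.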
- [1] K. Madsen, H. B. Nielsen, and O. Tingleff, "Methods for Non-Linear Least Squares Problems (2nd ed.)", Informatics and Mathematical Modelling, Technical University of Denmark
- [2] Peter Abeles, "DDogleg Technical Report: Nonlinear Optimization", Revision 1, August 2018
- [3] Levenberg, Kenneth (1944). "A Method for the Solution of Certain Non-Linear Problems in Least Squares". Quarterly of Applied Mathematics. 2: 164–168.
-
Nested Class Summary
Nested classes/interfaces inherited from class org.ddogleg.optimization.GaussNewtonBase_F64
GaussNewtonBase_F64.Mode
-
Field Summary
Modifier and Type / Field / Description
protected double lambda
    Dampening parameter.
static final double MAX_LAMBDA
    Maximum allowed value of lambda
protected double nu
Fields inherited from class org.ddogleg.optimization.GaussNewtonBase_F64
config, ftest_val, fx, gradient, gtest_val, hessian, hessianScaling, mode, p, sameStateAsCost, totalFullSteps, totalSelectSteps, verbose, verboseLevel, x, x_next
-
Constructor Summary
-
Method Summary
Modifier and Type / Method / Description
protected boolean checkConvergenceFTest(double fx, double fx_prev)
    Checks for convergence using f-test.
protected abstract void computeResiduals(DMatrixRMaj x, DMatrixRMaj residuals)
    Computes the residuals at state 'x'
protected boolean computeStep()
    Compute a new possible state and determine if it should be accepted or not.
protected boolean computeStep(double lambda, DMatrixRMaj gradient, DMatrixRMaj step)
    Adjusts the Hessian's diagonal elements value and computes the next step
void configure(ConfigLevenbergMarquardt config)
    Changes the optimization's configuration
double costFromResiduals(DMatrixRMaj residuals)
    Given the residuals compute the cost
void initialize(double[] initial, int numberOfParameters, int numberOfFunctions)
    Initializes the search.
boolean maximumLambdaNu()
    If the size of lambda or nu has grown so large it's reached a limit, stop optimizing
protected boolean updateDerivates()
    Computes derived quantities
Methods inherited from class org.ddogleg.optimization.GaussNewtonBase_F64
applyHessianScaling, checkConvergenceGTest, computeHessianScaling, computeHessianScaling, computePredictedReduction, functionGradientHessian, initialize, iterate, mode, setVerbose, undoHessianScalingOnParameters
-
Field Details
-
MAX_LAMBDA
public static final double MAX_LAMBDA
Maximum allowed value of lambda
- See Also:
-
math
-
residuals
-
diagOrig
-
diagStep
-
lambda
protected double lambda
Dampening parameter. A scalar that's adjusted at every step: smaller values produce a Gauss-Newton step and larger values a gradient step. -
nu
protected double nu
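The interplay between lambda and nu can be sketched with the update rule from Madsen & Nielsen [1]. This is a hedged illustration, not necessarily the class's exact rule; the class name, initial values, and the rho argument are assumptions:

```java
// Hedged sketch of a lambda/nu damping update in the style of Madsen &
// Nielsen [1]; DDogleg's actual rule may differ in detail.
public class DampingUpdateSketch {
    double lambda = 1e-3; // dampening parameter
    double nu = 2.0;      // growth factor applied after rejected steps

    // rho = (actual cost reduction) / (predicted cost reduction)
    void update(double rho) {
        if (rho > 0) { // step accepted: shrink lambda toward Gauss-Newton, reset nu
            double f = 2.0 * rho - 1.0;
            lambda *= Math.max(1.0 / 3.0, 1.0 - f * f * f);
            nu = 2.0;
        } else {       // step rejected: grow lambda by nu, and double nu
            lambda *= nu;
            nu *= 2.0;
        }
    }
}
```

Because rejected steps both multiply lambda by nu and double nu, both values grow super-exponentially under repeated rejection, which is why a cap such as maximumLambdaNu() is needed.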
-
-
Constructor Details
-
LevenbergMarquardt_F64
-
-
Method Details
-
initialize
public void initialize(double[] initial, int numberOfParameters, int numberOfFunctions)
Initializes the search.
- Parameters:
initial - Initial state
numberOfParameters - number of parameters
numberOfFunctions - number of functions
-
updateDerivates
protected boolean updateDerivates()
Computes derived quantities
- Specified by:
updateDerivates in class GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM extends HessianMath>
- Returns:
- true if it has converged or false if it has not
-
computeStep
protected boolean computeStep()
Compute a new possible state and determine if it should be accepted or not. If accepted, update the state.
- Specified by:
computeStep in class GaussNewtonBase_F64<ConfigLevenbergMarquardt, HM extends HessianMath>
- Returns:
- true if it has converged or false if it has not
-
checkConvergenceFTest
protected boolean checkConvergenceFTest(double fx, double fx_prev)
Checks for convergence using the f-test. The f-test is defined differently for different problems.
- Returns:
- true if converged or false if it hasn't converged
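A common form of the f-test checks whether the reduction in cost between iterations is a tiny fraction of the current cost. A hedged sketch (the ftol parameter and the exact criterion are assumptions, not necessarily what this class implements):

```java
// Hedged sketch of a relative f-test convergence check.
public class FTestSketch {
    // Converged when the cost reduction is at most ftol times the current cost.
    static boolean checkConvergenceFTest(double fx, double fxPrev, double ftol) {
        return Math.abs(fxPrev - fx) <= ftol * Math.abs(fx);
    }
}
```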
-
costFromResiduals
Given the residuals, compute the cost
- Parameters:
residuals - (Input) residuals/errors
- Returns:
- Least squares cost
-
computeStep
Adjusts the Hessian's diagonal elements value and computes the next step
- Parameters:
lambda - (Input) tuning
gradient - (Input) gradient
step - (Output) step
- Returns:
- true if solver could compute the next step
-
maximumLambdaNu
public boolean maximumLambdaNu()
If the size of lambda or nu has grown so large that it's reached a limit, stop optimizing -
computeResiduals
Computes the residuals at state 'x'
- Parameters:
x - (Input) state
residuals - (Output) residuals F(x) - Y
-
configure
Changes the optimization's configuration
- Parameters:
config - New configuration
-