The q-Gauss-Newton method for unconstrained nonlinear optimization

05/27/2021
by Danijela Protic, et al.

The q-Gauss-Newton (q-GN) algorithm is an iterative procedure that solves nonlinear unconstrained optimization problems by minimizing the sum of squared residuals of the objective function. The main advantage of the algorithm is that it approximates the matrix of second-order q-derivatives with the first-order q-Jacobian matrix. For that reason, the algorithm is much faster than q-steepest descent algorithms. The convergence of the q-GN method is assured only when the initial guess is close enough to the solution. In this paper, the influence of the parameter q on solving the nonlinear problem is presented through three examples. The results show that the q-GN algorithm finds an optimal solution and speeds up the iterative procedure.
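To make the idea concrete, the following is a minimal Python sketch of a Gauss-Newton iteration in which the classical Jacobian is replaced by a q-Jacobian built from the Jackson q-derivative D_q f(x) = (f(qx) - f(x)) / ((q - 1)x). The function names (q_jacobian, q_gauss_newton), the fallback step near zero, and the curve-fitting example are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def q_jacobian(r, x, q=0.95):
    """q-Jacobian of the residual vector r at x via the Jackson q-derivative.
    Illustrative sketch: uses a small forward-difference step when x_j is ~0."""
    r0 = np.asarray(r(x), dtype=float)
    J = np.zeros((r0.size, x.size))
    for j in range(x.size):
        xq = x.copy()
        if abs(x[j]) > 1e-12:
            xq[j] = q * x[j]
            h = xq[j] - x[j]          # (q - 1) * x_j
        else:
            h = 1e-8                  # fallback step at the origin (assumption)
            xq[j] = x[j] + h
        J[:, j] = (np.asarray(r(xq), dtype=float) - r0) / h
    return J

def q_gauss_newton(r, x0, q=0.95, tol=1e-8, max_iter=50):
    """Minimize 0.5 * ||r(x)||^2 with Gauss-Newton steps built from the
    q-Jacobian instead of the classical Jacobian (hedged sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res = np.asarray(r(x), dtype=float)
        J = q_jacobian(r, x, q)
        # Least-squares solve of J dx ~= -r, i.e. the Gauss-Newton step
        # from the normal equations (J^T J) dx = -J^T r.
        dx = np.linalg.lstsq(J, -res, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical usage: fit y = a * exp(b * t) to synthetic data.
t = np.linspace(0.1, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)
residuals = lambda p: p[0] * np.exp(p[1] * t) - y
print(q_gauss_newton(residuals, x0=[1.0, 1.0], q=0.95))
```

As q approaches 1 the q-derivative reduces to the ordinary derivative, so the sketch recovers the classical Gauss-Newton method in that limit; choices of q below 1 change where the residuals are sampled and, as the abstract notes, can affect the speed of the iteration.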
