A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

01/06/2023
by Aleksandr Y. Aravkin, et al.

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term f(x) = (1/2)‖F(x)‖_2^2 and a nonsmooth term h. Both f and h may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of h using a first-order method such as the proximal gradient method. We establish global convergence to a first-order stationary point of both a trust-region and a regularization variant of the Levenberg-Marquardt method under the assumptions that F and its Jacobian are Lipschitz continuous and h is proper and lower semi-continuous. In the worst case, both methods perform O(ϵ^-2) iterations to bring a measure of stationarity below ϵ ∈ (0, 1). We report numerical results on three examples: a group-lasso basis-pursuit denoise example, a nonlinear support vector machine, and parameter estimation in neuron firing. To make those examples implementable, we describe in detail how to evaluate proximal operators for separable h and for the group lasso with a trust-region constraint. In all cases, the Levenberg-Marquardt methods perform fewer outer iterations than a proximal-gradient method with adaptive step length and a quasi-Newton trust-region method, neither of which exploits the least-squares structure of the problem. Our results also highlight the need for subproblem solvers more sophisticated than simple first-order methods.
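To illustrate the step computation described above, here is a minimal Python sketch of one step of the regularization variant, assuming for concreteness that h = λ‖·‖_1, whose proximal operator is componentwise soft-thresholding. The names lm_prox_step and soft_threshold, the fixed inner iteration count, and all parameter choices are illustrative assumptions, not taken from the paper.

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lm_prox_step(F, J, x, sigma, lam, inner_iters=100):
    """One regularization-variant Levenberg-Marquardt step for
    min_x (1/2)||F(x)||_2^2 + lam*||x||_1.

    The step s approximately minimizes the subproblem
        phi(s) = (1/2)||F(x) + J(x) s||^2 + (sigma/2)||s||^2 + lam*||x + s||_1
    by proximal gradient iterations, as a stand-in for the paper's
    first-order subproblem solver.
    """
    Fx, Jx = F(x), J(x)
    # Step length 1/L, where L bounds the Lipschitz constant of the
    # gradient of the smooth part of phi.
    alpha = 1.0 / (np.linalg.norm(Jx, 2) ** 2 + sigma)
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        grad = Jx.T @ (Fx + Jx @ s) + sigma * s   # gradient of smooth part
        u = s - alpha * grad                      # forward (gradient) step
        # Proximal step on s -> lam*||x + s||_1: shift, threshold, shift back.
        s = soft_threshold(x + u, alpha * lam) - x
    return x + s

# Example usage on a linear residual F(x) = A x - b (hypothetical data).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_new = lm_prox_step(lambda x: A @ x - b, lambda x: A,
                     np.zeros(5), sigma=1.0, lam=0.1)

In a full method, sigma would be increased or decreased from one outer iteration to the next according to how well the model predicts the actual decrease, in the usual Levenberg-Marquardt fashion.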

