Iteratively reweighted ℓ_1 algorithms with extrapolation

10/22/2017
by Peiran Yu, et al.

The iteratively reweighted ℓ_1 algorithm is a popular method for solving a large class of optimization problems whose objective is the sum of a Lipschitz differentiable loss function and a possibly nonconvex sparsity-inducing regularizer. In this paper, motivated by the success of extrapolation techniques in accelerating first-order methods, we study how widely used extrapolation techniques, such as those in [4,5,22,28], can be incorporated to possibly accelerate the iteratively reweighted ℓ_1 algorithm. We consider three versions of such algorithms. For each version, we exhibit an explicitly checkable condition on the extrapolation parameters under which the generated sequence provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka-Łojasiewicz assumptions on certain potential functions. Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in [21] and an adaptation of the iteratively reweighted ℓ_1 algorithm in [23, Algorithm 7] with nonmonotone line search for solving random instances of log-penalty regularized least squares problems, in terms of both CPU time and solution quality.
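To make the approach concrete, below is a minimal Python sketch of an extrapolated iteratively reweighted ℓ_1 iteration for the log-penalty regularized least squares problem min_x (1/2)||Ax - b||^2 + lam * sum_i log(1 + |x_i|/eps). It uses a FISTA-style momentum coefficient capped by a constant as a stand-in for the paper's checkable condition on the extrapolation parameters; the names irl1_extrapolated, beta_cap, and eps are illustrative, not the paper's notation, and this is one plausible variant of the scheme rather than the authors' exact algorithm.

    import numpy as np

    def irl1_extrapolated(A, b, lam=0.1, eps=0.5, beta_cap=0.9,
                          max_iter=500, tol=1e-8):
        # Sketch (names and defaults are illustrative):
        # min 0.5*||Ax - b||^2 + lam * sum(log(1 + |x_i|/eps))
        n = A.shape[1]
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the loss gradient
        x_prev = np.zeros(n)
        x = np.zeros(n)
        t_prev = 1.0
        for _ in range(max_iter):
            t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
            beta = min((t_prev - 1.0) / t, beta_cap)   # capped FISTA-style momentum
            y = x + beta * (x - x_prev)                # extrapolated point
            w = lam / (eps + np.abs(x))                # reweighting from the log penalty
            z = y - A.T @ (A @ y - b) / L              # gradient step at y
            x_new = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-threshold
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                x = x_new
                break
            x_prev, x, t_prev = x, x_new, t
        return x

Each iteration linearizes the concave log penalty at the current iterate to obtain per-coordinate weights, then takes a proximal gradient (soft-thresholding) step on the resulting weighted ℓ_1 model from the extrapolated point; capping beta reflects the kind of explicit restriction on the extrapolation parameters needed for convergence guarantees in the nonconvex setting.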


Related research:

Nonconvex and Nonsmooth Sparse Optimization via Adaptively Iterative Reweighted Methods (10/24/2018)
We present a general formulation of nonconvex and nonsmooth sparse optim...

Proximal Gradient Method with Extrapolation and Line Search for a Class of Nonconvex and Nonsmooth Problems (11/18/2017)
In this paper, we consider a class of possibly nonconvex, nonsmooth and ...

A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems (03/18/2013)
Non-convex sparsity-inducing penalties have recently received considerab...

A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems (10/16/2017)
We consider a class of nonconvex nonsmooth optimization problems whose o...

Regularized asymptotic descents for nonconvex optimization (04/05/2020)
In this paper we propose regularized asymptotic descent (RAD) methods fo...

Maximum Consensus Parameter Estimation by Reweighted ℓ_1 Methods (03/22/2018)
Robust parameter estimation in computer vision is frequently accomplishe...

The Global Optimization Geometry of Shallow Linear Neural Networks (05/13/2018)
We examine the squared error loss landscape of shallow linear neural net...
