Iterative Smoothing Proximal Gradient for Regression with Structured Sparsity

05/31/2016
by   Fouad Hadj-Selem, et al.

In the context of high-dimensional predictive models, we consider the problem of optimizing the sum of a smooth convex loss, a non-smooth convex penalty whose proximal operator is known, and a non-smooth convex structured penalty such as total variation or overlapping group lasso. We propose to smooth the structured penalty, which yields a generic framework in which a large range of non-smooth convex structured penalties can be minimized without computing their proximal operators, which are either unknown or expensive to compute. The resulting problem can be minimized with an accelerated proximal gradient method that still exploits the exact proximal operator of the (non-smoothed) sparsity-inducing penalty. We propose an expression of the duality gap to control the convergence of the global non-smooth problem; this expression is applicable to a large range of structured penalties. However, smoothing methods suffer from well-known limitations, which the proposed solver aims to overcome. We therefore propose a continuation algorithm, called CONESTA, that dynamically generates a decreasing sequence of smoothing parameters in order to maintain the optimal convergence speed towards any globally desired precision. At each continuation step, the aforementioned duality gap provides the current error and thus the next, smaller, prescribed precision. Given this precision, we propose an expression for the optimal smoothing parameter, the one that minimizes the number of iterations required to reach that precision. We demonstrate that CONESTA achieves an improved convergence rate compared to classical proximal gradient smoothing (without continuation). Moreover, experiments conducted on both simulated and high-dimensional neuroimaging (MRI) data show that CONESTA significantly outperforms the excessive gap method, ADMM, classical proximal gradient smoothing and inexact FISTA in terms of convergence speed and/or precision of the solution.
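The continuation idea is easiest to see on a small example. The following Python sketch is not the paper's implementation: it combines a least-squares loss, an l1 penalty handled by its exact proximal operator, and a one-dimensional total-variation penalty handled by Nesterov smoothing, solved with FISTA inside an outer loop that shrinks the smoothing parameter. The duality-gap-driven precision control and the paper's optimal smoothing-parameter expression are replaced here by a simple geometric schedule; all function names, constants and data are illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (the exact prox of the l1 penalty).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def smoothed_tv_grad(beta, mu):
    # Gradient of the Nesterov-smoothed 1D total variation sum_i |beta[i+1] - beta[i]|.
    d = np.diff(beta)                     # A @ beta, with A the finite-difference operator
    alpha = np.clip(d / mu, -1.0, 1.0)    # optimal dual variable, projected on the l-inf ball
    grad = np.zeros_like(beta)
    grad[:-1] -= alpha                    # A.T @ alpha
    grad[1:] += alpha
    return grad

def fista(X, y, beta, lam_l1, lam_tv, mu, n_iter):
    # Accelerated proximal gradient on (loss + smoothed TV), with an l1 prox step.
    # Lipschitz constant: ||X||_2^2 for the loss plus lam_tv * ||A||_2^2 / mu,
    # where ||A||_2^2 <= 4 for the 1D finite-difference operator.
    L = np.linalg.norm(X, 2) ** 2 + lam_tv * 4.0 / mu
    beta_old, z, t = beta.copy(), beta.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) + lam_tv * smoothed_tv_grad(z, mu)
        beta = soft_threshold(z - grad / L, lam_l1 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = beta + (t - 1.0) / t_new * (beta - beta_old)
        beta_old, t = beta.copy(), t_new
    return beta

def continuation(X, y, lam_l1=0.1, lam_tv=0.5, mu0=1.0, n_outer=8, n_inner=200):
    # Outer loop: restart FISTA with a smaller smoothing parameter each round.
    # CONESTA instead derives the next precision from the duality gap and the
    # next mu from its optimal-mu expression; the factor 0.5 is a placeholder.
    beta, mu = np.zeros(X.shape[1]), mu0
    for _ in range(n_outer):
        beta = fista(X, y, beta, lam_l1, lam_tv, mu, n_inner)
        mu *= 0.5
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 100))
beta_true = np.zeros(100)
beta_true[40:60] = 1.0                    # sparse, piecewise-constant ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(50)
beta_hat = continuation(X, y)
print("non-zero coefficients:", int(np.sum(np.abs(beta_hat) > 1e-3)))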


Related research:

02/14/2012 · Smoothing Proximal Gradient Method for General Structured Sparse Learning
We study the problem of learning high dimensional regression models regu...

02/07/2021 · Structured Sparsity Inducing Adaptive Optimizers for Deep Learning
The parameters of a neural network are naturally organized in groups, so...

07/21/2014 · Predictive support recovery with TV-Elastic Net penalty and logistic regression: an application to structural MRI
The use of machine-learning in neuroimaging offers new perspectives in e...

10/04/2019 · Inexact Online Proximal-gradient Method for Time-varying Convex Optimization
This paper considers an online proximal-gradient method to track the min...

12/22/2015 · FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging
The total variation (TV) penalty, as many other analysis-sparsity proble...

03/05/2016 · A single-phase, proximal path-following framework
We propose a new proximal, path-following framework for a class of const...

09/08/2009 · Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping
We consider the problem of estimating a sparse multi-response regression...
