Regularizers for Structured Sparsity

10/04/2010
by Charles A. Micchelli, et al.

We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in machine learning, statistics and signal processing. It is well known that linear regression can benefit from knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components of this vector can be "relaxed" by regularizing the squared error with a convex penalty function such as the ℓ_1 norm. However, in many applications, additional conditions on the structure of the regression vector and its sparsity pattern are available. Incorporating this information into the learning method may lead to a significant decrease in the estimation error. In this paper, we present a family of convex penalty functions which encode prior knowledge on the structure of the vector formed by the absolute values of the regression coefficients. This family subsumes the ℓ_1 norm and is flexible enough to include different models of sparsity patterns which are of practical and theoretical importance. We establish the basic properties of these penalty functions and discuss some examples where they can be computed explicitly. Moreover, we present a convergent optimization algorithm for solving regularized least squares with these penalty functions. Numerical simulations highlight the benefit of structured sparsity and the advantage our approach offers over the Lasso method and other related methods.
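To make the convex relaxation concrete, here is a minimal sketch of solving the regularized least-squares problem min_w ½‖Xw − y‖² + λ·Ω(w) by proximal gradient descent (ISTA). It uses the ℓ_1 norm for plain sparsity and, purely as one illustrative structured penalty, the group-lasso norm Σ_g ‖w_g‖_2; the penalty family in the paper is more general, and the function names (prox_l1, prox_group_l2, ista) are our own illustration, not the authors' algorithm or code.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_group_l2(v, t, groups):
    """Proximal operator of t * sum_g ||v_g||_2 (group lasso),
    where `groups` is a list of index arrays partitioning the coordinates."""
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * v[g]
    return out

def ista(X, y, lam, prox, n_iter=500):
    """Proximal gradient for min_w 0.5*||Xw - y||^2 + lam * penalty(w)."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = prox(w - step * grad, step * lam)
    return w

# Synthetic example: only the first block of 5 coefficients is active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(100)

w_lasso = ista(X, y, lam=1.0, prox=prox_l1)
groups = [np.arange(5 * k, 5 * (k + 1)) for k in range(4)]
w_group = ista(X, y, lam=1.0,
               prox=lambda v, t: prox_group_l2(v, t, groups))
```

When the true support aligns with the groups, as in this toy setup, the structured penalty tends to recover the support more reliably than the plain ℓ_1 penalty at the same regularization strength, which is the kind of benefit the paper's simulations quantify.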


