A Unifying Framework of High-Dimensional Sparse Estimation with Difference-of-Convex (DC) Regularizations

12/18/2018
by Shanshan Cao, et al.

Under the linear regression framework, we study the variable selection problem when the underlying model is assumed to have a small number of nonzero coefficients, i.e., when the underlying linear model is sparse. Non-convex penalties of specific forms are well studied in the sparse estimation literature. A recent work (ahn2016difference) pointed out that nearly all existing non-convex penalties can be represented as difference-of-convex (DC) functions, that is, functions expressible as the difference of two convex functions, which need not be convex themselves. There is a large literature on optimization problems whose objectives and/or constraints involve DC functions, and efficient numerical methods have been developed for them. Under the DC framework, the natural solution concept is the directional-stationary (d-stationary) solution, which is in general not unique. In this paper, we show that under mild conditions, a certain subset of the d-stationary solutions of an optimization problem with a DC objective enjoys three desirable statistical properties: asymptotic estimation consistency, asymptotic model selection consistency, and asymptotic efficiency. These are the properties that have been proven separately by many researchers for a range of specific non-convex penalties in sparse estimation. Our assumptions are either weaker than or comparable to the conditions adopted in those existing works. This work shows that the DC framework offers a unified treatment of existing non-convex-penalty approaches, and it thereby bridges the optimization and statistics communities.
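To make the DC representation concrete, here is an illustrative sketch that is not taken from the paper itself. One popular folded concave penalty, the minimax concave penalty (MCP) with parameters λ > 0 and γ > 1, decomposes as p_{λ,γ}(t) = λ|t| − h(t), where h is the Huber-type convex function below:

```latex
% DC decomposition of MCP: p(t) = \lambda|t| - h(t), with h convex.
\[
h(t) =
\begin{cases}
  \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
  \lambda|t| - \dfrac{\gamma\lambda^2}{2}, & |t| > \gamma\lambda.
\end{cases}
\]
```

Below is a minimal Python sketch of the classical difference-of-convex algorithm (DCA) applied to MCP-penalized least squares under the decomposition above. This is our own illustration of the generic DC approach, not the authors' code, and the names (dca_mcp, h_grad) are hypothetical. Each outer step linearizes the concave part −h at the current iterate and solves the resulting convex lasso-type subproblem by proximal gradient descent:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the prox operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def h_grad(b, lam, gamma):
    """Gradient of the convex part h(t) = lam*|t| - p_MCP(t):
    h'(t) = sign(t) * min(|t|, gamma*lam) / gamma."""
    return np.sign(b) * np.minimum(np.abs(b), gamma * lam) / gamma

def dca_mcp(X, y, lam=0.1, gamma=3.0, n_outer=20, n_inner=200):
    """DCA for (1/2n)||y - Xb||^2 + sum_j p_MCP(b_j),
    with p_MCP written as lam*|t| - h(t)."""
    n, p = X.shape
    b = np.zeros(p)
    # Lipschitz constant of the smooth gradient: sigma_max(X)^2 / n.
    L = np.linalg.norm(X, 2) ** 2 / n
    for _ in range(n_outer):
        g = h_grad(b, lam, gamma)      # linearize the concave part -h at b
        u = b.copy()
        for _ in range(n_inner):
            # ISTA on the convex subproblem:
            #   min_u (1/2n)||y - Xu||^2 - g.u + lam*||u||_1
            grad = X.T @ (X @ u - y) / n - g
            u = soft_threshold(u - grad / L, lam / L)
        b = u
    return b

# Example usage on synthetic sparse data:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50)
beta[:5] = 2.0
y = X @ beta + 0.5 * rng.standard_normal(200)
print(dca_mcp(X, y, lam=0.2).round(2))
```

Note that plain DCA is only guaranteed to converge to critical points of the DC objective; d-stationary points, the solution class analyzed in the paper, form a strictly sharper notion and in general require enhanced DCA variants to compute.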


