Asymptotic properties for combined L_1 and concave regularization

05/11/2016
by Yingying Fan, et al.

Two important goals of high-dimensional modeling are prediction and variable selection. In this article, we consider regularization with combined L_1 and concave penalties, and study the sampling properties of the global optimum of the suggested method in ultra-high-dimensional settings. The L_1 penalty provides the minimum regularization needed to remove noise variables and achieve the oracle prediction risk, while the concave penalty imposes additional regularization to control model sparsity. In the linear model setting, we prove that the global optimum of our method enjoys the same oracle inequalities as the lasso estimator and admits an explicit bound on the false sign rate, which can be asymptotically vanishing. Moreover, we establish oracle risk inequalities for the method and the sampling properties of computable solutions. Numerical studies suggest that our method yields more stable estimates than using a concave penalty alone.
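To make the combined penalty concrete, the sketch below evaluates a penalized least-squares objective using the L_1 penalty plus the SCAD concave penalty of Fan and Li (2001) as the concave component. The function names, the choice of SCAD as the concave penalty, and the tuning parameters `lam0`, `lam1`, and `a` are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty (Fan & Li, 2001), applied elementwise.

    p(t) = lam*|t|                              if |t| <= lam
         = (2*a*lam*|t| - t^2 - lam^2)
           / (2*(a - 1))                        if lam < |t| <= a*lam
         = lam^2 * (a + 1) / 2                  if |t| > a*lam
    """
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def combined_penalty(beta, lam0, lam1, a=3.7):
    """Combined penalty: lam0 * |beta| (L_1 part) + SCAD(beta; lam1, a)."""
    return lam0 * np.abs(beta) + scad_penalty(beta, lam1, a)

def objective(y, X, beta, lam0, lam1, a=3.7):
    """Penalized least-squares objective with the combined penalty."""
    n = len(y)
    rss = np.sum((y - X @ beta) ** 2) / (2 * n)
    return rss + np.sum(combined_penalty(beta, lam0, lam1, a))
```

Note that the SCAD component is flat beyond `a*lam1`, so large coefficients incur only the constant-plus-L_1 cost, which reduces the bias that a pure L_1 penalty places on strong signals, while the L_1 part keeps small coefficients shrunk toward zero.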

