Error Bounds for Generalized Group Sparsity

08/08/2020
by Xinyu Zhang, et al.

In high-dimensional statistical inference, sparsity regularization has shown advantages in consistency and convergence rates for coefficient estimation. We consider a generalized version of the Sparse-Group Lasso that captures both element-wise sparsity and group-wise sparsity simultaneously. We state a single universal theorem that yields consistency and convergence-rate results for different forms of double sparsity regularization. The universality of the results lies in their generalization of known convergence rates for single-regularization cases such as the LASSO and the group LASSO, as well as double-regularization cases such as the sparse-group LASSO. Our analysis identifies a generalization of the ϵ-norm, which provides a dual formulation for our double sparsity regularization.
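For concreteness, the classical sparse-group LASSO objective that this generalized double sparsity regularization contains as a special case can be written as below. The notation here (penalty weights λ1 and λ2, group weights w_g, groups g = 1, ..., G) is the standard one and is used only for illustration; it is not necessarily the paper's exact formulation.

\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\,\|y - X\beta\|_2^2 \;+\; \lambda_1 \|\beta\|_1 \;+\; \lambda_2 \sum_{g=1}^{G} w_g \,\|\beta_{(g)}\|_2

The ℓ1 term induces element-wise sparsity and the unsquared group ℓ2 term induces group-wise sparsity; setting λ2 = 0 or λ1 = 0 recovers the LASSO and group LASSO cases mentioned above.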


Related research:

- Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference (09/21/2019): In this paper, we study sparse group Lasso for high-dimensional double s...
- Structured Sparsity and Generalization (08/17/2011): We present a data dependent generalization bound for a large class of re...
- Support recovery and sup-norm convergence rates for sparse pivotal estimation (01/15/2020): In high dimensional sparse regression, pivotal estimators are estimators...
- Localized Lasso for High-Dimensional Regression (03/22/2016): We introduce the localized Lasso, which is suited for learning models th...
- Ultra high dimensional generalized additive model: Unified Theory and Methods (08/15/2020): Generalized additive model is a powerful statistical learning and predic...
- Adaptive Optimizers with Sparse Group Lasso for Neural Networks in CTR Prediction (07/30/2021): We develop a novel framework that adds the regularizers of the sparse gr...
- Near-Ideal Behavior of Compressed Sensing Algorithms (01/26/2014): In a recent paper, it is shown that the LASSO algorithm exhibits "near-i...
