An error bound for Lasso and Group Lasso in high dimensions

12/21/2019
by Antoine Dedieu, et al.

We leverage recent advances in high-dimensional statistics to derive new L2 estimation upper bounds for Lasso and Group Lasso in high dimensions. For Lasso, our bounds scale as (k^*/n) log(p/k^*), where n × p is the size of the design matrix and k^* is the sparsity of the ground truth β^*, and they match the optimal minimax rate. For Group Lasso, our bounds scale as (s^*/n) log(G/s^*) + m^*/n, where G is the total number of groups and m^* is the number of coefficients in the s^* groups containing β^*, and they improve over existing results. We additionally show that when the signal is strongly group-sparse, Group Lasso outperforms Lasso.
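As a rough numerical illustration of the Lasso rate quoted above (not the paper's own experiment), the sketch below generates a k^*-sparse signal, fits Lasso with scikit-learn, and compares the squared L2 estimation error to the minimax scale (k^*/n) log(p/k^*). The dimensions, noise level, and regularization choice alpha ≈ σ√(log p / n) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative sketch: k*-sparse ground truth, Gaussian design and noise.
# All constants here are assumptions for demonstration, not from the paper.
rng = np.random.default_rng(0)
n, p, k_star = 200, 1000, 5

beta_star = np.zeros(p)
beta_star[:k_star] = 1.0                      # ground-truth support
X = rng.standard_normal((n, p))
y = X @ beta_star + 0.5 * rng.standard_normal(n)

# A common theory-driven regularization scale: alpha ~ sigma * sqrt(log(p)/n)
alpha = 0.5 * np.sqrt(np.log(p) / n)
beta_hat = Lasso(alpha=alpha, max_iter=10_000).fit(X, y).coef_

sq_err = np.sum((beta_hat - beta_star) ** 2)   # squared L2 estimation error
rate = (k_star / n) * np.log(p / k_star)       # (k*/n) log(p/k*) from the bound
print(f"squared L2 error: {sq_err:.4f}  |  rate (k*/n)log(p/k*): {rate:.4f}")
```

The bound says the squared error should be on the order of the printed rate (up to constants depending on noise and design), which this small simulation is consistent with.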
