Regularization, sparse recovery, and median-of-means tournaments

01/15/2017
by Gábor Lugosi, et al.

A regularized risk minimization procedure for regression function estimation is introduced that achieves near-optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables. The procedure is based on median-of-means tournaments, introduced by the authors in [8]. The new procedure is shown to outperform standard regularized empirical risk minimization procedures, such as LASSO or SLOPE, in heavy-tailed problems.
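For context, the basic median-of-means mean estimator underlying these tournaments splits a sample into blocks, averages each block, and returns the median of the block means, which is what makes it robust to heavy tails. Below is a minimal illustrative sketch in Python; the function name, block count k, and toy data are illustrative assumptions, and the paper's tournament procedure for regression is considerably more involved than this building block.

    import numpy as np

    def median_of_means(x, k):
        """Median-of-means estimate of the mean of sample x using k blocks."""
        x = np.asarray(x, dtype=float)
        n = (len(x) // k) * k            # drop the remainder so blocks have equal size
        blocks = x[:n].reshape(k, -1)    # k blocks of size n // k
        return np.median(blocks.mean(axis=1))

    # Toy usage on a heavy-tailed sample (Student-t with 2.5 degrees of freedom)
    rng = np.random.default_rng(0)
    sample = rng.standard_t(df=2.5, size=10_000)
    print(median_of_means(sample, k=20))  # close to the true mean 0, despite heavy tails

Unlike the empirical mean, this estimator achieves sub-Gaussian deviation bounds under only a finite-variance assumption, which is the robustness property the tournament-based regression procedure builds on.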


Related research

- On Empirical Risk Minimization with Dependent and Heavy-Tailed Data (09/06/2021)
  In this work, we establish risk bounds for the Empirical Risk Minimizati...

- Mean estimation and regression under heavy-tailed distributions--a survey (06/10/2019)
  We survey some of the recent advances in mean estimation and regression ...

- Structured Recovery with Heavy-tailed Measurements: A Thresholding Procedure and Optimal Rates (04/16/2018)
  This paper introduces a general regularized thresholded least-square pro...

- Robust high dimensional learning for Lipschitz and convex losses (05/10/2019)
  We establish risk bounds for Regularized Empirical Risk Minimizers (RERM...

- An optimal unrestricted learning procedure (07/17/2017)
  We study learning problems in the general setup, for arbitrary classes o...

- On Monte-Carlo methods in convex stochastic optimization (01/19/2021)
  We develop a novel procedure for estimating the optimizer of general con...

- Robust variance-regularized risk minimization with concomitant scaling (01/27/2023)
  Under losses which are potentially heavy-tailed, we consider the task of...
