A Statistical Learning Assessment of Huber Regression

09/27/2020
by Yunlong Feng et al.

As one of the triumphs and milestones of robust statistics, Huber regression plays an important role in robust inference and estimation. It has also found a great variety of applications in machine learning. In a parametric setup, it has been extensively studied. However, in the statistical learning context, where a function is typically learned in a nonparametric way, there is still a lack of theoretical understanding of how Huber regression estimators learn the conditional mean function and why they work in the absence of light-tailed noise assumptions. To address these fundamental questions, we conduct an assessment of Huber regression from a statistical learning viewpoint. First, we show that the usual risk consistency property of Huber regression estimators, which is typically pursued in machine learning, cannot guarantee their learnability in mean regression. Second, we argue that Huber regression should be implemented adaptively to perform mean regression, meaning that one needs to tune the scale parameter in accordance with the sample size and the moment condition of the noise. Third, with an adaptive choice of the scale parameter, we demonstrate that Huber regression estimators can be asymptotically mean regression calibrated under (1+ϵ)-moment conditions (ϵ>0). Last but not least, under the same moment conditions, we establish almost sure convergence rates for Huber regression estimators. Note that the (1+ϵ)-moment conditions accommodate the special case where the response variable possesses infinite variance, and so the established convergence rates justify the robustness feature of Huber regression estimators. In these senses, the present study provides a systematic statistical learning assessment of Huber regression estimators and justifies their merits in terms of robustness from a theoretical viewpoint.
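To make the adaptive-scale idea concrete, here is a minimal sketch of linear Huber regression fit by gradient descent, with the scale parameter grown with the sample size. The specific rate `sigma = n**0.25` and the gradient-descent fitting routine are illustrative assumptions, not the paper's prescription; the paper calibrates the scale to the moment condition of the noise.

```python
import numpy as np

def huber_grad(r, sigma):
    # Derivative of the Huber loss in the residual r:
    # equals r inside [-sigma, sigma], and sigma * sign(r) outside,
    # so large residuals receive a bounded (clipped) influence.
    return np.clip(r, -sigma, sigma)

def huber_regression(X, y, sigma, lr=0.1, n_iter=500):
    # Plain gradient descent on the empirical Huber risk (illustration only).
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ w - y
        w -= lr * (X.T @ huber_grad(r, sigma)) / len(y)
    return w

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Student-t noise with 2 degrees of freedom: finite (1+eps)-moments for
# eps < 1 but infinite variance -- the heavy-tailed regime the paper targets.
noise = rng.standard_t(df=2, size=n)
y = X @ np.array([1.0, 3.0]) + noise

# Adaptive scale: let sigma diverge slowly with n (hypothetical rate n^{1/4}),
# so the estimator approaches the conditional mean as n grows.
w_hat = huber_regression(X, y, sigma=n**0.25)
```

Despite the infinite-variance noise, the clipped gradient keeps outliers from dominating the fit, so `w_hat` lands close to the true coefficients `[1.0, 3.0]`.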


research
06/19/2020

New Insights into Learning with Correntropy Based Regression

Stemming from information-theoretic learning, the correntropy criterion ...
research
03/01/2018

Learning with Correntropy-induced Losses for Regression with Mixture of Symmetric Stable Noise

In recent years, correntropy and its applications in machine learning ha...
research
12/09/2021

Regularized Modal Regression on Markov-dependent Observations: A Theoretical Assessment

Modal regression, a widely used regression protocol, has been extensivel...
research
07/11/2023

Semiparametric Shape-restricted Estimators for Nonparametric Regression

Estimating the conditional mean function that relates predictive covaria...
research
01/10/2022

Non-Asymptotic Guarantees for Robust Statistical Learning under (1+ε)-th Moment Assumption

There has been a surge of interest in developing robust estimators for m...
research
05/07/2018

Robustness of shape-restricted regression estimators: an envelope perspective

Classical least squares estimators are well-known to be robust with resp...
