Risk Bounds for Robust Deep Learning

09/14/2020
by Johannes Lederer, et al.

It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data. In this paper, we support these empirical findings with statistical theory. We especially show that empirical-risk minimization with unbounded, Lipschitz-continuous loss functions, such as the least-absolute deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss, can provide efficient prediction under minimal assumptions on the data. More generally speaking, our paper provides theoretical evidence for the benefits of robust loss functions in deep learning.
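As a rough illustration of the losses named in the abstract, below is a minimal sketch (not taken from the paper) of the least-absolute deviation, Huber, Cauchy, and Tukey biweight losses written as functions of a residual; the scale parameters delta and c are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch of the robust losses named in the abstract,
# written as functions of the residual r = y - prediction.
# The scale parameters (delta, c) are hypothetical example values.

def least_absolute_deviation(r):
    return np.abs(r)

def huber(r, delta=1.0):
    # Quadratic near zero, linear in the tails.
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

def cauchy(r, c=1.0):
    # Grows only logarithmically for large residuals.
    return 0.5 * c**2 * np.log(1.0 + (r / c)**2)

def tukey_biweight(r, c=4.685):
    # Constant (flat) for residuals with |r| > c.
    inside = (c**2 / 6.0) * (1.0 - (1.0 - (r / c)**2)**3)
    return np.where(np.abs(r) <= c, inside, c**2 / 6.0)

if __name__ == "__main__":
    residuals = np.array([-3.0, -1.0, 0.0, 0.5, 2.0, 10.0])
    for name, fn in [("LAD", least_absolute_deviation), ("Huber", huber),
                     ("Cauchy", cauchy), ("Tukey", tukey_biweight)]:
        print(name, np.round(fn(residuals), 3))
```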


Related research

02/27/2017 · Uniform Deviation Bounds for Unbounded Loss Functions like k-Means
Uniform deviation bounds limit the difference between a model's expected...

09/07/2016 · Chaining Bounds for Empirical Risk Minimization
This paper extends the standard chaining technique to prove excess risk...

03/12/2020 · Benign overfitting in the large deviation regime
We investigate the benign overfitting phenomenon in the large deviation...

06/17/2020 · Regularized ERM on random subspaces
We study a natural extension of classical empirical risk minimization, w...

05/11/2021 · Spectral risk-based learning using unbounded losses
In this work, we consider the setting of learning problems under a wide...

02/27/2018 · Multi-Observation Regression
Recent work introduced loss functions which measure the error of a predi...

03/09/2020 · Risk Analysis of Divide-and-Conquer ERM
Theoretical analysis of the divide-and-conquer based distributed learnin...
