Failures of model-dependent generalization bounds for least-norm interpolation

10/16/2020
by Peter L. Bartlett, et al.

We consider bounds on the generalization performance of the least-norm linear regressor, in the over-parameterized regime where it can interpolate the data. We describe a sense in which any generalization bound of a type that is commonly proved in statistical learning theory must sometimes be very loose when applied to analyze the least-norm interpolant. In particular, for a variety of natural joint distributions on training examples, any valid generalization bound that depends only on the output of the learning algorithm, the number of training examples, and the confidence parameter, and that satisfies a mild condition (substantially weaker than monotonicity in sample size), must sometimes be very loose—it can be bounded below by a constant when the true excess risk goes to zero.
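
As a minimal illustration (not part of the paper), the sketch below computes the least-norm interpolant for an over-parameterized linear regression problem: among all weight vectors that fit the training data exactly, it is the one of minimum Euclidean norm, obtained via the Moore-Penrose pseudoinverse. The dimensions, noise level, and planted signal are illustrative assumptions, not the distributions analyzed in the paper.

    import numpy as np

    # Illustrative setup (assumed, not the paper's construction):
    # n training examples, d > n features, so the data can be interpolated.
    rng = np.random.default_rng(0)
    n, d = 50, 500
    X = rng.standard_normal((n, d))
    w_star = np.zeros(d)
    w_star[0] = 1.0                               # planted signal
    y = X @ w_star + 0.1 * rng.standard_normal(n)

    # Least-norm interpolant: the minimum l2-norm solution of X w = y,
    # given by the Moore-Penrose pseudoinverse.
    w_hat = np.linalg.pinv(X) @ y

    print("training MSE:", np.mean((X @ w_hat - y) ** 2))   # ~0: the data are interpolated
    # For isotropic test features x, the excess risk of w_hat over w_star is
    # E[(x @ w_hat - x @ w_star)^2] = ||w_hat - w_star||^2.
    print("excess risk:", float(np.sum((w_hat - w_star) ** 2)))

The paper's negative result concerns bounds that see only the learned w_hat, the sample size n, and the confidence parameter: for some natural data distributions, any such bound (satisfying the mild condition above) stays bounded below by a constant even as the true excess risk of the least-norm interpolant goes to zero.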

research
10/12/2022

On the Importance of Gradient Norm in PAC-Bayesian Bounds

Generalization bounds which assess the difference between the true risk ...
research
10/18/2021

Minimum ℓ_1-norm interpolators: Precise asymptotics and multiple descent

An evolving line of machine learning works observe empirical evidence th...
research
02/23/2020

On the generalization of bayesian deep nets for multi-class classification

Generalization bounds which assess the difference between the true risk ...
research
07/23/2012

Generalization Bounds for Metric and Similarity Learning

Recently, metric learning and similarity learning have attracted a large...
research
04/16/2018

Compressibility and Generalization in Large-Scale Deep Learning

Modern neural networks are highly overparameterized, with capacity to su...
research
08/07/2020

Generalization error of minimum weighted norm and kernel interpolation

We study the generalization error of functions that interpolate prescrib...
research
05/11/2014

Learning from networked examples

Many machine learning algorithms are based on the assumption that traini...
