Assessing Generalization of SGD via Disagreement

by Yiding Jiang, et al.

We empirically show that the test error of deep networks can be estimated by simply training the same architecture on the same training set with a different run of Stochastic Gradient Descent (SGD), and measuring the disagreement rate between the two networks on unlabeled test data. This builds on, and is a stronger version of, the observation of Nakkiran and Bansal '20, which requires the second run to be trained on an altogether fresh training set. We further show theoretically that this peculiar phenomenon arises from the well-calibrated nature of ensembles of SGD-trained models. This finding not only provides a simple empirical measure for directly predicting the test error from unlabeled test data, but also establishes a new conceptual connection between generalization and calibration.
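The estimator described in the abstract reduces to a simple computation once the two runs exist: train the same architecture twice with different SGD seeds, collect each run's predicted labels on the unlabeled test inputs, and report the fraction of points on which the predictions differ. A minimal sketch of that final step (the prediction arrays here are hypothetical stand-ins for the outputs of two trained networks, not the authors' code):

```python
import numpy as np

def disagreement_rate(preds_a, preds_b):
    """Fraction of unlabeled test points on which two runs disagree."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Hypothetical predicted labels from two SGD runs of the same
# architecture on the same training set (only the seed differs).
run_a = np.array([0, 1, 1, 0, 1, 0, 1, 1])
run_b = np.array([0, 1, 0, 0, 1, 1, 1, 1])

# 2 of the 8 points differ, so the estimated test error is 0.25.
print(disagreement_rate(run_a, run_b))
```

No test labels enter the computation; under the paper's thesis, this disagreement rate tracks the (unknown) test error of each run.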




Bad Global Minima Exist and SGD Can Reach Them

Several recent works have aimed to explain why severely overparameterize...

SGD Implicitly Regularizes Generalization Error

We derive a simple and model-independent formula for the change in the g...

A Note on "Assessing Generalization of SGD via Disagreement"

Jiang et al. (2021) give empirical evidence that the average test error ...

On the different regimes of Stochastic Gradient Descent

Modern deep networks are trained with stochastic gradient descent (SGD) ...

Revisiting Natural Gradient for Deep Networks

We evaluate natural gradient, an algorithm originally proposed in Amari ...

Studying Generalization Through Data Averaging

The generalization of machine learning models has a complex dependence o...

Rip van Winkle's Razor: A Simple Estimate of Overfit to Test Data

Traditional statistics forbids use of test data (a.k.a. holdout data) du...
