Compressibility and Generalization in Large-Scale Deep Learning

04/16/2018
by Wenda Zhou, et al.

Modern neural networks are highly overparameterized, with enough capacity to substantially overfit the training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on their compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that the compressibility of models that tend to overfit is limited: we establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. These bounds are complemented by empirical results showing that an increase in overfitting implies an increase in the number of bits required to describe a trained network.
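To make the flavor of such a bound concrete, here is a minimal Occam-style sketch (an illustration of the general idea, not the paper's exact PAC-Bayes result): if a trained network \hat h can be encoded by a prefix-free code c into |c(\hat h)| bits, then with probability at least 1 - \delta over m i.i.d. training examples,

    R(\hat h) \le \widehat{R}(\hat h) + \sqrt{\frac{|c(\hat h)|\ln 2 + \ln(1/\delta)}{2m}},

so the fewer bits the compressed network needs, the tighter the guarantee. A rough numerical sketch of how a bound of this form could be evaluated with an off-the-shelf compressor follows; the quantization step, error value, and sample size are all hypothetical placeholders.

    import gzip
    import math
    import numpy as np

    def occam_bound(train_error, compressed_bits, m, delta=0.05):
        # Occam-style bound: empirical error plus a term that grows with the
        # compressed description length and shrinks with the sample size m.
        return train_error + math.sqrt(
            (compressed_bits * math.log(2) + math.log(1.0 / delta)) / (2.0 * m)
        )

    # Hypothetical example: quantize weights to int8 and compress with gzip.
    weights = np.random.randn(100_000).astype(np.float32)  # stand-in for trained weights
    quantized = np.clip(np.round(weights / 0.05), -127, 127).astype(np.int8)
    compressed_bits = 8 * len(gzip.compress(quantized.tobytes()))

    # Hypothetical training error and training-set size.
    print(occam_bound(train_error=0.10, compressed_bits=compressed_bits, m=1_200_000))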

Related research

09/25/2019
Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
One of the biggest issues in deep learning theory is its generalization abil...

11/24/2022
PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
While there has been progress in developing non-vacuous generalization b...

06/15/2021
Compression Implies Generalization
Explaining the surprising generalization performance of deep neural netw...

01/14/2020
Understanding Generalization in Deep Learning via Tensor Methods
Deep neural networks generalize well on unseen data though the number of...

01/06/2022
Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets
In this paper we propose to study generalization of neural networks on s...

10/16/2020
Failures of model-dependent generalization bounds for least-norm interpolation
We consider bounds on the generalization performance of the least-norm l...

02/21/2019
Convolutional Analysis Operator Learning: Dependence on Training Data
Convolutional analysis operator learning (CAOL) enables the unsupervised...
