Do Compressed Representations Generalize Better?

09/20/2019
by Hassan Hafez-Kolahi, et al.

One of the most studied problems in machine learning is finding reasonable constraints that guarantee the generalization of a learning algorithm. These constraints are usually expressed as simplicity assumptions on the target. For instance, in Vapnik-Chervonenkis (VC) theory the space of possible hypotheses is assumed to have a limited VC dimension. In this paper, a constraint on the entropy H(X) of the input variable X is studied as a simplicity assumption. It is proven that the sample complexity needed to achieve an ϵ-δ Probably Approximately Correct (PAC) hypothesis is bounded by (2^⌈2H(X)/ϵ⌉ + log(1/δ)) / ϵ², which is sharp up to the 1/ϵ² factor. Moreover, it is shown that if a feature learning process is employed to learn the compressed representation from the dataset, this bound no longer holds. These findings have important implications for the Information Bottleneck (IB) theory, which has been used to explain the generalization power of Deep Neural Networks (DNNs), although its applicability for this purpose is currently under debate among researchers. In particular, the result gives a rigorous proof of the earlier heuristic argument that compressed representations are exponentially easier to learn. However, our analysis pinpoints two factors that prevent the IB, in its current form, from being applicable to the study of neural networks. First, the sample complexity depends exponentially on 1/ϵ, which can dramatically inflate the bound in practical applications where ϵ is small. Second, arguments based on input compression are inherently insufficient to explain the generalization of methods such as DNNs, in which the features themselves are learned from the available data.
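The exponential dependence on 1/ϵ noted above can be made concrete with a short sketch. The snippet below (our own illustration, not from the paper; the function name and the example values H(X) = 4 bits and δ = 0.05 are assumptions) simply evaluates the quoted bound for a few values of ϵ:

```python
import math

def pac_sample_bound(entropy_bits: float, eps: float, delta: float) -> float:
    """Evaluate the bound (2^ceil(2*H(X)/eps) + log(1/delta)) / eps^2
    quoted in the abstract. Illustrative sketch only."""
    return (2 ** math.ceil(2 * entropy_bits / eps) + math.log(1 / delta)) / eps ** 2

# The 2^(2H(X)/eps) term dominates: shrinking eps blows up the exponent,
# so the bound grows far faster than the familiar 1/eps^2 rate.
for eps in (0.5, 0.25, 0.1):
    print(eps, pac_sample_bound(entropy_bits=4.0, eps=eps, delta=0.05))
```

Even at these modest (assumed) parameter values, the 2^⌈2H(X)/ϵ⌉ term quickly dwarfs the log(1/δ) term, which is why the bound becomes impractically large when ϵ is small.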

Related research

11/05/2018 · Generalization Bounds for Neural Networks: Kernels, Symmetry, and Sample Compression
Though Deep Neural Networks (DNNs) are widely celebrated for their pract...

05/24/2020 · Proper Learning, Helly Number, and an Optimal SVM Bound
The classical PAC sample complexity bounds are stated for any Empirical ...

05/22/2018 · Deep learning generalizes because the parameter-function map is biased towards simple functions
Deep neural networks generalize remarkably well without explicit regular...

05/11/2023 · Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
One of the central questions in the theory of deep learning is to unders...

10/15/2019 · REVE: Regularizing Deep Learning with Variational Entropy Bound
Studies on generalization performance of machine learning algorithms und...

03/28/2023 · Learnability, Sample Complexity, and Hypothesis Class Complexity for Regression Models
The goal of a learning algorithm is to receive a training data set as in...

02/21/2018 · Generalization in Machine Learning via Analytical Learning Theory
This paper introduces a novel measure-theoretic learning theory to analy...
