Deep learning generalizes because the parameter-function map is biased towards simple functions
Deep neural networks generalize remarkably well without explicit regularization, even in the strongly over-parametrized regime. This success suggests that some form of implicit regularization must be at work. By applying a modified version of the coding theorem from algorithmic information theory, and by performing extensive empirical analysis of random neural networks, we argue that the parameter-function map of deep neural networks is exponentially biased towards functions with lower descriptional complexity. We show explicitly, for supervised learning of Boolean functions, that this intrinsic simplicity bias leads deep neural networks to generalize significantly better than an unbiased learning algorithm would. The superior generalization due to simplicity bias can be explained using PAC-Bayes theory, which yields useful generalization error bounds for learning Boolean functions across a wide range of complexities. Finally, we provide evidence that deeper neural networks trained on the CIFAR-10 dataset exhibit a stronger simplicity bias than shallow networks, which may help explain why deeper networks generalize better.
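The empirical analysis the abstract refers to can be illustrated with a small experiment: sample i.i.d. Gaussian parameters for a fixed architecture, read off the Boolean function the network computes on all inputs, and compare how often each function appears against a complexity proxy. The sketch below is a minimal, hypothetical version of that setup; the 7-input ReLU architecture, the Gaussian parameter scales, the sample count, and the Lempel-Ziv phrase count used as the complexity measure are illustrative assumptions, not the paper's exact settings.

```python
import itertools
import numpy as np

def lz_complexity(bits: str) -> int:
    # LZ78-style phrase count: a crude, computable proxy for the
    # descriptional (Kolmogorov) complexity of a binary string.
    phrases, w = set(), ""
    for c in bits:
        w += c
        if w not in phrases:
            phrases.add(w)
            w = ""
    return len(phrases) + (1 if w else 0)  # count a trailing partial phrase

def random_boolean_function(X, widths=(7, 40, 40, 1),
                            sigma_w=1.0, sigma_b=1.0, rng=None):
    # One forward pass of a fully connected ReLU net with i.i.d. Gaussian
    # parameters; thresholding the output yields a Boolean function,
    # encoded as a bit string over all rows of X.
    if rng is None:
        rng = np.random.default_rng()
    h = X
    for layer, (fan_in, fan_out) in enumerate(zip(widths[:-1], widths[1:])):
        W = rng.normal(0.0, sigma_w / np.sqrt(fan_in), (fan_in, fan_out))
        b = rng.normal(0.0, sigma_b, fan_out)
        h = h @ W + b
        if layer < len(widths) - 2:  # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return "".join(map(str, (h.ravel() > 0).astype(int)))

# All 2^7 = 128 Boolean inputs, so each parameter draw defines a 128-bit string.
X = np.array(list(itertools.product([0, 1], repeat=7)), dtype=float)

rng = np.random.default_rng(0)
counts = {}
for _ in range(10_000):
    f = random_boolean_function(X, rng=rng)
    counts[f] = counts.get(f, 0) + 1

# Simplicity bias predicts that the most frequently sampled functions
# should have low Lempel-Ziv complexity.
for f, n in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
    print(f"P(f) ~ {n / 10_000:.4f}   LZ complexity = {lz_complexity(f)}")
```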
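For the PAC-Bayes connection, a standard realizable-case bound of the Langford-Seeger/McAllester type makes the link between bias and generalization explicit; the version below is the generic form, and the paper's exact statement and constants may differ. With probability at least $1 - \delta$ over a training sample of size $m$, the expected error of the Gibbs classifier $Q$ (the prior $P$ restricted to the set $U$ of functions consistent with the training data) satisfies

```latex
\mathbb{E}_{f \sim Q}\big[\epsilon(f)\big]
  \;\le\; 1 - \exp\!\left( - \frac{\ln \tfrac{1}{P(U)} + \ln \tfrac{2m}{\delta}}{m - 1} \right),
\qquad
P(U) \;=\; \sum_{f \in U} P(f).
```

If the parameter-function map concentrates prior mass on simple functions, then $P(U)$ is large whenever the target function is simple, and the bound is correspondingly tight; a prior that is uniform over all $2^{2^n}$ Boolean functions on $n$ inputs instead makes $\ln(1/P(U))$ of order $2^n$ and the bound vacuous.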