On the geometry of generalization and memorization in deep neural networks

by Cory Stephenson, et al.

Understanding how large neural networks avoid memorizing training data is key to explaining their strong generalization performance. To examine when and where memorization occurs in a deep network, we use a recently developed replica-based mean-field theoretic geometric analysis method. We find that all layers preferentially learn from examples that share features, and we link this behavior to generalization performance. Memorization occurs predominantly in the deeper layers, driven by decreasing radius and dimension of the object manifolds, whereas the early layers are minimally affected. This predicts that generalization can be restored by reverting the final few layers' weights to an earlier epoch, before significant memorization occurred, which our experiments confirm. Additionally, by studying generalization across different model sizes, we reveal a connection between the double descent phenomenon and the underlying model geometry. Finally, a theoretical analysis shows that networks avoid memorization early in training because, close to initialization, the gradient contribution from permuted examples is small. These findings provide quantitative evidence for the structure of memorization across the layers of a deep neural network, the drivers of this structure, and its connection to manifold geometric properties.
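The abstract's layer-rewinding experiment (reverting the final few layers' weights to an earlier epoch while keeping the early layers at their final values) can be illustrated with a minimal sketch. This is not the paper's code; the function name and the toy checkpoint dictionaries below are hypothetical, and real checkpoints would hold weight tensors rather than scalars.

```python
def rewind_final_layers(current, checkpoint, num_final_layers):
    """Return a weight dict where the last `num_final_layers` layers are
    taken from an earlier checkpoint and all other layers keep their
    current (end-of-training) values.

    Assumes the dicts share keys and that insertion order reflects depth
    (earliest layer first), as in typical framework state dicts.
    """
    layer_names = list(current)
    rewound = dict(current)
    for name in layer_names[-num_final_layers:]:
        rewound[name] = checkpoint[name]
    return rewound


# Toy example: a 4-layer network; scalars stand in for weight tensors.
final_epoch = {"conv1": 1.0, "conv2": 2.0, "fc1": 3.0, "fc2": 4.0}
early_epoch = {"conv1": 0.9, "conv2": 1.8, "fc1": 2.5, "fc2": 3.1}

rewound = rewind_final_layers(final_epoch, early_epoch, num_final_layers=2)
# Early layers keep their final weights; the last two revert to the
# early epoch, before (per the paper's finding) memorization set in.
print(rewound)  # {'conv1': 1.0, 'conv2': 2.0, 'fc1': 2.5, 'fc2': 3.1}
```

Evaluating test accuracy of the rewound network against the fully trained one is what would confirm (or refute) that memorization is localized in the deeper layers.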




Related research:

- With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization — "Generalization of deep neural networks remains one of the main open prob..."
- Limitations of Neural Collapse for Understanding Generalization in Deep Learning — "The recent work of Papyan, Han, Donoho (2020) presented an intriguin..."
- Can Neural Network Memorization Be Localized? — "Recent efforts at explaining the interplay of memorization and generaliz..."
- Are All Layers Created Equal? — "Understanding learning and generalization of deep architectures has been..."
- On the Origins of the Block Structure Phenomenon in Neural Network Representations — "Recent work has uncovered a striking phenomenon in large-capacity neural..."
- Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model — "Neural collapse (NC) refers to the surprising structure of the last laye..."
- Why does CTC result in peaky behavior? — "The peaky behavior of CTC models is well known experimentally. However, ..."
