Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff

05/30/2023
by Arthur Jacot, et al.

Previous work has shown that DNNs with large depth L and L_2-regularization are biased towards learning low-dimensional representations of the inputs, which can be interpreted as minimizing a notion of rank R^(0)(f) of the learned function f, conjectured to be the Bottleneck rank. We compute finite-depth corrections to this result, revealing a measure R^(1) of regularity which bounds the pseudo-determinant of the Jacobian |Jf(x)|_+ and is subadditive under composition and addition. This formalizes a balance between learning low-dimensional representations and minimizing complexity/irregularity in the feature maps, allowing the network to learn the 'right' inner dimension. We also show how large learning rates control the regularity of the learned function. Finally, we use these theoretical tools to prove the conjectured bottleneck structure in the learned features as L→∞: for large depths, almost all hidden representations concentrate around R^(0)(f)-dimensional representations. These limiting low-dimensional representations can be described using the second correction R^(2).
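To make the quantity bounded by R^(1) concrete, the following is a minimal sketch (not the authors' code) of how one might evaluate the Jacobian pseudo-determinant |Jf(x)|_+ of a small network at a single input, i.e. the product of the nonzero singular values of Jf(x). The MLP architecture, initialization, and singular-value tolerance are illustrative assumptions, not taken from the paper.

    # Sketch: estimating |Jf(x)|_+ for a toy MLP with JAX.
    # Architecture, init, and tolerance are illustrative assumptions.
    import jax
    import jax.numpy as jnp

    def mlp(params, x):
        # Fully connected network with ReLU hidden layers.
        for W, b in params[:-1]:
            x = jax.nn.relu(W @ x + b)
        W, b = params[-1]
        return W @ x + b

    def pseudo_det_jacobian(params, x, tol=1e-6):
        # |Jf(x)|_+ = product of the singular values of Jf(x) above tol.
        J = jax.jacobian(lambda z: mlp(params, z))(x)
        s = jnp.linalg.svd(J, compute_uv=False)
        return jnp.prod(jnp.where(s > tol, s, 1.0))

    # Example: random network mapping R^5 -> R^5.
    key = jax.random.PRNGKey(0)
    dims = [5, 32, 32, 5]
    params = []
    for i in range(len(dims) - 1):
        key, k1, _ = jax.random.split(key, 3)
        W = jax.random.normal(k1, (dims[i + 1], dims[i])) / jnp.sqrt(dims[i])
        params.append((W, jnp.zeros(dims[i + 1])))

    x = jax.random.normal(key, (dims[0],))
    print(pseudo_det_jacobian(params, x))

The log of this pseudo-determinant is what the regularity measure R^(1) controls; a function with a genuinely low-dimensional bottleneck has a rank-deficient Jacobian, so only the nonzero singular values enter the product.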
