Feature Learning in L_2-regularized DNNs: Attraction/Repulsion and Sparsity

by Arthur Jacot, et al.

We study the loss surface of DNNs with L_2 regularization. We show that the loss in terms of the parameters can be reformulated as a loss in terms of the layerwise activations Z_ℓ of the training set. This reformulation reveals the dynamics behind feature learning: each hidden representation Z_ℓ is optimal w.r.t. an attraction/repulsion problem and interpolates between the input and output representations, retaining only as much information from the input as is necessary to construct the activations of the next layer. For positively homogeneous non-linearities, the loss can be further reformulated in terms of the covariances of the hidden representations, which takes the form of a partially convex optimization over a convex cone. This second reformulation allows us to prove a sparsity result for homogeneous DNNs: any local minimum of the L_2-regularized loss can be achieved with at most N(N+1) neurons in each hidden layer (where N is the size of the training set). We show that this bound is tight by giving an example of a local minimum that requires N^2/4 hidden neurons. But we also observe numerically that in more traditional settings far fewer than N^2 neurons are required to reach the minima.
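The sparsity claim can be probed empirically. Below is a minimal sketch (not the paper's construction) that trains a small two-layer ReLU network with L_2 regularization by plain gradient descent on a tiny dataset, then counts the hidden neurons whose weights remain non-negligible and compares that count to the N(N+1) bound. The architecture, dataset size, learning rate, and pruning threshold are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup: N training points, input dimension d, m hidden neurons.
rng = np.random.default_rng(0)
N, d, m = 8, 3, 64
X = rng.standard_normal((d, N))          # training inputs (columns)
y = rng.standard_normal((1, N))          # training targets
W1 = rng.standard_normal((m, d)) / np.sqrt(d)
W2 = rng.standard_normal((1, m)) / np.sqrt(m)
lam, lr, steps = 1e-2, 0.05, 3000        # L_2 strength, step size (assumptions)

def loss(W1, W2):
    A = np.maximum(W1 @ X, 0.0)          # ReLU activations Z_1
    Yhat = W2 @ A
    mse = 0.5 / N * np.sum((Yhat - y) ** 2)
    return mse + lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

loss_start = loss(W1, W2)
for _ in range(steps):
    H = W1 @ X
    A = np.maximum(H, 0.0)
    Yhat = W2 @ A
    dY = (Yhat - y) / N                  # gradient of the MSE term
    dW2 = dY @ A.T + 2 * lam * W2        # + gradient of the L_2 term
    dH = (W2.T @ dY) * (H > 0)           # backprop through ReLU
    dW1 = dH @ X.T + 2 * lam * W1
    W1 -= lr * dW1
    W2 -= lr * dW2
loss_end = loss(W1, W2)

# A neuron is "active" if both its incoming and outgoing weights survive
# the regularization decay (threshold is an arbitrary cutoff).
neuron_scale = np.linalg.norm(W1, axis=1) * np.abs(W2[0])
active = int(np.sum(neuron_scale > 1e-3))
print(f"active neurons: {active} / {m},  N(N+1) bound: {N * (N + 1)}")
```

With the regularizer pulling unused (e.g. never-activated) neurons toward zero, the count of surviving neurons typically falls well below both m and N(N+1), consistent with the paper's numerical observation that far fewer than N^2 neurons are needed in practice.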



Deep linear neural networks with arbitrary loss: All local minima are global

We consider deep linear networks with arbitrary differentiable loss. We ...

On Feature Learning in Neural Networks with Global Convergence Guarantees

We study the optimization of wide neural networks (NNs) via gradient flo...

No bad local minima: Data independent training error guarantees for multilayer neural networks

We use smoothed analysis techniques to provide guarantees on the trainin...

On the Generalization Power of the Overfitted Three-Layer Neural Tangent Kernel Model

In this paper, we study the generalization performance of overparameteri...

Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks

We develop exact representations of two layer neural networks with recti...

Exponentially vanishing sub-optimal local minima in multilayer neural networks

Background: Statistical mechanics results (Dauphin et al. (2014); Chorom...

Engineering Monosemanticity in Toy Models

In some neural networks, individual neurons correspond to natural “featu...
