Balancedness and Alignment are Unlikely in Linear Neural Networks
We study the invariance properties of alignment in linear neural networks trained by gradient descent. Alignment of adjacent weight matrices is a form of implicit regularization, and prior work has studied this phenomenon in fully connected networks with one-dimensional outputs. For such networks, we prove that there exists an initialization under which adjacent layers remain aligned throughout training for any real-valued loss function. We then define alignment for fully connected networks with multidimensional outputs and prove that it generally cannot be an invariant of training under the squared loss; moreover, we characterize the datasets for which alignment is possible. Finally, we analyze networks with layer constraints, such as convolutional networks. For these networks we prove that gradient descent is equivalent to projected gradient descent, and we show that alignment is impossible given sufficiently large datasets. Importantly, since our definition of alignment is a relaxation of balancedness, our negative results carry over to balancedness as well.
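For context, here is a minimal sketch of the quantities involved, stated in the standard conventions of the gradient-flow literature on deep linear networks; the precise definitions below are assumptions for illustration and may differ in detail from those in the paper. A depth-$L$ linear network computes $f(x) = W_L W_{L-1} \cdots W_1 x$, and adjacent layers $W_j$, $W_{j+1}$ are called balanced when

\[ W_{j+1}^\top W_{j+1} = W_j W_j^\top . \]

Under gradient flow $\dot{W}_j = -\nabla_{W_j} L$, for any differentiable loss $L$ of the end-to-end product $W_L \cdots W_1$, the gap between these Gram matrices is conserved,

\[ \frac{d}{dt}\bigl( W_{j+1}^\top W_{j+1} - W_j W_j^\top \bigr) = 0 , \]

so balanced initializations remain balanced. Alignment, as a relaxation of balancedness, asks only that adjacent layers share singular directions (the left singular vectors of $W_j$ match the right singular vectors of $W_{j+1}$) without requiring matched singular values; the results above concern when this weaker property can be preserved by gradient descent.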