Principal Component Networks: Parameter Reduction Early in Training

by Roger Waleffe, et al.

Recent work has shown that overparameterized networks contain small subnetworks that match the accuracy of the full model when trained in isolation. These results highlight the potential to reduce the training cost of deep neural networks without sacrificing generalization performance. However, existing approaches for finding these small networks rely on expensive multi-round train-and-prune procedures and are impractical for large datasets and models. In this paper, we show how to find small networks that exhibit the same performance as their overparameterized counterparts after only a few training epochs. We find that hidden-layer activations in overparameterized networks exist primarily in subspaces smaller than the actual model width. Building on this observation, we use PCA to find a high-variance basis for layer inputs and represent layer weights using these directions. We eliminate all weights not relevant to the found PCA basis and term the resulting architectures Principal Component Networks (PCNs). On CIFAR-10 and ImageNet, we show that PCNs train faster and use less energy than overparameterized models, without accuracy loss. Our transformation yields networks with up to 23.8x fewer parameters and equal or higher end-model accuracy; in some cases we observe accuracy improvements of up to 3% over ResNet-110 networks while training faster.
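The core transformation the abstract describes can be illustrated on a single fully connected layer: if the layer's inputs lie mostly in a low-dimensional subspace, PCA on those inputs gives a small basis, and projecting the weight matrix onto that basis shrinks the layer without changing its outputs. The sketch below is an illustrative toy example, not the paper's implementation; all variable names and the synthetic rank-16 data are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 256-dim inputs, 64 outputs. The inputs are generated to lie
# in a 16-dim subspace, mimicking low-rank hidden activations (assumption
# for illustration, not the paper's setup).
n, d_in, d_out, k = 1000, 256, 64, 16
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d_in))  # layer inputs
W = rng.normal(size=(d_in, d_out))                        # layer weights
b = rng.normal(size=d_out)                                # layer bias

# PCA on the layer inputs: the top-k right singular vectors of the
# centered data are the k highest-variance directions.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
U_k = Vt[:k].T                                            # (d_in, k) basis

# Re-express the layer in the PCA basis: weights shrink from
# (d_in, d_out) to (k, d_out); the input mean folds into the bias.
W_small = U_k.T @ W
b_small = b + mu @ W

# Because the inputs really do live in the retained subspace, the
# compressed layer reproduces the original outputs.
Y_full = X @ W + b
Y_small = (X - mu) @ U_k @ W_small + b_small
print(np.allclose(Y_full, Y_small))  # True
```

When the activations are only approximately low-rank, the equality becomes an approximation whose error is governed by the variance in the discarded directions, which is why the paper selects the basis by explained variance.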




