Data Parallelism in Training Sparse Neural Networks

03/25/2020
by Namhoon Lee, et al.

Network pruning is an effective methodology for compressing large neural networks, and the sparse neural networks obtained by pruning benefit from reduced memory and computational costs at deployment. Notably, recent advances have shown that it is possible to find a trainable sparse neural network at random initialization, prior to training; the obtained sparse network then only needs to be trained. While this approach of pruning at initialization has turned out to be highly effective, little has been studied about the training behavior of these sparse neural networks. In this work, we focus on measuring the effects of data parallelism on training sparse neural networks. We find that data parallelism in training sparse neural networks is no worse than in training densely parameterized neural networks, despite the general difficulty of training sparse networks. Moreover, when training sparse networks using SGD with momentum, the breakdown of the perfect scaling regime occurs at much larger batch sizes than it does for dense networks.
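The measurement protocol described above can be summarized in code. The sketch below (not the authors' implementation) prunes a network at initialization, trains it with SGD plus momentum at several batch sizes, and records how many steps it takes to reach a target loss; in the perfect scaling regime, doubling the batch size roughly halves that step count. The random mask, the tiny MLP, the synthetic two-class data, and the loss threshold are illustrative assumptions; the paper uses a pruning-at-initialization criterion and standard image benchmarks.

```python
import torch
import torch.nn as nn


def random_prune_at_init(model: nn.Module, sparsity: float = 0.9):
    """Attach a fixed random binary mask to every weight matrix.
    (Assumption: a stand-in for the paper's pruning-at-init criterion.)"""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune weight matrices only, keep biases dense
            mask = (torch.rand_like(p) > sparsity).float()
            p.data.mul_(mask)
            masks[name] = mask
    return masks


def steps_to_target(batch_size: int, target_loss: float = 0.3,
                    lr: float = 0.1, momentum: float = 0.9,
                    max_steps: int = 5000) -> int:
    torch.manual_seed(0)
    # Synthetic 2-class problem so the sketch runs without downloading data.
    X = torch.randn(4096, 20)
    y = (X[:, 0] > 0).long()
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    masks = random_prune_at_init(model, sparsity=0.9)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(1, max_steps + 1):
        idx = torch.randint(0, X.size(0), (batch_size,))
        loss = loss_fn(model(X[idx]), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Re-apply the masks so pruned weights stay exactly zero.
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
        if loss.item() < target_loss:
            return step
    return max_steps


if __name__ == "__main__":
    # Sweep batch sizes and report steps-to-target for each.
    for bs in (32, 64, 128, 256, 512):
        print(bs, steps_to_target(bs))
```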


