Sparse Training via Boosting Pruning Plasticity with Neuroregeneration

06/19/2021
by Shiwei Liu, et al.

Work on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) has recently drawn considerable attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization). The former method suffers from an extremely large computational cost, while the latter category of methods usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys training/inference efficiency and comparable performance, has so far been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in the literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., regenerating the same number of connections as were pruned. Based on the insights from pruning plasticity, we design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), and its dynamic sparse training (DST) variant (GraNet-ST). Both of them advance the state of the art. Perhaps most impressively, the latter for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods by a large margin with ResNet-50 on ImageNet. We will release all code.
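The core prune-and-regenerate step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation: magnitude pruning removes the smallest-magnitude active weights, and "zero-cost neuroregeneration" re-activates the same number of inactive connections, here chosen by gradient magnitude (a common growth criterion in dynamic sparse training); the function name and all details are assumptions for illustration.

```python
import numpy as np

def prune_and_regenerate(weights, grads, prune_frac):
    """Hypothetical sketch of one during-training update: prune the
    prune_frac smallest-magnitude active weights, then regenerate an
    equal number of connections at the inactive positions with the
    largest gradient magnitude, keeping the sparsity level constant."""
    mask = weights != 0
    n_active = int(mask.sum())
    n_prune = int(prune_frac * n_active)
    if n_prune == 0:
        return weights, mask

    # Prune: zero out the n_prune smallest-magnitude active weights.
    active_idx = np.flatnonzero(mask)
    order = np.argsort(np.abs(weights.flat[active_idx]))
    drop = active_idx[order[:n_prune]]
    weights.flat[drop] = 0.0
    mask.flat[drop] = False

    # Regenerate ("zero-cost"): re-activate the same number of inactive
    # positions, picking those with the largest gradient magnitude.
    inactive_idx = np.flatnonzero(~mask)
    grow_order = np.argsort(-np.abs(grads.flat[inactive_idx]))
    grow = inactive_idx[grow_order[:n_prune]]
    mask.flat[grow] = True
    # Newly grown weights stay at zero and are learned by later updates.
    return weights, mask
```

Because exactly as many connections are grown as were pruned, the overall sparsity of the layer is unchanged after each step, which is what allows such a method to retain training/inference efficiency.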


research
03/26/2023

Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis

In this paper, we explore the performance of different pruning methods i...
research
06/21/2023

Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training

Dynamic Sparse Training (DST) is a rapidly evolving area of research tha...
research
10/22/2021

When to Prune? A Policy towards Early Structural Pruning

Pruning enables appealing reductions in network memory footprint and tim...
research
10/22/2020

PHEW: Paths with higher edge-weights give "winning tickets" without training data

Sparse neural networks have generated substantial interest recently beca...
research
03/11/2021

Emerging Paradigms of Neural Network Pruning

Over-parameterization of neural networks benefits the optimization and g...
research
07/16/2020

Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning

We analyse the pruning procedure behind the lottery ticket hypothesis ar...
research
09/13/2022

One-shot Network Pruning at Initialization with Discriminative Image Patches

One-shot Network Pruning at Initialization (OPaI) is an effective method...
