Neural network relief: a pruning algorithm based on neural activity

09/22/2021
by Aleksandr Dekhovich et al.

Current deep neural networks (DNNs) are overparameterized and use most of their neuronal connections during inference for every task. The human brain, by contrast, developed specialized regions for different tasks and performs inference with only a small fraction of its neuronal connections. We propose an iterative pruning strategy based on a simple importance-score metric that deactivates unimportant connections, tackling overparameterization in DNNs and modulating their firing patterns. The aim is to find the smallest set of connections that can still solve a given task with comparable accuracy, i.e., a simpler subnetwork. We achieve comparable performance for LeNet architectures on MNIST, and significantly higher parameter compression than state-of-the-art algorithms for VGG and ResNet architectures on CIFAR-10/100 and Tiny-ImageNet. Our approach also performs well with both optimizers considered, Adam and SGD. The algorithm is not designed to minimize FLOPs on current hardware and software implementations, although it performs reasonably when compared to the state of the art.
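The core idea, iteratively deactivating the connections that contribute least to each neuron's activity, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact method: the importance score here (the average contribution |w·a| of each incoming connection, normalized per output neuron) and the cumulative-importance threshold `alpha` are assumptions for the sake of the example.

```python
import numpy as np

def importance_scores(weights, activations):
    """Hypothetical activity-based importance score.

    weights:     (n_in, n_out) layer weight matrix
    activations: (batch, n_in) inputs to the layer

    Returns per-connection scores normalized so that the incoming
    scores of each output neuron sum to 1.
    """
    # Average magnitude of each connection's contribution over the batch.
    contrib = np.abs(activations[:, :, None] * weights[None, :, :])  # (B, n_in, n_out)
    mean_contrib = contrib.mean(axis=0)                              # (n_in, n_out)
    return mean_contrib / (mean_contrib.sum(axis=0, keepdims=True) + 1e-12)

def prune_layer(weights, activations, alpha=0.95):
    """Keep, per output neuron, the smallest set of incoming connections
    whose cumulative importance reaches alpha; zero out the rest."""
    scores = importance_scores(weights, activations)
    mask = np.zeros_like(weights, dtype=bool)
    for j in range(weights.shape[1]):
        order = np.argsort(scores[:, j])[::-1]       # most important first
        cum = np.cumsum(scores[order, j])
        k = int(np.searchsorted(cum, alpha)) + 1     # connections needed to reach alpha
        mask[order[:k], j] = True
    return weights * mask, mask
```

In an iterative scheme, one would alternate this pruning step with a few epochs of retraining on the surviving connections, repeating until accuracy degrades.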


Related research:

- Hardware-aware Pruning of DNNs using LFSR-Generated Pseudo-Random Indices (11/09/2019)
- Continual Prune-and-Select: Class-incremental learning with specialized subnetworks (08/09/2022)
- Permute to Train: A New Dimension to Training Deep Neural Networks (03/05/2020)
- Partitioning sparse deep neural networks for scalable training and inference (04/23/2021)
- Learning to Prune Deep Neural Networks via Reinforcement Learning (07/09/2020)
- MCMIA: Model Compression Against Membership Inference Attack in Deep Neural Networks (08/28/2020)
- Towards Efficient Neural Networks On-a-chip: Joint Hardware-Algorithm Approaches (05/28/2019)
