To filter prune, or to layer prune, that is the question

07/11/2020
by Sara Elkerdawy, et al.

Recent advances in neural network pruning have made it possible to remove a large number of filters or weights without any perceptible drop in accuracy. The number of parameters and the number of FLOPs are the metrics usually reported to measure the quality of pruned models. However, the speedup these pruned models actually deliver is often overlooked in the literature due to the complex nature of latency measurements. In this paper, we show the limitation of filter pruning methods in terms of latency reduction and propose the LayerPrune framework. LayerPrune presents a set of layer pruning methods based on different criteria that achieve higher latency reduction than filter pruning methods at similar accuracy. The advantage of layer pruning over filter pruning in terms of latency reduction stems from the fact that the former is not constrained by the original model's depth and thus allows for a larger range of latency reduction. For each filter pruning method we examine, we use the same filter importance criterion to calculate a per-layer importance score in one shot. We then prune the least important layers and fine-tune the shallower model, which obtains comparable or better accuracy than its filter-pruning counterpart. This one-shot process also allows layers to be removed from single-path networks such as VGG before fine-tuning, unlike iterative filter pruning, where a minimum number of filters per layer is required to preserve data flow, which constrains the search space. To the best of our knowledge, we are the first to examine the effect of pruning methods on the latency metric instead of FLOPs across multiple networks, datasets and hardware targets. LayerPrune also outperforms handcrafted architectures such as Shufflenet, MobileNet, MNASNet and ResNet18 by up to 7.3% at a comparable latency budget on the ImageNet dataset.
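The one-shot layer ranking described in the abstract can be illustrated with a short sketch. This is not the authors' code: it assumes the mean per-filter L1 norm as the importance criterion (the paper supports several criteria), and the helper names layer_importance_scores and least_important_layers are hypothetical.

import torch.nn as nn
from torchvision.models import vgg16

def layer_importance_scores(model):
    # Aggregate a per-filter criterion (here the L1 norm of each filter's
    # weights) into a single importance score per convolutional layer.
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            per_filter = module.weight.detach().abs().flatten(1).sum(dim=1)
            scores[name] = per_filter.mean().item()
    return scores

def least_important_layers(model, num_layers_to_prune=2):
    # Rank layers by their one-shot score and return the lowest-scoring ones.
    scores = layer_importance_scores(model)
    ranked = sorted(scores.items(), key=lambda kv: kv[1])
    return [name for name, _ in ranked[:num_layers_to_prune]]

model = vgg16(weights=None)           # a single-path network, as in the abstract
print(least_important_layers(model))  # candidate layers to drop before fine-tuning

After removing the selected layers (with whatever shape adaptation the cut points require), the shallower model is fine-tuned, as described above.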
