Out-of-the-box channel pruned networks

by Ragav Venkatesan, et al.

In the last decade, convolutional neural networks have become gargantuan. Pre-trained models, when used as initializers, make it possible to fine-tune ever larger networks on small datasets. Consequently, not all of the convolutional features that these fine-tuned models detect are requisite for the end task. Several channel-pruning methods have been proposed to remove compute and memory from models that have already been trained. Typically, these involve policies that decide which and how many channels to remove from each layer, leading to channel-wise and layer-wise pruning profiles, respectively. In this paper, we conduct several baseline experiments and establish that profiles from random channel-wise pruning policies are as good as metric-based ones. We also establish that there may exist profiles from some layer-wise pruning policies that are measurably better than common baselines. We then demonstrate that the top layer-wise pruning profiles found using an exhaustive random search on one dataset are also among the top profiles for other datasets. This implies that we can identify out-of-the-box layer-wise pruning profiles using benchmark datasets and use them directly on new datasets. Furthermore, we develop a Reinforcement Learning (RL) policy-based search algorithm with the direct objective of finding transferable layer-wise pruning profiles, using many models of the same architecture. We use a novel reward formulation that drives this RL search towards an expected compression while maximizing accuracy. Our results show that our transferred RL-based profiles are as good as or better than the best profiles found on the original dataset via exhaustive search. Finally, we demonstrate that profiles found on a mid-sized dataset such as CIFAR-10/100 transfer even to a large dataset such as ImageNet.
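
To make the two technical ingredients above concrete: a layer-wise pruning profile is simply a per-layer keep ratio, and the finding that random channel-wise policies match metric-based ones means the channels within each layer can be chosen at random. The following is a minimal PyTorch sketch, not the authors' code; `random_channel_mask`, `apply_profile`, and the toy network are illustrative.

```python
# Minimal sketch: apply a layer-wise pruning profile by randomly masking
# output channels (filters). Illustrative only; a real implementation would
# rebuild layers with fewer filters instead of zero-masking them.
import torch
import torch.nn as nn

def random_channel_mask(num_channels: int, keep_ratio: float) -> torch.Tensor:
    """Randomly choose which output channels of a layer survive pruning."""
    n_keep = max(1, int(round(num_channels * keep_ratio)))
    keep_idx = torch.randperm(num_channels)[:n_keep]
    mask = torch.zeros(num_channels, dtype=torch.bool)
    mask[keep_idx] = True
    return mask

def apply_profile(conv_layers, profile):
    """Zero out pruned filters according to a layer-wise keep-ratio profile."""
    for conv, keep_ratio in zip(conv_layers, profile):
        mask = random_channel_mask(conv.out_channels, keep_ratio)
        with torch.no_grad():
            conv.weight[~mask] = 0.0  # prune whole filters at once
            if conv.bias is not None:
                conv.bias[~mask] = 0.0

# Example: a toy 3-layer stack with a profile keeping 80%/50%/30% of channels.
convs = [nn.Conv2d(3, 32, 3), nn.Conv2d(32, 64, 3), nn.Conv2d(64, 128, 3)]
apply_profile(convs, profile=[0.8, 0.5, 0.3])
```

The RL reward is described only as driving the search towards an expected compression while maximizing accuracy; the exact formulation is not given in the abstract. One plausible shape, with `beta` an assumed trade-off weight rather than a value from the paper, is a penalty on the deviation from a target compression ratio:

```python
# Hedged sketch of a compression-targeted reward; not the paper's formula.
def reward(accuracy: float, compression: float, target: float,
           beta: float = 5.0) -> float:
    """Reward high accuracy while penalizing distance from the target
    compression (fraction of parameters removed)."""
    return accuracy - beta * abs(compression - target)

print(reward(accuracy=0.91, compression=0.48, target=0.5))  # ~0.81
```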

