Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks

by Yochai Zur et al.

Recently, deep learning has become a de facto standard in machine learning, with convolutional neural networks (CNNs) demonstrating spectacular success on a wide variety of tasks. However, CNNs are typically very demanding computationally at inference time. One way to alleviate this burden on certain hardware platforms is quantization, which relies on low-precision arithmetic representations for the weights and the activations. Another popular method is pruning, which reduces the number of filters in each layer. While mainstream deep learning methods train the network weights while keeping the architecture fixed, emerging neural architecture search (NAS) techniques make the architecture itself amenable to training. In this paper, we formulate optimal arithmetic bit-length allocation and neural network pruning as a NAS problem, searching for configurations that satisfy a computational complexity budget while maximizing accuracy. We use a differentiable search method based on the continuous relaxation of the search space proposed by Liu et al. (arXiv:1806.09055). We show, by grid search, that heterogeneous quantized networks suffer from high variance, which renders the benefit of the search questionable. For pruning, improvement over homogeneous configurations is possible, but finding such configurations with the proposed method remains challenging. The code is publicly available.
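The continuous relaxation the abstract refers to can be illustrated with a small sketch. Assuming uniform symmetric quantization and a softmax over per-layer architecture parameters (both are illustrative assumptions, following the DARTS-style relaxation of Liu et al., not the paper's exact implementation), the effective weights of a layer become a softmax-weighted mixture of the layer quantized at each candidate bit width, so the bit-width choice is differentiable:

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight array to the given bit width."""
    levels = 2 ** (bits - 1) - 1            # e.g. 127 levels for 8 bits
    scale = np.max(np.abs(w)) / levels      # map the largest weight to the top level
    return np.round(w / scale) * scale

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def relaxed_quantize(w, alphas, candidate_bits=(2, 4, 8)):
    """DARTS-style continuous relaxation of the bit-width choice:
    the effective weights are a softmax(alphas)-weighted mixture of
    the layer quantized at each candidate bit width."""
    probs = softmax(alphas)
    return sum(p * quantize(w, b) for p, b in zip(probs, candidate_bits))

# Toy layer weights and architecture parameters that favor the 8-bit branch.
w = np.array([0.9, -0.4, 0.05, -0.7])
alphas = np.array([0.0, 0.0, 3.0])
w_eff = relaxed_quantize(w, alphas)
```

After the search, the relaxation is discretized by taking the arg-max of `alphas` per layer, which yields the heterogeneous bit-width assignment whose variance the paper studies.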



