Extreme Network Compression via Filter Group Approximation

by Bo Peng, et al.

In this paper we propose a novel decomposition method based on filter group approximation, which can significantly reduce the redundancy of deep convolutional neural networks (CNNs) while preserving the majority of the feature representation. Unlike other low-rank decomposition algorithms, which operate on the spatial or channel dimension of filters, our method focuses on exploiting the filter group structure of each layer. For several commonly used CNN models, including VGG and ResNet, our method reduces over 80% of floating-point operations (FLOPs) with a smaller accuracy drop than state-of-the-art methods on various image classification datasets. Moreover, experiments demonstrate that our method helps alleviate degeneracy in the compressed network, which would otherwise hurt its convergence and performance.
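As a rough illustration of where an 80%+ FLOP saving can come from, the sketch below compares the multiply-accumulate count of a standard 3x3 convolution against a grouped 3x3 convolution followed by a 1x1 convolution, a common shape for group-structured decompositions. The layer sizes and group count here are illustrative assumptions, not values taken from the paper:

```python
def conv_flops(h, w, c_in, c_out, k, groups=1):
    # Multiply-accumulate count of a conv layer with a square k x k kernel.
    # With `groups` groups, each output channel only sees c_in // groups inputs.
    return h * w * (c_in // groups) * c_out * k * k

# Hypothetical layer: 256 -> 256 channels on a 28x28 feature map (assumed sizes)
h = w = 28
standard = conv_flops(h, w, 256, 256, 3)

# Approximation: grouped 3x3 conv (16 groups) + 1x1 conv to mix channels
approx = conv_flops(h, w, 256, 256, 3, groups=16) + conv_flops(h, w, 256, 256, 1)

reduction = 1 - approx / standard
print(f"FLOP reduction: {reduction:.1%}")  # prints "FLOP reduction: 82.6%"
```

For these (assumed) sizes the grouped decomposition removes roughly 82.6% of the FLOPs, consistent in magnitude with the 80%+ reductions reported in the abstract; the actual decomposition and group choices in the paper are more involved.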


