Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution

04/10/2019
by   Yunpeng Chen, et al.

In natural images, information is conveyed at different frequencies, where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures. Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies. In this work, we propose to factorize the mixed feature maps by their frequencies and design a novel Octave Convolution (OctConv) operation to store and process feature maps that vary spatially "slower" at a lower spatial resolution, reducing both memory and computation cost. Unlike existing multi-scale methods, OctConv is formulated as a single, generic, plug-and-play convolutional unit that can be used as a direct replacement of (vanilla) convolutions without any adjustments in the network architecture. It is also orthogonal and complementary to methods that suggest better topologies or reduce channel-wise redundancy, like group or depth-wise convolutions. We experimentally show that by simply replacing convolutions with OctConv, we can consistently boost accuracy for both image and video recognition tasks, while reducing memory and computational cost. An OctConv-equipped ResNet-152 can achieve 82.9% top-1 classification accuracy on ImageNet with merely 22.2 GFLOPs.
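The core operation is straightforward to sketch. Below is a minimal, illustrative PyTorch implementation of a single OctConv layer, following the abstract's description: a ratio alpha splits the channels into a high-frequency group stored at full resolution and a low-frequency group stored at half resolution, with four convolution paths exchanging information between the two groups. The class name OctConv2d, the default alpha=0.5, and the use of average pooling and nearest-neighbor upsampling for resolution changes are assumptions for illustration, not the authors' reference code.

```python
# Illustrative sketch of an Octave Convolution layer (not the official
# implementation). Channels are split by `alpha` into a high-frequency part
# at full resolution and a low-frequency part at half resolution; four
# convolutions cover the high->high, high->low, low->high, and low->low paths.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5, padding=1):
        super().__init__()
        self.in_lo = int(alpha * in_ch)    # low-frequency input channels
        self.in_hi = in_ch - self.in_lo
        self.out_lo = int(alpha * out_ch)  # low-frequency output channels
        self.out_hi = out_ch - self.out_lo
        # Four intra-/inter-frequency convolution paths.
        self.hh = nn.Conv2d(self.in_hi, self.out_hi, kernel_size, padding=padding)
        self.hl = nn.Conv2d(self.in_hi, self.out_lo, kernel_size, padding=padding)
        self.lh = nn.Conv2d(self.in_lo, self.out_hi, kernel_size, padding=padding)
        self.ll = nn.Conv2d(self.in_lo, self.out_lo, kernel_size, padding=padding)
        self.pool = nn.AvgPool2d(2)        # downsample high -> low resolution

    def forward(self, x_hi, x_lo):
        # x_hi: (N, in_hi, H, W); x_lo: (N, in_lo, H/2, W/2)
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2,
                                             mode="nearest")
        y_lo = self.ll(x_lo) + self.hl(self.pool(x_hi))
        return y_hi, y_lo

# Toy usage: a 64-channel feature map split 50/50 between frequencies.
oct_conv = OctConv2d(64, 64, alpha=0.5)
x_hi = torch.randn(1, 32, 32, 32)  # high-frequency half, full resolution
x_lo = torch.randn(1, 32, 16, 16)  # low-frequency half, half resolution
y_hi, y_lo = oct_conv(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)      # (1, 32, 32, 32) and (1, 32, 16, 16)
```

Because the low-frequency path operates on feature maps with a quarter of the spatial locations, the ll and hl convolutions cost roughly a quarter of their full-resolution equivalents, which is where the memory and FLOP savings come from.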

Related research

11/01/2019
Comb Convolution for Efficient Convolutional Architecture
Convolutional neural networks (CNNs) are inherently suffering from massi...

10/31/2019
Multi-scale Octave Convolutions for Robust Speech Recognition
We propose a multi-scale octave convolution layer to learn robust speech...

03/16/2020
SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
The channel redundancy in feature maps of convolutional neural networks ...

06/22/2020
Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution
Many effective solutions have been proposed to reduce the redundancy of ...

07/02/2020
Channel Compression: Rethinking Information Redundancy among Channels in CNN Architecture
Model compression and acceleration are attracting increasing attentions ...

04/29/2021
Condensation-Net: Memory-Efficient Network Architecture with Cross-Channel Pooling Layers and Virtual Feature Maps
"Lightweight convolutional neural networks" is an important research top...

09/27/2019
A closer look at network resolution for efficient network design
There is growing interest in designing lightweight neural networks for m...
