Convolution with even-sized kernels and symmetric padding

03/20/2019
by Shuang Wu, et al.

Compact convolutional neural networks gain efficiency mainly through depthwise convolutions, expanded channels, and complex topologies, which, however, increase the training effort. In this work, we identify the shift problem that occurs in even-sized kernel (2x2, 4x4) convolutions and eliminate it by proposing symmetric padding on each side of the feature maps (C2sp, C4sp). Symmetric padding enlarges the receptive fields of even-sized kernels at little computational cost. In classification tasks, C2sp outperforms the conventional 3x3 convolution and matches the accuracy of existing compact convolution blocks while consuming less memory and time during training. In generation tasks, C2sp and C4sp both improve image quality and stabilize training. Symmetric padding coupled with even-sized convolution is easy to implement in deep learning frameworks, providing promising building units for architecture designs that emphasize training efficiency, such as online and continual learning scenarios.
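The idea can be illustrated with a minimal sketch. A 2x2 kernel has no center pixel, so padding on a single fixed corner shifts every feature map in the same direction; distributing the four possible one-pixel pad positions across channel groups lets the shifts cancel. The numpy code below is an assumed, simplified rendering of that symmetric-padding scheme (the function names `conv2x2_valid` and `c2sp` and the round-robin group assignment are illustrative, not the authors' implementation):

```python
import numpy as np

def conv2x2_valid(x, k):
    # Naive "valid" 2x2 cross-correlation on a 2-D array:
    # output is (H-1, W-1), each entry a weighted 2x2 window sum.
    H, W = x.shape
    out = np.zeros((H - 1, W - 1))
    for i in range(2):
        for j in range(2):
            out += k[i, j] * x[i:i + H - 1, j:j + W - 1]
    return out

def c2sp(x, kernels):
    # x: (C, H, W) feature maps; kernels: (C, 2, 2).
    # Symmetric padding: the four channel groups pad on the four
    # different corners (top-left, top-right, bottom-left,
    # bottom-right), so the one-pixel shifts induced by the
    # even-sized kernel cancel across channels while spatial
    # size (H, W) is preserved.
    pads = [((1, 0), (1, 0)),   # pad top and left
            ((1, 0), (0, 1)),   # pad top and right
            ((0, 1), (1, 0)),   # pad bottom and left
            ((0, 1), (0, 1))]   # pad bottom and right
    C, H, W = x.shape
    out = np.empty_like(x)
    for c in range(C):
        xp = np.pad(x[c], pads[c % 4])   # zero-pad one corner
        out[c] = conv2x2_valid(xp, kernels[c])
    return out
```

With an all-ones 4x4x4 input and all-ones kernels, interior outputs are full 2x2 window sums (4.0), while border values shrink where the zero padding falls, and the output keeps the input's spatial shape.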


Related research

- 02/27/2020: XSepConv: Extremely Separated Convolution. "Depthwise convolution has gradually become an indispensable operation fo..."
- 09/26/2021: Group Shift Pointwise Convolution for Volumetric Medical Image Segmentation. "Recent studies have witnessed the effectiveness of 3D convolutions on se..."
- 05/23/2018: Use of symmetric kernels for convolutional neural networks. "At this work we introduce horizontally symmetric convolutional kernels f..."
- 02/20/2019: Spatially-Adaptive Filter Units for Compact and Efficient Deep Neural Networks. "Convolutional neural networks excel in a number of computer vision tasks..."
- 07/03/2019: The Indirect Convolution Algorithm. "Deep learning frameworks commonly implement convolution operators with G..."
- 10/09/2020: Continual learning using hash-routed convolutional neural networks. "Continual learning could shift the machine learning paradigm from data c..."
- 05/22/2020: Arbitrary-sized Image Training and Residual Kernel Learning: Towards Image Fraud Identification. "Preserving original noise residuals in images are critical to image frau..."
