Exploiting Learned Symmetries in Group Equivariant Convolutions

06/09/2021
by Attila Lengyel, et al.

Group Equivariant Convolutions (GConvs) enable convolutional neural networks to be equivariant to various transformation groups, but at an additional parameter and compute cost. We investigate the filter parameters learned by GConvs and find certain conditions under which they become highly redundant. We show that GConvs can be efficiently decomposed into depthwise separable convolutions while preserving equivariance properties and demonstrate improved performance and data efficiency on two datasets. All code is publicly available at github.com/Attila94/SepGrouPy.
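The parameter savings from such a decomposition can be illustrated with a simple count. The sketch below (illustrative only; the layer sizes and the exact factorization are assumptions, not the authors' architecture) compares a full group convolution, which learns a separate k x k filter for every (output channel, input channel, group element) triple, against a depthwise separable variant that learns one spatial filter per (input channel, group element) followed by a 1x1 pointwise mix:

```python
# Hypothetical parameter-count comparison for a group convolution vs.
# a depthwise separable decomposition. Sizes below are illustrative,
# not taken from the paper.

def gconv_params(c_in, c_out, group_order, k):
    """Full group convolution: one k x k filter per
    (output channel, input channel, group element) triple."""
    return c_out * c_in * group_order * k * k

def separable_gconv_params(c_in, c_out, group_order, k):
    """Depthwise separable decomposition: a k x k spatial filter per
    (input channel, group element), then a 1x1 pointwise convolution
    that mixes channels and group elements."""
    depthwise = c_in * group_order * k * k
    pointwise = c_out * c_in * group_order
    return depthwise + pointwise

# Example: p4 rotation group (|G| = 4), 64 -> 64 channels, 3x3 filters.
full = gconv_params(64, 64, 4, 3)            # 147456 parameters
sep = separable_gconv_params(64, 64, 4, 3)   # 2304 + 16384 = 18688
print(full, sep, round(full / sep, 1))
```

For these sizes the separable version uses roughly an eighth of the parameters of the full GConv, which is the kind of redundancy the paper exploits.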


Related research:

- Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups (10/25/2021)
- Dynamic Group Convolution for Accelerating Convolutional Neural Networks (07/08/2020)
- Separable Layers Enable Structured Efficient Linear Substitutions (06/03/2019)
- Scale-Equivariant Neural Networks with Decomposed Convolutional Filters (09/24/2019)
- PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions (07/20/2020)
- CondenseNet: An Efficient DenseNet using Learned Group Convolutions (11/25/2017)
- Conditioned Time-Dilated Convolutions for Sound Event Detection (07/10/2020)
