Locality-Promoting Representation Learning

05/25/2019
by   Johannes Schneider, et al.

This work investigates fundamental questions related to locating and defining features in convolutional neural networks (CNNs). Theoretical analysis guided by the locality principle shows that the relevance of locations within a representation decreases with distance from the center. This aligns with empirical findings across multiple architectures, including VGG, ResNet, Inception, DenseNet, and MobileNet. To leverage these insights, we introduce Locality-promoting Regularization (LOCO-REG), which yields accuracy gains across multiple architectures and datasets.
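The abstract does not give LOCO-REG's exact formulation, but the idea of penalizing a filter's weight mass far from its center can be sketched as a distance-weighted decay term. The function name, the squared-weight form, and the Euclidean distance weighting below are all illustrative assumptions, not the paper's definition:

```python
import numpy as np

def loco_reg_penalty(kernel, strength=1e-4):
    """Hypothetical locality-promoting penalty (a sketch, not the
    paper's formulation): a weight-decay term whose coefficient grows
    with each position's distance from the kernel center, discouraging
    large weights near the kernel border."""
    kh, kw = kernel.shape[-2], kernel.shape[-1]
    cy, cx = (kh - 1) / 2.0, (kw - 1) / 2.0
    ys, xs = np.mgrid[0:kh, 0:kw]
    # Euclidean distance of each kernel position from the center.
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    # Penalty is zero at the center and grows toward the border.
    return strength * np.sum(dist * kernel ** 2)

# A kernel concentrated at the center incurs no penalty; the same
# weight mass moved to a corner is penalized.
k_center = np.zeros((3, 3)); k_center[1, 1] = 1.0
k_corner = np.zeros((3, 3)); k_corner[0, 0] = 1.0
```

Added to the usual training loss, such a term would bias optimization toward filters whose energy concentrates near the center, consistent with the relevance decay the paper reports.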
