Contextually Guided Convolutional Neural Networks for Learning Most Transferable Representations

03/02/2021
by Olcay Kursun, et al.

Deep Convolutional Neural Networks (CNNs), trained extensively on very large labeled datasets, learn to recognize inferentially powerful features in their input patterns and to efficiently represent their objective content. Such objectivity of their internal representations enables deep CNNs to readily transfer and successfully apply these representations to new classification tasks. Deep CNNs develop their internal representations through a challenging process of error backpropagation-based supervised training. In contrast, deep neural networks of the cerebral cortex develop their even more powerful internal representations in an unsupervised process, apparently guided at a local level by contextual information. Implementing such local contextual guidance principles in a single-layer CNN architecture, we propose an efficient algorithm for developing broad-purpose representations (i.e., representations transferable to new tasks without additional training) in shallow CNNs trained on limited-size datasets. A contextually guided CNN (CG-CNN) is trained on groups of neighboring image patches picked at random image locations in the dataset. Such neighboring patches are likely to share a common context and are therefore treated, for the purposes of training, as belonging to the same class. Across multiple iterations of such training on different context-sharing groups of image patches, CNN features optimized in one iteration are transferred to the next iteration for further optimization, and so on. In this process, CNN features acquire higher pluripotency, or inferential utility for arbitrary classification tasks, which we quantify as transfer utility. In our application to natural images, we find that CG-CNN features show the same, if not higher, transfer utility and classification accuracy as comparable transferable features in the first CNN layer of well-known deep networks.
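The abstract describes the training procedure only at a high level. The following is a minimal sketch of one contextual-guidance iteration, assuming a PyTorch implementation; the function and parameter names (sample_context_groups, patches_per_group, jitter, CGCNN) are illustrative and not taken from the paper.

```python
# Minimal sketch of contextually guided training (CG-CNN), assuming PyTorch.
# All names and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sample_context_groups(image, n_groups=64, patches_per_group=4,
                          patch_size=16, jitter=8):
    """Pick random image locations; the patches around each location
    share a common context and therefore receive the same pseudo-label."""
    _, H, W = image.shape
    patches, labels = [], []
    for g in range(n_groups):
        cy = torch.randint(jitter, H - patch_size - jitter, (1,)).item()
        cx = torch.randint(jitter, W - patch_size - jitter, (1,)).item()
        for _ in range(patches_per_group):
            dy = torch.randint(-jitter, jitter + 1, (1,)).item()
            dx = torch.randint(-jitter, jitter + 1, (1,)).item()
            patches.append(image[:, cy + dy: cy + dy + patch_size,
                                    cx + dx: cx + dx + patch_size])
            labels.append(g)  # same class for all patches in the group
    return torch.stack(patches), torch.tensor(labels)

class CGCNN(nn.Module):
    """Single convolutional feature layer plus a disposable classifier head."""
    def __init__(self, n_features=64, n_classes=64):
        super().__init__()
        self.features = nn.Conv2d(3, n_features, kernel_size=5, padding=2)
        self.head = nn.Linear(n_features, n_classes)

    def forward(self, x):
        h = F.relu(self.features(x))
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)  # one vector per patch
        return self.head(h)

def train_iteration(model, image, optimizer):
    """One contextual-guidance iteration: neighboring patches, same label."""
    patches, labels = sample_context_groups(image)
    logits = model(patches)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the iteration-to-iteration transfer described in the abstract would correspond to keeping the learned convolutional weights (model.features), reinitializing the classification head, and resampling a fresh set of context-sharing patch groups before the next call to train_iteration.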
