A trainable monogenic ConvNet layer robust in front of large contrast changes in image classification

09/14/2021
by E. Ulises Moya-Sánchez, et al.

Convolutional Neural Networks (ConvNets) currently achieve remarkable performance on image classification tasks. However, they cannot guarantee capabilities of the mammalian visual system such as invariance to contrast and illumination changes. Existing approaches to handling illumination and contrast variation usually must be tuned manually and tend to fail when tested against other types of data degradation. In this context, we present a new bio-inspired entry layer, M6, which detects low-level geometric features (lines, edges, and orientations) similar to the patterns detected by the V1 visual cortex. This new trainable layer copes with image classification even under large contrast variations. The explanation for this behavior lies in the geometry of the monogenic signal, which represents each pixel value in a 3D space using quaternions, a fact that confers a degree of explainability on the networks. We compare M6 with a conventional convolutional layer (C) and a deterministic quaternion local phase layer (Q9). The experimental setup, designed to evaluate the robustness of our M6-enriched ConvNet model, includes three architectures, four datasets, and three types of contrast degradation (including non-uniform haze degradations). The numerical results show that models with M6 are the most robust to all kinds of contrast variation. This is a significant improvement over the C models, which usually perform reasonably well only when the same degradation is used for training and testing, except in the case of maximum degradation. Moreover, we use the Structural Similarity Index Measure (SSIM) to analyze and explain the robustness of the M6 feature maps under all kinds of contrast degradation.
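The monogenic signal mentioned in the abstract is built from an image and its two Riesz-transform components, giving each pixel a 3D (quaternion-like) value from which local amplitude, phase, and orientation can be read off. As a rough, hedged illustration of that construction (not the paper's M6 layer itself, which is a trainable ConvNet layer; the function name and the absence of the usual band-pass pre-filtering are simplifications for this sketch), the Riesz transform can be computed in the Fourier domain:

```python
import numpy as np

def monogenic_signal(img):
    """Sketch: monogenic signal of a 2-D image via the FFT-based Riesz transform.

    Returns local amplitude, phase, and orientation maps -- the 3-D
    per-pixel representation the abstract refers to. In practice a
    band-pass filter (e.g. log-Gabor) is usually applied first.
    """
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    # Normalized frequency grids (DC at index [0, 0], matching np.fft).
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0  # avoid division by zero at DC
    # Riesz transfer functions: H1 = -i u / |w|, H2 = -i v / |w|.
    H1 = -1j * u / radius
    H2 = -1j * v / radius
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(F * H1))  # first Riesz component
    r2 = np.real(np.fft.ifft2(F * H2))  # second Riesz component
    # Local amplitude, phase, and orientation of the monogenic signal.
    amplitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation
```

The key property exploited for contrast robustness is that local phase and orientation depend on the *structure* of the signal rather than its magnitude, so a global contrast change rescales the amplitude map while leaving phase and orientation largely unchanged.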

Related research

- LocalNorm: Robust Image Classification through Dynamically Regularized Normalization (02/18/2019)
  While modern convolutional neural networks achieve outstanding accuracy ...
- CNNs with Multi-Level Attention for Domain Generalization (04/02/2023)
  In the past decade, deep convolutional neural networks have achieved sig...
- A precortical module for robust CNNs to light variations (02/15/2022)
  We present a simple mathematical model for the mammalian low visual path...
- Depthwise-STFT based separable Convolutional Neural Networks (01/27/2020)
  In this paper, we propose a new convolutional layer called Depthwise-STF...
- DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image Enhancement (10/05/2021)
  Images obtained in real-world low-light conditions are not only low in b...
- Spectral Roll-off Points: Estimating Useful Information Under the Basis of Low-frequency Data Representations (01/31/2021)
  Useful information is the basis for model decisions. Estimating useful i...
- How is Contrast Encoded in Deep Neural Networks? (09/05/2018)
  Contrast is a crucial factor in visual information processing. It is des...
