On the Impact of Interpretability Methods in Active Image Augmentation Method

02/24/2021
by Flávio Santos, et al.

Robustness is a key requirement for machine learning models: an algorithm's performance must not deteriorate when training and test data differ slightly. Deep neural network models achieve impressive results across a wide range of computer vision applications, yet in the presence of noise or region occlusion, some models perform poorly even on data seen during training. Moreover, some experiments suggest that deep learning models sometimes rely on the wrong parts of the input when performing inference. Active Image Augmentation (ADA) is an augmentation method that uses interpretability methods to augment the training data and improve the model's robustness to the problems described above. Although ADA presented interesting results, its original version used only the Vanilla Backpropagation interpretability method to train the U-Net model. In this work, we present an extensive experimental analysis of the impact of the interpretability method on ADA. We use five interpretability methods: Vanilla Backpropagation, Guided Backpropagation, GradCam, Guided GradCam, and InputXGradient. The results show that all methods achieve similar performance by the end of training, but that combining ADA with GradCam leads to remarkably fast convergence of the U-Net model.
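The abstract describes ADA only at a high level. As a rough illustration of the kind of interpretability-driven augmentation it refers to, the sketch below computes a Vanilla Backpropagation saliency map with PyTorch and uses it to occlude the most salient pixels of an input image. The function names, the occlusion strategy, and the quantile threshold are illustrative assumptions, not the authors' exact ADA procedure.

```python
# Minimal sketch (assumption, not the paper's exact method): compute a
# Vanilla Backpropagation saliency map for a model (e.g. a U-Net) and use
# it to occlude the most salient pixels, producing an augmented sample.
import torch

def vanilla_saliency(model, image):
    """Gradient of the summed output w.r.t. the input (Vanilla Backpropagation)."""
    image = image.clone().requires_grad_(True)
    output = model(image)       # e.g. a segmentation map of shape (B, C, H, W)
    output.sum().backward()     # scalar surrogate so backward() can run
    # Per-pixel saliency: absolute gradient, reduced over the channel dimension.
    return image.grad.abs().max(dim=1, keepdim=True).values

def saliency_occlusion_augment(model, image, quantile=0.9):
    """Hypothetical augmentation: zero out the most salient pixels."""
    sal = vanilla_saliency(model, image)
    # Per-sample threshold at the chosen saliency quantile.
    threshold = torch.quantile(sal.flatten(1), quantile, dim=1).view(-1, 1, 1, 1)
    mask = (sal < threshold).float()   # keep only low-saliency pixels
    return image * mask                # occluded (augmented) image
```

Swapping `vanilla_saliency` for another attribution method (Guided Backpropagation, GradCam, etc.) is where the interpretability methods compared in the paper would plug in.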

Related research

- Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks (02/25/2021): Deep neural network models have achieved state-of-the-art results in var...
- Interpretability-guided Data Augmentation for Robust Segmentation in Multi-centre Colonoscopy Data (08/30/2023): Multi-centre colonoscopy images from various medical centres exhibit dis...
- Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability (03/26/2023): Data augmentation strategies are actively used when training deep neural...
- An Experimental Study of Semantic Continuity for Deep Learning Models (11/19/2020): Deep learning models suffer from the problem of semantic discontinuity: ...
- Improving the Interpretability of Neural Sentiment Classifiers via Data Augmentation (09/10/2019): Recent progress of neural network models has achieved remarkable perform...
- Invariant backpropagation: how to train a transformation-invariant neural network (02/16/2015): In many classification problems a classifier should be robust to small v...
- What Else Can Fool Deep Learning? Addressing Color Constancy Errors on Deep Neural Network Performance (12/15/2019): There is active research targeting local image manipulations that can fo...
