Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks

02/25/2021
by Rita Parada Ramos, et al.

Deep neural network models have achieved state-of-the-art results in various tasks related to vision and/or language. Despite the use of large training data, most models are trained by iterating over single input-output pairs, discarding the remaining examples for the current prediction. In this work, we actively exploit the training data to improve the robustness and interpretability of deep neural networks, using information from the nearest training examples to aid the prediction both during training and testing. Specifically, the proposed approach uses the target of the nearest input example to initialize the memory state of an LSTM model or to guide attention mechanisms. We apply this approach to image captioning and sentiment analysis, conducting experiments with both image and text retrieval. Results show the effectiveness of the proposed models for the two tasks, on the widely used Flickr8k and IMDB datasets, respectively. Our code is publicly available at http://github.com/RitaRamo/retrieval-augmentation-nn.
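The central idea, using the target of the nearest training example to initialize the LSTM memory state, can be sketched in a few lines of PyTorch. This is a minimal illustration only: the class and function names (RetrievalAugmentedDecoder, retrieve_nearest_caption), the layer sizes, and the cosine-similarity retrieval are assumptions for clarity and are not taken from the authors' repository.

```python
# Minimal sketch of retrieval-augmented LSTM initialization (assumes PyTorch).
# All names and dimensions below are illustrative, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


def retrieve_nearest_caption(query_feat, train_feats, train_caption_embs):
    """Return the caption embedding of the training image closest to the query.

    query_feat:         (d,)   visual feature of the current input image
    train_feats:        (N, d) visual features of the training images
    train_caption_embs: (N, e) precomputed embeddings of the training captions
    """
    sims = F.cosine_similarity(train_feats, query_feat.unsqueeze(0), dim=1)
    nearest = sims.argmax()
    return train_caption_embs[nearest]


class RetrievalAugmentedDecoder(nn.Module):
    """LSTM caption decoder whose memory (cell) state is initialized from the
    retrieved nearest caption, while the hidden state comes from the image."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512,
                 visual_dim=2048, retrieved_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(visual_dim, hidden_dim)      # hidden state from image
        self.init_c = nn.Linear(retrieved_dim, hidden_dim)   # memory state from retrieval
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, retrieved_caption_emb, captions):
        # image_feat: (B, visual_dim), retrieved_caption_emb: (B, retrieved_dim)
        # captions:   (B, T) token ids of the ground-truth caption (teacher forcing)
        h0 = torch.tanh(self.init_h(image_feat)).unsqueeze(0)             # (1, B, H)
        c0 = torch.tanh(self.init_c(retrieved_caption_emb)).unsqueeze(0)  # (1, B, H)
        x = self.embed(captions)                                          # (B, T, E)
        output, _ = self.lstm(x, (h0, c0))
        return self.out(output)                                           # (B, T, V)
```

Under these assumptions, the retrieved caption embedding acts as a prior for the decoder: at test time the same retrieval step runs against the training set, so the prediction is conditioned on both the input image and the target of its nearest training neighbor.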


Related research

02/24/2021 - On the Impact of Interpretability Methods in Active Image Augmentation Method
Robustness is a significant constraint in machine learning models. The p...

09/10/2019 - Improving the Interpretability of Neural Sentiment Classifiers via Data Augmentation
Recent progress of neural network models has achieved remarkable perform...

04/05/2023 - Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models
Heatmaps are widely used to interpret deep neural networks, particularly...

04/04/2016 - Image Captioning with Deep Bidirectional LSTMs
This work presents an end-to-end trainable deep bidirectional LSTM (Long...

05/14/2019 - Interpretable Deep Neural Networks for Patient Mortality Prediction: A Consensus-based Approach
Deep neural networks have achieved remarkable success in challenging tas...

05/15/2023 - Smoothness and monotonicity constraints for neural networks using ICEnet
Deep neural networks have become an important tool for use in actuarial ...

12/30/2022 - On the Interpretability of Attention Networks
Attention mechanisms form a core component of several successful deep le...
