Using Explanations to Guide Models

03/21/2023
by Sukrut Rao et al.

Deep neural networks are highly performant but might base their decisions on spurious or background features that co-occur with certain classes, which can hurt generalization. To mitigate this issue, the use of 'model guidance' has recently gained popularity: models are guided to be "right for the right reasons" by regularizing their explanations to highlight the right features. Experimental validation of these approaches has, however, thus far been limited to relatively simple and/or synthetic datasets. To gain a better understanding of which model-guidance approaches actually transfer to more challenging real-world datasets, in this work we conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets, and show that model guidance can sometimes even improve model performance. In this context, we further propose a novel energy loss and show its effectiveness in directing the model to focus on object features. We also show that these gains can be achieved even with a small fraction (e.g. 1%) of the annotations, highlighting the cost effectiveness of this approach. Lastly, we show that this approach can also improve generalization under distribution shifts. Code will be made available.
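To make the guidance idea concrete, below is a minimal PyTorch sketch of explanation regularization with an energy-style loss. This is an illustration under assumptions rather than the paper's exact method: the attribution method (Input×Gradient), the loss weight `lam`, and all function names are our own illustrative choices, and the loss here simply penalizes positive attribution energy that falls outside an annotated object mask.

```python
import torch
import torch.nn.functional as F

def input_x_gradient(model, images, class_idx):
    """Input-x-Gradient attributions for the chosen class logits.

    create_graph=True keeps the computation graph so the guidance loss
    below can be backpropagated through the attribution itself.
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    score = logits[torch.arange(images.shape[0], device=logits.device), class_idx].sum()
    grads, = torch.autograd.grad(score, images, create_graph=True)
    return images * grads  # (B, C, H, W)

def energy_loss(attributions, masks, eps=1e-8):
    """Penalize positive attribution energy outside the annotated region.

    attributions: (B, C, H, W) attribution maps
    masks:        (B, 1, H, W) binary masks (1 = inside the object bounding box)
    Returns the mean of 1 - inside_energy / total_energy, which is minimal
    when all positive attribution falls inside the mask.
    """
    pos = attributions.clamp(min=0).sum(dim=1, keepdim=True)   # positive energy map
    inside = (pos * masks).flatten(1).sum(dim=1)               # energy inside the mask
    total = pos.flatten(1).sum(dim=1)                          # total positive energy
    return (1.0 - inside / (total + eps)).mean()

def train_step(model, optimizer, images, labels, masks, lam=0.5):
    """One guided training step: classification loss + lam * guidance loss."""
    optimizer.zero_grad()
    cls_loss = F.binary_cross_entropy_with_logits(model(images), labels.float())
    # Simplification (not from the paper): guide the explanation of a single
    # class per image, here the first ground-truth class.
    guided_class = labels.float().argmax(dim=1)
    attr = input_x_gradient(model, images, guided_class)
    loss = cls_loss + lam * energy_loss(attr, masks)
    loss.backward()  # second-order gradients flow through the attribution
    optimizer.step()
    return loss.item()
```

Because the guidance term depends on input gradients, the sketch passes `create_graph=True` so that second-order gradients reach the model parameters; this is the standard mechanism behind 'right for the right reasons'-style regularizers, with bounding-box masks standing in for cheaper-than-segmentation annotations.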


