Masked Images Are Counterfactual Samples for Robust Fine-tuning

03/06/2023
by Yao Xiao, et al.

Deep learning models are challenged by the distribution shift between training data and test data. Recently, large models pre-trained on diverse data have demonstrated unprecedented robustness to various distribution shifts. However, fine-tuning these models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness, and existing methods for tackling this trade-off do not explicitly address the OOD robustness problem. In this paper, based on a causal analysis of this trade-off, we propose a novel fine-tuning method that uses masked images as counterfactual samples to improve the robustness of the fine-tuned model. Specifically, we mask either the semantics-related or the semantics-unrelated patches of an image, selected by its class activation map, to break spurious correlations, and refill the masked patches with patches from other images. The resulting counterfactual samples are used for feature-based distillation with the pre-trained model. Extensive experiments verify that regularizing fine-tuning with the proposed masked images achieves a better trade-off between ID performance and OOD robustness, surpassing previous methods on OOD performance. Our code will be publicly available.
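The abstract outlines the core operations: score image patches with a class activation map, mask either the most or the least class-relevant patches, refill them with patches from other images, and distill features against the frozen pre-trained model. The sketch below illustrates that pipeline in PyTorch; the 16-pixel patch size, 50% masking ratio, in-batch donor images, and mean-squared-error feature loss are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of CAM-guided counterfactual masking and feature distillation.
# Assumptions (not from the paper): patch=16, mask_ratio=0.5, donors drawn
# from a shuffled copy of the batch, MSE as the distillation loss.
import torch
import torch.nn.functional as F


def cam_guided_patch_swap(images, cams, mask_ratio=0.5, patch=16, mask_related=True):
    """Replace the most (or least) CAM-activated patches of each image with
    the corresponding patches of another image in the batch.

    images: (B, C, H, W) float tensor, H and W divisible by `patch`
    cams:   (B, H, W) class activation maps, higher = more class-relevant
    """
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch

    # One relevance score per patch: average CAM activation inside the patch.
    patch_scores = F.avg_pool2d(cams.unsqueeze(1), patch).flatten(1)  # (B, gh*gw)

    # Choose which patches to mask: highest scores when masking the
    # semantics-related regions, lowest scores otherwise.
    k = int(mask_ratio * gh * gw)
    idx = patch_scores.topk(k, dim=1, largest=mask_related).indices   # (B, k)

    # Turn the selected patch indices into a per-pixel boolean mask.
    mask = torch.zeros(B, gh * gw, device=images.device)
    mask.scatter_(1, idx, 1.0)
    mask = mask.view(B, 1, gh, gw)
    mask = F.interpolate(mask, scale_factor=patch, mode="nearest") > 0.5  # (B,1,H,W)

    # Refill the masked patches with the same locations from a shuffled batch.
    donors = images[torch.randperm(B, device=images.device)]
    return torch.where(mask, donors, images)


def feature_distill_loss(student_feats, teacher_feats):
    """Feature-space distillation: pull the fine-tuned (student) features
    toward the frozen pre-trained (teacher) features on the counterfactual
    images."""
    return F.mse_loss(student_feats, teacher_feats.detach())
```

In a training step, the swapped images would be fed through both the fine-tuned model and the frozen pre-trained model, and `feature_distill_loss` on their features would be added to the standard task loss as a regularizer.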

Related research

05/22/2023 · Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Out-of-distribution (OOD) detection is a critical task for reliable pred...

06/07/2023 · Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations
This paper reexamines the research on out-of-distribution (OOD) robustne...

04/21/2023 · Benchmarking Low-Shot Robustness to Natural Distribution Shifts
Robustness to natural distribution shifts has seen remarkable progress t...

05/20/2022 · Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning
Learning with little data is challenging but often inevitable in various...

06/30/2021 · The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning
Although machine learning models typically experience a drop in performa...

06/15/2022 · READ: Aggregating Reconstruction Error into Out-of-distribution Detection
Detecting out-of-distribution (OOD) samples is crucial to the safe deplo...

07/03/2023 · Surgical fine-tuning for Grape Bunch Segmentation under Visual Domain Shifts
Mobile robots will play a crucial role in the transition towards sustain...
