Self-training Avoids Using Spurious Features Under Domain Shift

06/17/2020
by Yining Chen, et al.

In unsupervised domain adaptation, existing theory focuses on situations where the source and target domains are close. In practice, conditional entropy minimization and pseudo-labeling work even when the domain shifts are much larger than those analyzed by existing theory. We identify and analyze one particular setting where the domain shift can be large, but these algorithms provably work: certain spurious features correlate with the label in the source domain but are independent of the label in the target. Our analysis considers linear classification where the spurious features are Gaussian and the non-spurious features are a mixture of log-concave distributions. For this setting, we prove that entropy minimization on unlabeled target data will avoid using the spurious feature if initialized with a decently accurate source classifier, even though the objective is non-convex and contains multiple bad local minima using the spurious features. We verify our theory for spurious domain shift tasks on semi-synthetic Celeb-A and MNIST datasets. Our results suggest that practitioners collect and self-train on large, diverse datasets to reduce biases in classifiers even if labeling is impractical.
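Below is a minimal, hypothetical Python/PyTorch sketch (not code from the paper) of the two-stage procedure the abstract describes: fit a linear classifier on labeled source data where a spurious feature correlates with the label, then self-train on unlabeled target data, where that feature is independent of the label, by minimizing conditional entropy. The synthetic data generator, dimensions, and learning rates are illustrative assumptions only.

```python
# Sketch of entropy-minimization self-training with one spurious feature.
# All data, dimensions, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_signal, d_spurious, n = 5, 1, 2000

def make_data(n, spurious_corr):
    """Gaussian signal features shifted by the label, plus one +/-1 spurious
    feature that matches the label with probability `spurious_corr`."""
    y = torch.randint(0, 2, (n,))
    signal = torch.randn(n, d_signal) + (2.0 * y.float().unsqueeze(1) - 1.0)
    match = (torch.rand(n) < spurious_corr).float()
    spurious = (match * (2.0 * y.float() - 1.0)
                + (1 - match) * torch.sign(torch.randn(n))).unsqueeze(1)
    return torch.cat([signal, spurious], dim=1), y

x_src, y_src = make_data(n, spurious_corr=0.9)  # source: spurious feature is predictive
x_tgt, _     = make_data(n, spurious_corr=0.0)  # target: spurious feature is independent of y

clf = nn.Linear(d_signal + d_spurious, 2)

# Stage 1: supervised training on the labeled source domain.
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    nn.functional.cross_entropy(clf(x_src), y_src).backward()
    opt.step()

# Stage 2: self-training on unlabeled target data by minimizing
# the average prediction entropy (conditional entropy minimization).
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    probs = clf(x_tgt).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    entropy.backward()
    opt.step()

# Per the paper's claim, the weight on the spurious coordinate should shrink.
print("spurious-feature weight:", clf.weight[:, -1].detach().numpy())
```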


Related research

07/08/2020: Combating Domain Shift with Self-Taught Labeling
We present a novel method to combat domain shift when adapting classific...

06/30/2021: Multi-Source domain adaptation via supervised contrastive learning and confident consistency regularization
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to l...

12/21/2020: SENTRY: Selective Entropy Optimization via Committee Consistency for Unsupervised Domain Adaptation
Many existing approaches for unsupervised domain adaptation (UDA) focus ...

12/07/2022: Reconciling a Centroid-Hypothesis Conflict in Source-Free Domain Adaptation
Source-free domain adaptation (SFDA) aims to transfer knowledge learned ...

10/12/2017: Self-Taught Support Vector Machine
In this paper, a new approach for classification of target task using li...

10/23/2020: Coping with Label Shift via Distributionally Robust Optimisation
The label shift problem refers to the supervised learning setting where ...
