Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization

by Yunze Tong et al.

Domain generalization (DG) is a prevalent problem in real-world applications: it aims to train models that generalize well to unseen target domains by utilizing several source domains. Since domain labels, i.e., which domain each data point is sampled from, naturally exist, most DG algorithms treat them as supervision to improve generalization performance. However, the original domain labels may not be the optimal supervision signal due to a lack of domain heterogeneity, i.e., diversity among domains. For example, a sample in one domain may lie closer to another domain; its original label can then act as noise that disturbs generalization learning. Although some methods try to solve this by re-dividing the domains and applying the newly generated dividing pattern, the pattern they choose may not be the most heterogeneous one because they lack a metric for heterogeneity. In this paper, we point out that under the invariant-learning framework, domain heterogeneity mainly lies in the variant features. Using contrastive learning, we propose a learning-potential-guided metric for domain heterogeneity that promotes the learning of variant features. We then observe that seeking variance-based heterogeneity differs from training an invariance-based generalizable model, and propose a novel method called Heterogeneity-based Two-stage Contrastive Learning (HTCL) for the DG task. In the first stage, we generate the most heterogeneous dividing pattern under our contrastive metric. In the second stage, we employ invariance-aimed contrastive learning, re-building pairs according to the stable relations hinted at by domains and classes, which better utilizes the generated domain labels for generalization learning. Extensive experiments show that HTCL better mines heterogeneity and yields strong generalization performance.
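To make the pair-building idea in the second stage concrete, here is a minimal NumPy sketch of a class/domain-aware contrastive loss. It is an illustrative InfoNCE-style simplification, not the authors' implementation: the function name, the exact positive-pair rule (same class, different domain), and the temperature are assumptions.

```python
import numpy as np

def domain_aware_contrastive_loss(features, class_labels, domain_labels,
                                  temperature=0.5):
    """Illustrative InfoNCE-style loss: for each anchor, positives are
    same-class samples drawn from a *different* domain, so the model is
    pushed toward class features that are invariant across domains."""
    # L2-normalize so the dot product is a cosine similarity.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(feats)
    loss, n_pairs = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i
        # log-sum-exp over all candidates (the InfoNCE denominator).
        log_denom = np.log(np.exp(sim[i, others]).sum())
        positives = (others
                     & (class_labels == class_labels[i])
                     & (domain_labels != domain_labels[i]))
        for j in np.where(positives)[0]:
            loss += log_denom - sim[i, j]
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

Each per-pair term is non-negative (the denominator includes the positive itself), and the loss shrinks as same-class features from different domains align.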

