Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer

by Xiaodong Wang, et al.

Semi-supervised domain adaptation (SSDA) aims to solve tasks in the target domain by exploiting transferable information learned from the available source domain together with a few labeled target samples. However, source data are not always accessible in practice, which restricts the use of SSDA in real-world scenarios. In this paper, we propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation from a source-trained model alone and seeks to generalize well in the target domain with only a few labeled target samples. SSHT faces two challenges: (1) the scarce labeled target data may leave target features near the decision boundary, increasing the risk of misclassification; (2) the source data are often class-imbalanced, so the model trained on them is biased. Such a biased model tends to assign samples of minority categories to majority ones, resulting in low prediction diversity. To tackle these issues, we propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that enforces prediction consistency between two randomly augmented views of unlabeled data and maintains prediction diversity while adapting the model to the target domain. The consistency regularization makes it harder for the model to memorize the few labeled target samples and thus improves its generalization. We further integrate Batch Nuclear-norm Maximization into our method to enhance discriminability and diversity. Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on the DomainNet, Office-Home and Office-31 datasets. The code is available at https://github.com/Wang-xd1899/SSHT.
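The two objectives the abstract describes can be sketched numerically: a consistency loss that matches the prediction on a strongly augmented view to the pseudo-label from a weakly augmented view, and a Batch Nuclear-norm Maximization (BNM) term that maximizes the nuclear norm of the batch prediction matrix to encourage discriminability and diversity. The following is a minimal numpy sketch under those assumptions; function names and the exact loss forms are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    # numerically stable row-wise softmax
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(p_weak, p_strong):
    # cross-entropy between the hard pseudo-label taken from the
    # weakly augmented view and the strongly augmented prediction
    pseudo = p_weak.argmax(axis=1)
    return -np.mean(np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-12))

def bnm_loss(p):
    # negative nuclear norm (sum of singular values) of the B x C
    # prediction matrix; minimizing this maximizes the nuclear norm,
    # which promotes both confident and diverse predictions
    return -np.linalg.svd(p, compute_uv=False).sum()
```

A one-hot-like prediction matrix (confident and spread over classes) yields a larger nuclear norm, hence a lower `bnm_loss`, than a uniform low-rank prediction matrix that collapses onto one class distribution.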


Related articles:

- Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples
- Diversity-Based Generalization for Neural Unsupervised Text Classification under Domain Shift
- Towards Discriminability and Diversity: Batch Nuclear-norm Maximization under Label Insufficient Situations
- Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain Adaptation
- Universal Semi-supervised Model Adaptation via Collaborative Consistency Training
- Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation
- Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation
