Unsupervised Multi-Class Domain Adaptation: Theory, Algorithms, and Practice

02/20/2020
by   Yabin Zhang, et al.

In this paper, we study the formalism of unsupervised multi-class domain adaptation (multi-class UDA), which underlies some recent algorithms whose learning objectives are motivated only empirically. We present a Multi-Class Scoring Disagreement (MCSD) divergence that aggregates the absolute margin violations in multi-class classification; the proposed MCSD fully characterizes the relations between any pair of multi-class scoring hypotheses. Using MCSD as a measure of domain distance, we develop a new domain adaptation bound for multi-class UDA as well as its data-dependent, probably approximately correct bound, which naturally suggest adversarial learning objectives to align conditional feature distributions across the source and target domains. Consequently, an algorithmic framework of Multi-class Domain-adversarial learning Networks (McDalNets) is developed, whose different instantiations via surrogate learning objectives either coincide with or resemble a few recently popular methods, thus (partially) underscoring their practical effectiveness. Based on the same theory of multi-class UDA, we also introduce a new algorithm of Domain-Symmetric Networks (SymmNets), which features a novel adversarial strategy of domain confusion and discrimination. SymmNets afford simple extensions that work equally well under the closed set, partial, and open set UDA settings. We conduct careful empirical studies comparing the different algorithms of McDalNets and our newly introduced SymmNets. Experiments verify our theoretical analysis and show the efficacy of the proposed SymmNets. We make our implementation code publicly available.
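The abstract describes MCSD only at a high level, as an aggregation of absolute margin violations between two multi-class scoring hypotheses; the formal definition is given in the paper itself. The following is a minimal illustrative sketch of that idea, not the paper's exact formulation: the function names `margin_violations` and `mcsd_like_disagreement`, the hinge-style violation term, and the margin parameter `rho` are assumptions made for illustration.

```python
# Illustrative sketch (assumed, not the paper's exact definition): a disagreement
# measure between two multi-class scoring functions based on margin violations.
# For each candidate class k, the margin at x is score_k - max_{j != k} score_j;
# violations of a margin rho are measured with a hinge term, and the disagreement
# aggregates absolute differences of these violations over classes and samples.
import torch


def margin_violations(scores: torch.Tensor, rho: float = 1.0) -> torch.Tensor:
    """scores: (batch, num_classes) raw outputs of a scoring hypothesis.
    Returns (batch, num_classes) hinge-style violations of margin rho
    for every candidate class."""
    batch, k = scores.shape
    # For each candidate class, take the max score over the *other* classes.
    expanded = scores.unsqueeze(1).expand(batch, k, k).clone()
    idx = torch.arange(k)
    expanded[:, idx, idx] = float("-inf")          # mask the candidate class itself
    max_other = expanded.max(dim=2).values         # (batch, k)
    margins = scores - max_other                   # margin of each candidate class
    return torch.clamp(rho - margins, min=0.0) / rho  # hinge violation of margin rho


def mcsd_like_disagreement(scores1: torch.Tensor, scores2: torch.Tensor,
                           rho: float = 1.0) -> torch.Tensor:
    """Aggregate absolute differences of margin violations between two scorers."""
    v1 = margin_violations(scores1, rho)
    v2 = margin_violations(scores2, rho)
    return (v1 - v2).abs().mean()


# Toy usage: disagreement between two random scoring heads on a batch.
if __name__ == "__main__":
    torch.manual_seed(0)
    s1, s2 = torch.randn(8, 5), torch.randn(8, 5)
    print(float(mcsd_like_disagreement(s1, s2)))
```

In an adversarial setup of the kind the abstract mentions, such a disagreement term would typically be maximized over an auxiliary scoring head and minimized through the shared feature extractor to align source and target feature distributions; the concrete surrogate objectives are those given in the paper.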
