XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-domain Mixup

by Xingjian Li, et al.

Transferring knowledge from large source datasets is an effective way to fine-tune deep neural networks for a target task with a small sample size. A great number of algorithms have been proposed to facilitate deep transfer learning, and these techniques can generally be categorized into two groups: Regularized Learning of the target task using models pre-trained on source datasets, and Multitask Learning with both source and target datasets to train a shared backbone neural network. In this work, we aim to improve the multitask paradigm for deep transfer learning via Cross-domain Mixup (XMixup). While existing multitask learning algorithms need to run backpropagation over both the source and target datasets and therefore incur higher gradient complexity, XMixup transfers knowledge from source to target tasks more efficiently: for every class of the target task, XMixup selects auxiliary samples from the source dataset and augments training samples via the simple mixup strategy. We evaluate XMixup over six real-world transfer learning datasets. Experiment results show that XMixup improves the accuracy by 1.9% on average; compared with other state-of-the-art transfer learning approaches, XMixup costs much less training time while still obtaining higher accuracy.
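The core augmentation step described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the `source_pool` dict mapping each target class to pre-selected auxiliary source samples, the function name `xmixup_batch`, and the Beta(α, α) mixing coefficient are assumptions made for the sketch; the paper's actual sample-selection and label-handling details may differ.

```python
import numpy as np

def xmixup_batch(x_target, y_target, source_pool, rng, alpha=0.2):
    """Augment a target batch with auxiliary source samples via mixup.

    x_target:    (N, D) array of target-task inputs
    y_target:    (N,) array of target class ids
    source_pool: hypothetical dict mapping each target class id to an
                 array of auxiliary source samples chosen for that class
    rng:         numpy.random.Generator
    Returns the mixed inputs, the target labels, and the per-sample
    mixing coefficients.
    """
    # Draw one mixing coefficient per sample from Beta(alpha, alpha)
    lam = rng.beta(alpha, alpha, size=len(x_target))
    # Keep the target sample dominant in each mixture (an assumption)
    lam = np.maximum(lam, 1.0 - lam)

    mixed = np.empty_like(x_target, dtype=float)
    for i, (x, y) in enumerate(zip(x_target, y_target)):
        # Pick a random auxiliary sample from this class's source pool
        pool = source_pool[int(y)]
        aux = pool[rng.integers(len(pool))]
        # Convex combination of target and auxiliary source sample
        mixed[i] = lam[i] * x + (1.0 - lam[i]) * aux
    return mixed, y_target, lam
```

Because the auxiliary samples are fixed per target class, this augmentation only requires a forward/backward pass over the (mixed) target batch, which is where the gradient-complexity saving over full multitask training comes from.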


