Dropping Networks for Transfer Learning

04/23/2018
by James O'Neill, et al.

In natural language understanding, many challenges require learning relationships between two sequences for tasks such as similarity, relatedness, paraphrasing, and question matching. Some of these tasks are inherently closer in nature, so knowledge acquired on one task is more easily transferred and adapted to another. However, transferring all knowledge may be undesirable and can lead to sub-optimal results due to negative transfer. This paper therefore focuses on the transferability of both instances and parameters across natural language understanding tasks, using an ensemble-based transfer learning method to circumvent such issues. The primary contribution of this paper is the combination of Dropout and Bagging for improved transferability in neural networks, referred to herein as Dropping. Secondly, we present a straightforward yet novel approach to incorporating source Dropping Networks into a target task for few-shot learning that mitigates negative transfer. This is achieved with a decaying parameter chosen according to the slope changes of a smoothed spline fit to the error curve at sub-intervals during training. We evaluate the approach against hard parameter sharing, soft parameter sharing, and single-task learning to assess its effectiveness. The proposed adjustment improves transfer learning performance and yields results comparable to the current state of the art while using only a few instances from the target task.
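The abstract does not include code, but a minimal sketch may make the two ideas concrete. The first block shows one plausible reading of "Dropping" (a Bagging ensemble whose member networks are each regularized with Dropout) in PyTorch; the class and function names, layer sizes, and hyperparameters are ours for illustration, not the authors'.

```python
import torch
import torch.nn as nn

class DroppingEnsemble(nn.Module):
    """Illustrative sketch: Bagging over small feed-forward members that
    each apply Dropout, approximating the Dropout + Bagging combination."""

    def __init__(self, in_dim, n_classes, n_members=5, p_drop=0.5):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_dim, 128),
                nn.ReLU(),
                nn.Dropout(p_drop),   # Dropout inside each base learner
                nn.Linear(128, n_classes),
            )
            for _ in range(n_members)
        )

    def forward(self, x):
        # Bagging aggregation: average the members' logits.
        return torch.stack([m(x) for m in self.members]).mean(dim=0)

def bootstrap_samples(X, y, n_members):
    """Yield one bootstrap resample of (X, y) per ensemble member,
    so each member trains on a different draw of the source data."""
    n = X.size(0)
    for _ in range(n_members):
        idx = torch.randint(0, n, (n,))
        yield X[idx], y[idx]
```

Similarly, the decaying transfer parameter could be approximated by fitting a smoothing spline to the validation-error curve and shrinking the weight placed on the source network whenever the slope has flattened over the latest sub-interval. This is a hypothetical reconstruction under those assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def decayed_source_weight(val_errors, w_current=1.0, decay=0.5, flat_tol=1e-3):
    """Decay the source-network weight when the smoothed error curve
    flattens. Requires at least 4 points for the default cubic spline."""
    t = np.arange(len(val_errors), dtype=float)
    spline = UnivariateSpline(t, val_errors, s=len(val_errors))
    slope = spline.derivative()(t[-1])   # slope at the latest step
    return w_current * decay if abs(slope) < flat_tol else w_current
```

Called at the end of each training sub-interval, such a rule would gradually hand control from the source ensemble to the target task's own parameters once the transferred knowledge stops reducing error, which is one way to operationalize the negative-transfer mitigation the abstract describes.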
