Learning Downstream Task by Selectively Capturing Complementary Knowledge from Multiple Self-supervisedly Learning Pretexts

04/11/2022
by Quan Feng, et al.

Self-supervised learning (SSL), a newly emerging unsupervised representation learning paradigm, generally follows a two-stage pipeline: 1) learning invariant and discriminative representations with auto-annotated pretext task(s), and 2) transferring those representations to assist downstream task(s). The two stages are usually implemented separately, making the learned representations agnostic to the downstream tasks. Most current work is devoted to the first stage, whereas how to learn downstream tasks with limited labeled data using the already learned representations is much less studied. In particular, selectively utilizing the complementary representations from diverse pretexts for a given downstream task is both crucial and challenging. In this paper, we propose a novel solution that leverages the attention mechanism to adaptively squeeze suitable representations for the tasks. Meanwhile, resorting to information theory, we theoretically prove that gathering representations from diverse pretexts is more effective than relying on a single one. Extensive experiments validate that our scheme significantly outperforms current popular pretext-matching methods in gathering knowledge and relieving negative transfer in downstream tasks.
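To make the aggregation idea concrete, below is a minimal sketch, assuming PyTorch and a shared representation dimension across encoders, of how per-sample attention weights could select among representations produced by several frozen pretext models. The module name `PretextAttention`, the single-query scoring scheme, and the encoder names are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch (not the authors' released code) of attention-based
# aggregation over representations from multiple frozen pretext encoders.
import torch
import torch.nn as nn

class PretextAttention(nn.Module):
    """Scores K pretext representations per sample, pools, and classifies."""
    def __init__(self, rep_dim: int, num_classes: int):
        super().__init__()
        # A learnable scorer estimates how useful each pretext's
        # representation is for the downstream task at hand.
        self.score = nn.Linear(rep_dim, 1)
        self.head = nn.Linear(rep_dim, num_classes)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (batch, K, rep_dim) -- stacked outputs of K frozen encoders.
        weights = torch.softmax(self.score(reps).squeeze(-1), dim=-1)  # (batch, K)
        pooled = (weights.unsqueeze(-1) * reps).sum(dim=1)             # (batch, rep_dim)
        return self.head(pooled)

# Usage: encode a batch with each frozen pretext encoder, stack, classify.
# The encoder names below are hypothetical placeholders.
# encoders = [simclr_enc, rotation_enc, jigsaw_enc]            # frozen SSL models
# reps = torch.stack([e(x).detach() for e in encoders], dim=1)
# logits = PretextAttention(rep_dim=2048, num_classes=10)(reps)
```

The per-sample softmax weights let each example draw more heavily on whichever pretext's features suit the downstream task, which is one plausible reading of "adaptively squeezing suitable representations."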

