Distilling Localization for Self-Supervised Representation Learning

04/14/2020
by Nanxuan Zhao, et al.

For high-level visual recognition, self-supervised learning defines and makes use of proxy tasks such as colorization and visual tracking to learn a semantic representation useful for distinguishing objects. In this paper, through visualizing and diagnosing classification errors, we observe that current self-supervised models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. To address this problem, we propose a data-driven approach for learning invariance to backgrounds. It first estimates foreground saliency in images and then creates augmentations by copy-and-pasting the foreground onto a variety of backgrounds. The learning follows an instance discrimination approach which encourages the features of augmentations from the same image to be similar. In this way, the representation is trained to disregard background content and focus on the foreground. We study a variety of saliency estimation methods, and find that most methods lead to improvements for self-supervised learning. With this approach, strong performance is achieved for self-supervised learning on ImageNet classification, and also for transfer learning to object detection on PASCAL VOC 2007.
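The core augmentation described above can be sketched as a simple compositing step: threshold an estimated saliency map into a foreground mask, then blend the foreground onto different backgrounds to create multiple views of the same instance. The following is a minimal illustration with NumPy, not the paper's implementation; the function name, the hard threshold, and the toy inputs are all assumptions for the sake of the sketch.

```python
import numpy as np

def composite_foreground(image, saliency, background, threshold=0.5):
    """Paste the salient foreground of `image` onto `background`.

    image, background: float arrays of shape (H, W, 3)
    saliency: float array of shape (H, W), values in [0, 1]
    """
    # Binary foreground mask, broadcast over the channel axis.
    mask = (saliency > threshold).astype(image.dtype)[..., None]
    return mask * image + (1.0 - mask) * background

# Toy example: a bright 4x4 "object" in an 8x8 scene, pasted onto
# four random backgrounds to form augmentations of one instance.
rng = np.random.default_rng(0)
image = np.zeros((8, 8, 3))
image[2:6, 2:6] = 1.0
saliency = np.zeros((8, 8))
saliency[2:6, 2:6] = 1.0
backgrounds = rng.random((4, 8, 8, 3))
views = [composite_foreground(image, saliency, bg) for bg in backgrounds]
```

Under instance discrimination, the features of all `views` would be pulled together, so the only content they share, the foreground object, is what the representation learns to encode.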

