A Survey on Self-Supervised Learning Approaches for Improving Multimodal Representation Learning

10/20/2022
by Naman Goyal, et al.

Recently, self-supervised learning has seen explosive growth and use in a variety of machine learning tasks because of its ability to avoid the cost of annotating large-scale datasets. This paper gives an overview of the best self-supervised learning approaches for multimodal learning. The presented approaches were aggregated through an extensive study of the literature and apply self-supervised learning in different ways. The approaches discussed are cross-modal generation, cross-modal pretraining, cyclic translation, and generating unimodal labels in a self-supervised fashion.
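To make the cyclic-translation idea concrete, here is a minimal sketch (not taken from the survey): translate modality A to modality B and back, and use the round-trip reconstruction error as the self-supervised signal. The linear maps `W_ab` and `W_ba` are hypothetical stand-ins for the translation networks such methods actually train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "translators" between two modalities
# (stand-ins for the translation networks in cyclic-translation methods).
d_a, d_b = 3, 4
W_ab = rng.normal(size=(d_b, d_a))  # modality A -> modality B
W_ba = rng.normal(size=(d_a, d_b))  # modality B -> modality A

def cycle_consistency_loss(x_a, W_ab, W_ba):
    """Translate A -> B -> A and score the reconstruction error.

    The round-trip error is the self-supervised signal: no labels
    for modality B are needed, only samples from modality A.
    """
    x_b = W_ab @ x_a        # forward translation
    x_a_hat = W_ba @ x_b    # back translation
    return float(np.mean((x_a_hat - x_a) ** 2))

x_a = rng.normal(size=d_a)
loss = cycle_consistency_loss(x_a, W_ab, W_ba)

# An ideal back-translator (here the pseudo-inverse of W_ab, which has
# full column rank) makes the round trip exact and drives the loss to ~0.
loss_perfect = cycle_consistency_loss(x_a, W_ab, np.linalg.pinv(W_ab))
```

In practice both translators are neural networks trained jointly to minimize this loss (often alongside a task loss); the linear toy above only illustrates the objective.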

Related research

- Cross and Learn: Cross-Modal Self-Supervision (11/09/2018)
  In this paper we present a self-supervised method for representation lea...

- Survey on Self-Supervised Multimodal Representation Learning and Foundation Models (11/29/2022)
  Deep learning has been the subject of growing interest in recent years. ...

- Matching Multiple Perspectives for Efficient Representation Learning (08/16/2022)
  Representation learning approaches typically rely on images of objects c...

- Learning rich touch representations through cross-modal self-supervision (01/21/2021)
  The sense of touch is fundamental in several manipulation tasks, but rar...

- Self-supervised Multi-view Clustering in Computer Vision: A Survey (09/18/2023)
  Multi-view clustering (MVC) has had significant implications in cross-mo...

- Self-supervised learning for joint SAR and multispectral land cover classification (08/20/2021)
  Self-supervised learning techniques are gaining popularity due to their ...

- A Computational Acquisition Model for Multimodal Word Categorization (05/12/2022)
  Recent advances in self-supervised modeling of text and images open new ...
