CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations

09/30/2021
by Mohammadreza Zolfaghari et al.

Contrastive learning allows us to flexibly define powerful losses by contrasting positive pairs with sets of negative samples. Recently, the principle has also been used to learn cross-modal embeddings for video and text, yet without exploiting its full potential. In particular, previous losses do not take intra-modality similarities into account, which leads to inefficient embeddings, as the same content is mapped to multiple points in the embedding space. With CrossCLR, we present a contrastive loss that fixes this issue. Moreover, we define sets of highly related samples in terms of their input embeddings and exclude them from the negative samples, to avoid issues with false negatives. We show that these principles consistently improve the quality of the learned embeddings. The joint embeddings learned with CrossCLR extend the state of the art in video-text retrieval on the Youcook2 and LSMDC datasets and in video captioning on the Youcook2 dataset by a large margin. We also demonstrate the generality of the concept by learning improved joint embeddings for other pairs of modalities.
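The two ideas in the abstract, adding intra-modality negatives to the contrastive denominator and pruning negatives that are too similar to the anchor, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the temperature, the pruning threshold, and the use of embedding-space cosine similarity as a stand-in for input-embedding similarity are all illustrative assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two plain-Python vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def crossclr_style_loss(video, text, tau=0.1, prune=0.9):
    """Sketch of a CrossCLR-style loss (names and defaults are illustrative).

    For each video anchor i, the positive is text[i]. Negatives are the
    other texts (inter-modality) AND the other videos (intra-modality).
    Negatives whose similarity to the anchor exceeds `prune` are excluded,
    approximating the paper's removal of likely false negatives.
    """
    n = len(video)
    total = 0.0
    for i in range(n):
        pos = math.exp(cosine(video[i], text[i]) / tau)
        denom = pos
        for j in range(n):
            if j == i:
                continue
            # Inter-modality negative: another caption, unless it is
            # nearly identical to the anchor's caption (likely false negative).
            if cosine(text[i], text[j]) < prune:
                denom += math.exp(cosine(video[i], text[j]) / tau)
            # Intra-modality negative: another video clip, same pruning rule.
            if cosine(video[i], video[j]) < prune:
                denom += math.exp(cosine(video[i], video[j]) / tau)
        total += -math.log(pos / denom)
    return total / n
```

As a sanity check, correctly paired video/text embeddings should yield a much lower loss than shuffled pairs, since the positive term then dominates the denominator.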


Related research

08/09/2019 · Fine-Grained Action Retrieval Through Multiple Parts-of-Speech Embeddings
We address the problem of cross-modal fine-grained action retrieval betw...

05/07/2020 · COBRA: Contrastive Bi-Modal Representation Algorithm
There are a wide range of applications that involve multi-modal data, su...

03/09/2023 · Improving Video Retrieval by Adaptive Margin
Video retrieval is becoming increasingly important owing to the rapid em...

10/06/2020 · Support-set bottlenecks for video-text representation learning
The dominant paradigm for learning video-text representations – noise co...

10/09/2022 · ConTra: (Con)text (Tra)nsformer for Cross-Modal Video Retrieval
In this paper, we re-examine the task of cross-modal clip-sentence retri...

11/10/2021 · SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval
We tackle the cross-modal retrieval problem, where the training is only ...

04/25/2023 · Sample-Specific Debiasing for Better Image-Text Models
Self-supervised representation learning on image-text data facilitates c...
