Three Towers: Flexible Contrastive Learning with Pretrained Image Models

05/26/2023
by Jannik Kossen, et al.

We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch, LiT (Zhai et al., 2022) has recently shown performance gains from using pretrained classifier embeddings. However, LiT directly replaces the image tower with the frozen embeddings, excluding any potential benefits of contrastively training the image tower. With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training. To achieve this, we introduce a third tower that contains the frozen pretrained embeddings, and we encourage alignment between this third tower and the main image-text towers. Empirically, 3T consistently improves over LiT and the CLIP-style from-scratch baseline for retrieval tasks. For classification, 3T reliably improves over the from-scratch baseline, and while it underperforms relative to LiT for JFT-pretrained models, it outperforms LiT for ImageNet-21k and Places365 pretraining.
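To make the idea concrete, below is a minimal sketch of how a 3T-style training objective might combine the usual CLIP-style image-text contrastive loss with alignment terms against a frozen third tower. This is an illustrative assumption, not the paper's implementation: the function names (infonce, three_towers_loss), the symmetric InfoNCE form, the equal loss weighting, and the single shared temperature are all choices made here for brevity; the paper's exact alignment losses, projection heads, and hyperparameters may differ.

```python
# Hypothetical sketch of a Three Towers (3T)-style loss in PyTorch.
import torch
import torch.nn.functional as F

def infonce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                    # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)  # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def three_towers_loss(img_emb, txt_emb, frozen_emb, temperature=0.07):
    """Combine the main image-text loss with alignment to the frozen third tower.

    img_emb, txt_emb: outputs of the trainable image and text towers.
    frozen_emb: embeddings from the frozen pretrained third tower
    (assumed to be precomputed or produced under torch.no_grad()).
    Equal weighting of the three terms is an assumption for illustration.
    """
    loss_main = infonce(img_emb, txt_emb, temperature)          # CLIP-style image-text loss
    loss_align_img = infonce(img_emb, frozen_emb, temperature)  # image tower <-> third tower
    loss_align_txt = infonce(txt_emb, frozen_emb, temperature)  # text tower <-> third tower
    return loss_main + loss_align_img + loss_align_txt
```

In practice, the frozen tower's embeddings would be detached from the computation graph, so only the image and text towers receive gradients from all three terms.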


Related research

06/09/2021 · Sentence Embeddings using Supervised Contrastive Learning
Sentence embeddings encode sentences in fixed dense vectors and have pla...

10/20/2021 · Contrastive Document Representation Learning with Graph Attention Networks
Recent progress in pretrained Transformer-based language models has show...

05/09/2023 · Boosting Visual-Language Models by Exploiting Hard Samples
Large vision and language models, such as Contrastive Language-Image Pre...

06/05/2022 · Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval
Multi-channel video-language retrieval requires models to understand info...

08/24/2022 · Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models
Self-supervised contrastive learning based pretraining allows developmen...

02/17/2022 · When, Why, and Which Pretrained GANs Are Useful?
The literature has proposed several methods to finetune pretrained GANs ...

06/27/2023 · Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?
For Pretrained Language Models (PLMs), their susceptibility to noise has...
