Improved Visual Fine-tuning with Natural Language Supervision

04/04/2023
by Junyang Wang, et al.

Fine-tuning a pre-trained model can leverage the semantic information from large-scale pre-training data and mitigate the over-fitting problem on downstream tasks with limited training examples. While catastrophic forgetting in the backbone has been extensively studied, the potential bias in a pre-trained model, introduced by the corresponding pre-training task and data, has attracted less attention. In this work, we investigate this problem by demonstrating that the classifier obtained after fine-tuning stays close to the one induced by the pre-trained model. To reduce this bias effectively, we introduce a reference distribution obtained from a fixed text classifier, which helps regularize the learned vision classifier. The proposed method, Text Supervised fine-tuning (TeS), is evaluated with diverse pre-trained vision models, including ResNet and ViT, and text encoders, including BERT and CLIP, on 11 downstream tasks. The consistent improvement by a clear margin across distinct scenarios confirms the effectiveness of our proposal.
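The core idea of the abstract — regularizing the learned vision classifier toward a reference distribution produced by a fixed text classifier — can be illustrated with a minimal sketch. This is not the authors' implementation: the loss shape (cross-entropy plus a KL term toward the text-derived distribution), the function names, and the trade-off weight `lam` are all assumptions made for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def text_supervised_loss(vision_logits, text_logits, label, lam=0.5):
    """Hypothetical sketch of a TeS-style objective: cross-entropy on the
    vision classifier's prediction, plus a KL term pulling the vision
    distribution toward the reference distribution from a fixed text
    classifier. `lam` is an assumed trade-off weight, not from the paper."""
    p = softmax(vision_logits)   # distribution from the vision classifier
    q = softmax(text_logits)     # reference distribution from the frozen text classifier
    ce = -math.log(p[label])     # cross-entropy with the ground-truth label
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))  # KL(q || p)
    return ce + lam * kl
```

When the vision distribution already matches the text-derived reference, the KL term vanishes and the objective reduces to plain cross-entropy; otherwise the extra term penalizes drift away from the text classifier's prior over the classes.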


