Fine-tuning can cripple your foundation model; preserving features may be the solution

08/25/2023
by Jishnu Mukhoti, et al.

Pre-trained foundation models, owing primarily to their enormous capacity and their exposure to vast amounts of training data scraped from the internet, store knowledge about a great many real-world concepts. Such models are typically fine-tuned on downstream datasets to achieve remarkable state-of-the-art performance. While various fine-tuning methods have been devised and shown to be highly effective, we observe that a fine-tuned model's ability to recognize concepts on tasks other than the downstream one is significantly reduced compared to its pre-trained counterpart. This is clearly undesirable, as a huge amount of time and money went into learning those very concepts in the first place. We call this undesirable phenomenon "concept forgetting" and show via experiments that most end-to-end fine-tuning approaches suffer heavily from this side effect. To address this, we propose a simple fix: a method called LDIFS (short for ℓ_2 distance in feature space) that preserves the features of the original foundation model during fine-tuning. We show that LDIFS significantly reduces concept forgetting without a noticeable impact on downstream task performance.
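As a rough illustration of what such a feature-space penalty can look like, here is a minimal PyTorch-style sketch: the downstream task loss is combined with the squared ℓ_2 distance between the features of the model being fine-tuned and those of a frozen copy of the pre-trained model. The names encode, head, and the weighting factor lam are illustrative assumptions rather than the paper's API, and the paper's exact LDIFS formulation (for instance, which layers' features are compared) may differ.

```python
import copy
import torch
import torch.nn.functional as F

def make_frozen_copy(model):
    """Keep an untouched copy of the pre-trained model to compare features against."""
    frozen = copy.deepcopy(model).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)
    return frozen

def ldifs_style_loss(model, frozen_model, images, labels, lam=1.0):
    """Downstream task loss plus an l2 penalty that keeps the fine-tuned
    features close to the pre-trained features on the same batch.
    (Illustrative sketch; `encode`, `head`, and `lam` are assumed names.)"""
    feats = model.encode(images)           # features of the model being fine-tuned
    logits = model.head(feats)             # task head on top of those features
    task_loss = F.cross_entropy(logits, labels)

    with torch.no_grad():                  # the pre-trained copy is never updated
        frozen_feats = frozen_model.encode(images)

    # Squared l2 distance in feature space, averaged over the batch
    feat_loss = ((feats - frozen_feats) ** 2).sum(dim=-1).mean()
    return task_loss + lam * feat_loss
```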


