Distribution-Aware Prompt Tuning for Vision-Language Models

09/06/2023
by Eulrang Cho, et al.

Pre-trained vision-language models (VLMs) have shown impressive performance on various downstream tasks by utilizing knowledge learned from large-scale data. In general, the performance of VLMs on target tasks can be further improved by prompt tuning, which adds context to the input image or text. By leveraging data from target tasks, various prompt-tuning methods have been studied in the literature. A key to prompt tuning is the feature-space alignment between the two modalities via learnable vectors while the model parameters remain fixed. We observe that the alignment becomes more effective when the embeddings of each modality are 'well-arranged' in the latent space. Inspired by this observation, we propose distribution-aware prompt tuning (DAPT) for vision-language models, which is simple yet effective. Specifically, the prompts are learned by maximizing inter-dispersion, the distance between classes, while minimizing intra-dispersion, measured by the distance between embeddings from the same class. Our extensive experiments on 11 benchmark datasets demonstrate that our method significantly improves generalizability. The code is available at https://github.com/mlvlab/DAPT.
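The two dispersion terms can be illustrated with a short sketch. The snippet below is a minimal, illustrative implementation of intra- and inter-dispersion over embeddings with class labels, not the authors' released DAPT code; the function name `dispersion_losses` and the coefficients `beta_intra` / `beta_inter` are hypothetical, and the sketch assumes PyTorch with L2-normalized features.

```python
import torch
import torch.nn.functional as F

def dispersion_losses(embeddings, labels):
    """Illustrative intra-/inter-dispersion terms (hypothetical helper).

    embeddings: (N, D) image or text features
    labels:     (N,)   integer class indices
    """
    embeddings = F.normalize(embeddings, dim=-1)
    classes = labels.unique()

    # Class prototypes: mean embedding of each class, re-normalized.
    prototypes = torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in classes]
    )
    prototypes = F.normalize(prototypes, dim=-1)

    # Intra-dispersion: average distance of samples to their own class
    # prototype (to be minimized).
    intra = torch.stack(
        [(embeddings[labels == c] - prototypes[i]).norm(dim=-1).mean()
         for i, c in enumerate(classes)]
    ).mean()

    # Inter-dispersion: average pairwise distance between class prototypes
    # (to be maximized, i.e., subtracted from the loss).
    k = len(classes)
    pdist = torch.cdist(prototypes, prototypes)
    inter = pdist.sum() / (k * (k - 1)) if k > 1 else pdist.new_tensor(0.0)

    return intra, inter

# A combined prompt-tuning objective might then look like
#   loss = task_loss + beta_intra * intra - beta_inter * inter
# where beta_intra and beta_inter are assumed weighting hyperparameters.
```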


