MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning

by Julien Nicolas, et al.

Despite recent progress in incremental learning, addressing catastrophic forgetting under distributional drift remains an open and important problem. Indeed, while state-of-the-art domain incremental learning (DIL) methods perform satisfactorily within known domains, their performance degrades substantially in the presence of novel domains. This limitation hampers their generalizability and restricts their scalability to more realistic settings where training and test data are drawn from different distributions. To address these limitations, we present a novel DIL approach based on a mixture of prompt-tuned CLIP models (MoP-CLIP), which generalizes the paradigm of S-Prompting to handle both in-distribution and out-of-distribution data at inference. In particular, at the training stage we model the feature distribution of every class in each domain, learning individual text and visual prompts to adapt to a given domain. At inference, the learned distributions allow us to identify whether a given test sample belongs to a known domain, in which case the correct prompt is selected for the classification task, or comes from an unseen domain, in which case a mixture of the prompt-tuned CLIP models is leveraged. Our empirical evaluation reveals the poor performance of existing DIL methods under domain shift, and suggests that the proposed MoP-CLIP performs competitively in standard DIL settings while outperforming state-of-the-art methods in OOD scenarios. These results demonstrate the superiority of MoP-CLIP, offering a robust and general solution to the problem of domain incremental learning.
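The inference-time routing described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes each domain's features are summarized by a diagonal Gaussian, that in-distribution detection is a simple distance threshold, and that the out-of-distribution mixture weights come from a softmax over domain scores. All function names and the exact distance and mixing rules are assumptions for illustration.

```python
import numpy as np

def fit_domain_stats(features_per_domain):
    """Per-domain Gaussian statistics: mean and diagonal variance
    of the (e.g., CLIP image) features seen during training."""
    return [(f.mean(axis=0), f.var(axis=0) + 1e-6)
            for f in features_per_domain]

def domain_scores(x, stats):
    """Negative Mahalanobis-style distance of x to each domain's Gaussian;
    higher means x looks more like that domain."""
    return np.array([-np.sum((x - mu) ** 2 / var) for mu, var in stats])

def route(x, stats, logits_per_domain, threshold):
    """If x is close enough to a known domain, use that domain's
    prompt-tuned classifier alone; otherwise treat x as OOD and mix
    all domain classifiers with softmax weights over the scores."""
    scores = domain_scores(x, stats)
    if scores.max() > threshold:                  # in-distribution sample
        return logits_per_domain[int(scores.argmax())]
    w = np.exp(scores - scores.max())             # OOD: mixture of experts
    w /= w.sum()
    return np.tensordot(w, np.array(logits_per_domain), axes=1)
```

In this toy form, `logits_per_domain[d]` stands in for the output of CLIP adapted with domain d's learned prompts; the threshold would be calibrated on held-out training features.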


