ConStruct-VL: Data-Free Continual Structured VL Concepts Learning

11/17/2022
by James Seale Smith et al.

Recently, large-scale pre-trained Vision-and-Language (VL) foundation models have demonstrated remarkable capabilities in many zero-shot downstream tasks, achieving competitive results for recognizing objects defined by as little as short text prompts. However, it has also been shown that VL models remain brittle in Structured VL Concept (SVLC) reasoning, such as recognizing object attributes, states, and inter-object relations. This leads to reasoning mistakes, which need to be corrected as they occur by teaching VL models the missing SVLC skills; often this must be done using the private data where the issue was found, which naturally leads to a data-free continual (no task-id) VL learning setting. In this work, we introduce the first Continual Data-Free Structured VL Concepts Learning (ConStruct-VL) benchmark and show it is challenging for many existing data-free CL strategies. We therefore propose a data-free method comprising a new approach of Adversarial Pseudo-Replay (APR), which generates adversarial reminders of past tasks from past task models. To use this method efficiently, we also propose a continual parameter-efficient Layered-LoRA (LaLo) neural architecture allowing no-memory-cost access to all past models at train time. We show this approach outperforms all data-free methods by as much as ~7%, even matching some levels of experience-replay (prohibitive for applications where data privacy must be preserved).
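To make the two ideas in the abstract concrete, here is a minimal, hypothetical sketch: a linear layer with per-task low-rank adapters in the spirit of Layered-LoRA (recovering any past-task model by applying only the first k adapters, with no stored model copies), and an FGSM-style perturbation in the spirit of Adversarial Pseudo-Replay (perturbing current inputs to raise the past-task model's loss, yielding "reminders" of a past task without its data). All class names, shapes, and the squared-error loss are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

class LayeredLoRALinear:
    """Sketch of the Layered-LoRA (LaLo) idea: each task t stacks a
    low-rank update B_t @ A_t on a frozen base weight, so the model as it
    existed after any past task k is recoverable by applying only the
    first k adapters (names and shapes here are assumptions)."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.standard_normal((d_out, d_in)) * 0.02  # frozen base
        self.rank = rank
        self.adapters = []  # one (A, B) pair per task

    def add_task_adapter(self):
        d_out, d_in = self.W.shape
        A = self.rng.standard_normal((self.rank, d_in)) * 0.02
        B = np.zeros((d_out, self.rank))  # zero-init: new task starts unchanged
        self.adapters.append((A, B))

    def effective_weight(self, upto_task=None):
        """Base weight plus the first `upto_task` adapters (all by default)."""
        k = len(self.adapters) if upto_task is None else upto_task
        W_eff = self.W.copy()
        for A, B in self.adapters[:k]:
            W_eff = W_eff + B @ A
        return W_eff

    def forward(self, x, upto_task=None):
        return x @ self.effective_weight(upto_task).T


def adversarial_pseudo_replay(layer, x, y, upto_task, eps=0.1):
    """FGSM-style sketch of Adversarial Pseudo-Replay (APR): perturb the
    current inputs to increase the *past-task* model's squared error,
    producing adversarial reminders of task `upto_task` with no stored data."""
    W_past = layer.effective_weight(upto_task)
    err = x @ W_past.T - y            # residual of the past-task model
    grad_x = err @ W_past             # d/dx of 0.5 * ||x W^T - y||^2
    return x + eps * np.sign(grad_x)  # ascend the past model's loss
```

Because each new adapter's B matrix is zero-initialized, adding a task leaves the model's outputs unchanged until that task is trained; and since an adapter stores only rank * (d_in + d_out) parameters, keeping every past model accessible costs no extra full-weight copies.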


Related research

- 03/12/2023, Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models: Continual learning (CL) can help pre-trained vision-language models effi...
- 11/21/2022, Teaching Structured Vision Language Concepts to Vision Language Models: Vision and Language (VL) models have demonstrated remarkable zero-shot p...
- 04/12/2023, Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA: Recent works demonstrate a remarkable ability to customize text-to-image...
- 10/31/2022, Generative Negative Text Replay for Continual Vision-Language Pretraining: Vision-language pre-training (VLP) has attracted increasing attention re...
- 06/10/2019, Generative Continual Concept Learning: After learning a concept, humans are also able to continually generalize...
- 03/19/2021, Online Lifelong Generalized Zero-Shot Learning: Methods proposed in the literature for zero-shot learning (ZSL) are typi...
