Online Lifelong Generalized Zero-Shot Learning

03/19/2021
by Chandan Gautam et al.

Methods proposed in the literature for zero-shot learning (ZSL) are typically suited to offline learning and cannot continually learn from sequentially streaming data, which arrives in the form of tasks during training. Recently, a few attempts have been made to address this issue and develop continual ZSL (CZSL) methods. However, these CZSL methods require clear task-boundary information between tasks during training, which is rarely available in practice. This paper proposes a task-free (i.e., task-agnostic) CZSL method that does not require any task information during continual learning. The proposed method employs a variational autoencoder (VAE) for performing ZSL. To develop the CZSL method, we combine experience replay with knowledge distillation and regularization. Here, knowledge distillation is performed using the training samples' dark knowledge, which essentially helps overcome catastrophic forgetting. Task-free learning is further enabled through a short-term memory. Finally, a classifier is trained on synthetic features generated in the latent space of the VAE. Moreover, the experiments are conducted in a challenging and practical ZSL setup, i.e., generalized ZSL (GZSL), under two kinds of single-head continual learning settings: (i) mild setting: task boundaries are known during training but not during testing; (ii) strict setting: task boundaries are known neither during training nor during testing. Experimental results on five benchmark datasets demonstrate the validity of the approach for CZSL.
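The two continual-learning ingredients named in the abstract — knowledge distillation on soft "dark knowledge" targets and a short-term replay memory that needs no task boundaries — can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation: the class and function names, the temperature value, and the use of reservoir sampling for the boundary-free buffer are all assumptions for the example.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature exposes the
    # "dark knowledge" in the small probabilities of non-target classes.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between softened teacher and student distributions;
    # minimizing this keeps the student close to past knowledge.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

class ShortTermMemory:
    # Fixed-size replay buffer filled by reservoir sampling, so it
    # requires no task-boundary information (hypothetical design choice).
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Keep each seen sample with probability capacity/seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def replay(self, k):
        # Draw a mini-batch of stored samples to mix into training.
        k = min(k, len(self.buffer))
        return self.rng.sample(self.buffer, k)
```

In a training loop, each incoming sample would be pushed into the memory while a replayed mini-batch, scored against the previous model's logits via `distillation_loss`, regularizes the current update.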

Related research

01/22/2021 · Generative Replay-based Continual Zero-Shot Learning
Zero-shot learning is a new paradigm to classify objects from classes th...

11/17/2020 · Generalized Continual Zero-Shot Learning
Recently, zero-shot learning (ZSL) emerged as an exciting topic and attr...

02/07/2021 · Adversarial Training of Variational Auto-encoders for Continual Zero-shot Learning
Most of the existing artificial neural networks (ANNs) fail to learn cont...

09/12/2022 · Online Continual Learning via the Meta-learning Update with Multi-scale Knowledge Distillation and Data Augmentation
Continual learning aims to rapidly and continually learn the current tas...

12/10/2018 · Task-Free Continual Learning
Methods proposed in the literature towards continual deep learning typic...

03/28/2023 · Projected Latent Distillation for Data-Agnostic Consolidation in Distributed Continual Learning
Distributed learning on the edge often comprises self-centered devices (...

11/17/2022 · ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
Recently, large-scale pre-trained Vision-and-Language (VL) foundation mo...
