False Memory Formation in Continual Learners Through Imperceptible Backdoor Trigger

02/09/2022
by Muhammad Umer, et al.

In this brief, we show that sequentially learning new information presented to a continual (incremental) learning model introduces new security risks: an intelligent adversary can introduce a small amount of misinformation into the model during training to cause deliberate forgetting of a specific task or class at test time, thus creating a "false memory" about that task. We demonstrate such an adversary's ability to assume control of the model by injecting "backdoor" attack samples into commonly used generative replay and regularization-based continual learning approaches, using continual learning benchmark variants of MNIST as well as the more challenging SVHN and CIFAR-10 datasets. Perhaps most damaging, we show this vulnerability to be both acute and exceptionally effective: the backdoor pattern in our attack model can be imperceptible to the human eye, can be provided at any point in time, can be added to the training data of even a single, possibly unrelated task, and can be achieved with as little as 1% of the total training dataset of a single task.
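The poisoning step the abstract describes (a low-amplitude trigger stamped onto roughly 1% of a single task's training samples, with those samples relabeled to a target class) can be illustrated with a minimal sketch. This is not the authors' code: the epsilon budget, the MNIST-like image shapes, the target label, and the function names `make_trigger` and `poison_task` are all illustrative assumptions.

```python
# Minimal sketch of imperceptible backdoor injection into one task's
# training split. Assumptions (not from the paper): epsilon = 4/255,
# 28x28 images scaled to [0, 1], a 1% poisoning rate, target label 0.
import numpy as np

def make_trigger(shape=(28, 28), epsilon=4 / 255, seed=0):
    """Fixed additive pattern bounded by +/-epsilon, so it is
    visually imperceptible when added to a [0, 1]-scaled image."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-epsilon, epsilon, size=shape).astype(np.float32)

def poison_task(images, labels, trigger, target_label, rate=0.01, seed=0):
    """Stamp the trigger onto `rate` of the samples and relabel them.

    images: float32 array in [0, 1], shape (N, H, W)
    labels: int array, shape (N,)
    Returns copies with ~rate of the samples backdoored.
    """
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    poisoned_x = images.copy()
    poisoned_y = labels.copy()
    poisoned_x[idx] = np.clip(poisoned_x[idx] + trigger, 0.0, 1.0)
    poisoned_y[idx] = target_label  # the "misinformation": wrong label tied to the trigger
    return poisoned_x, poisoned_y

if __name__ == "__main__":
    # Stand-in for one task's training split (e.g. two classes of split MNIST).
    x = np.random.rand(5000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 2, size=5000)

    trigger = make_trigger()
    px, py = poison_task(x, y, trigger, target_label=0, rate=0.01)
    print("relabeled samples:", int((py != y).sum()))  # roughly 1% of 5000
```

At test time, the same fixed trigger added to a clean input of the targeted class would steer the continually trained model toward the adversary's target label, which is the "false memory" effect the paper measures.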

Related research

02/16/2021 · Adversarial Targeted Forgetting in Regularization and Generative Based Continual Learning Models
Continual (or "incremental") learning approaches are employed when addit...

02/17/2020 · Targeted Forgetting and False Memory Formation in Continual Learners through Adversarial Backdoor Attacks
Artificial neural networks are well-known to be susceptible to catastrop...

11/29/2022 · Training Time Adversarial Attack Aiming the Vulnerability of Continual Learning
Generally, regularization-based continual learning models limit access t...

05/28/2023 · Backdoor Attacks Against Incremental Learners: An Empirical Evaluation Study
Large amounts of incremental learning algorithms have been proposed to a...

06/12/2020 · Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition
In many real-life tasks of application of supervised learning approaches...

08/17/2023 · A Fusion of Variational Distribution Priors and Saliency Map Replay for Continual 3D Reconstruction
Single-image 3D reconstruction is a research challenge focused on predic...

03/30/2023 · Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
In this work, we present a data poisoning attack that confounds machine ...
