Backdoor Attacks on Time Series: A Generative Approach

11/15/2022
by Yujing Jiang, et al.

Backdoor attacks have emerged as one of the major security threats to deep learning models, as they can easily control a model's test-time predictions by injecting a backdoor trigger into the model at training time. While backdoor attacks have been extensively studied on images, few works have investigated the threat of backdoor attacks on time series data. To fill this gap, in this paper we present a novel generative approach for time series backdoor attacks against deep learning based time series classifiers. Backdoor attacks have two main goals: high stealthiness and high attack success rate. We find that, compared to images, these two goals can be harder to achieve on time series. This is because time series have fewer input dimensions and lower degrees of freedom, making it difficult to attain a high attack success rate without compromising stealthiness. Our generative approach addresses this challenge by generating trigger patterns that are as realistic as real time series patterns, while achieving a high attack success rate without a significant drop in clean accuracy. We also show that the proposed attack is resistant to potential backdoor defenses. Furthermore, we propose a novel universal generator that can poison any type of time series with a single model, enabling universal attacks without fine-tuning the generative model on new time series datasets.
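To make the attack setting concrete, below is a minimal sketch of a generative time series backdoor: a small generator network produces an input-conditioned trigger that is added to a clean series, and the triggered sample is relabeled with the attacker's target class during training. This is not the authors' implementation; the names (TriggerGenerator, poison_batch), the architecture, the trigger scale, and the poison rate are illustrative assumptions only.

```python
# Illustrative sketch only (PyTorch); hypothetical names and hyperparameters.
import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    """Hypothetical 1-D convolutional generator producing a bounded, input-conditioned trigger."""
    def __init__(self, channels: int = 1, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),
            nn.Tanh(),  # keeps the raw trigger in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length); 0.1 is an assumed trigger-amplitude scale
        return x + 0.1 * self.net(x)

def poison_batch(x, y, generator, target_class, poison_rate=0.1):
    """Replace a small fraction of a training batch with triggered samples labeled target_class."""
    n_poison = max(1, int(poison_rate * x.size(0)))
    idx = torch.randperm(x.size(0))[:n_poison]
    x, y = x.clone(), y.clone()
    x[idx] = generator(x[idx]).detach()
    y[idx] = target_class
    return x, y

# Usage: the victim classifier is trained on mostly clean data plus these
# triggered samples; at test time, applying the generator to any input is
# intended to flip the prediction to target_class.
gen = TriggerGenerator(channels=1)
x = torch.randn(16, 1, 128)        # toy batch of univariate series
y = torch.randint(0, 5, (16,))
x_poisoned, y_poisoned = poison_batch(x, y, gen, target_class=0)
```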


