TSFool: Crafting High-quality Adversarial Time Series through Multi-objective Optimization to Fool Recurrent Neural Network Classifiers

09/14/2022
by Yanyun Wang, et al.

Deep neural network (DNN) classifiers are vulnerable to adversarial attacks. Although existing gradient-based attacks perform well on feed-forward models and image recognition tasks, extending them to time series classification with recurrent neural networks (RNNs) remains difficult: the cyclical structure of an RNN prevents direct model differentiation, and the visual sensitivity of time series data to perturbations challenges the traditional local optimization objective of minimizing perturbation. In this paper, we propose TSFool, an efficient and widely applicable approach for crafting high-quality adversarial time series against RNN classifiers. We introduce a novel global optimization objective named the Camouflage Coefficient, which measures how well adversarial samples hide within class clusters, and accordingly redefine the high-quality adversarial attack as a multi-objective optimization problem. We also propose using an intervalized weighted finite automaton (IWFA) to capture deeply embedded vulnerable samples, whose features diverge from the latent manifold, to guide the approximation to the optimization solution. Experiments on 22 UCR datasets confirm that TSFool is a widely effective, efficient, and high-quality approach, achieving a 93.22x speedup over existing methods.
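The abstract does not spell out how the Camouflage Coefficient is computed. As a rough, hypothetical illustration of the underlying idea (scoring how well a perturbed sample hides inside class clusters), the sketch below compares an adversarial sample's distance to the centroid of the class it is misclassified into against its distance to the centroid of its true class. The function name, the centroid-ratio formula, and all variables are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def camouflage_coefficient(x_adv, X_target_class, X_original_class):
    """Hypothetical cluster-based camouflage score (not the paper's formula).

    Compares how close an adversarial sample sits to the centroid of the
    class it is misclassified into versus the centroid of its true class.
    Values below 1 mean the sample hides better inside the target cluster.
    """
    c_target = X_target_class.mean(axis=0)      # centroid of the predicted (wrong) class
    c_original = X_original_class.mean(axis=0)  # centroid of the true class
    d_target = np.linalg.norm(x_adv - c_target)
    d_original = np.linalg.norm(x_adv - c_original)
    # Ratio < 1: the adversarial series looks more like the target class
    # than its own class, i.e. it "camouflages" well.
    return d_target / (d_original + 1e-12)

# Toy usage with random vectors standing in for flattened time series.
rng = np.random.default_rng(0)
X_target = rng.normal(0.0, 1.0, size=(50, 100))  # samples of the misclassified-into class
X_orig = rng.normal(3.0, 1.0, size=(50, 100))    # samples of the true class
x_adv = rng.normal(0.5, 1.0, size=100)           # a perturbed time series
print(camouflage_coefficient(x_adv, X_target, X_orig))
```

In the multi-objective framing the paper describes, a global camouflage score of this kind would be optimized jointly with the usual local objective of keeping the perturbation small, rather than replacing it.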

Related research

01/09/2023
On the Susceptibility and Robustness of Time Series Models through Adversarial Attack and Defense
Under adversarial attacks, time series regression and classification are...

10/15/2019
Improving Robustness of time series classifier with Neural ODE guided gradient based data augmentation
Exploring adversarial attack vectors and studying their effects on machi...

07/13/2023
Multi-objective Evolutionary Search of Variable-length Composite Semantic Perturbations
Deep neural networks have proven to be vulnerable to adversarial attacks...

03/31/2020
Adversarial Attacks on Multivariate Time Series
Classification models for the multivariate time series have gained signi...

02/27/2019
Adversarial Attacks on Time Series
Time series classification models have been garnering significant import...

11/14/2018
Verification of Recurrent Neural Networks Through Rule Extraction
The verification problem for neural networks is verifying whether a neur...
