Mitigating the Impact of Adversarial Attacks in Very Deep Networks

12/08/2020
by Mohammed Hassanin, et al.

Deep Neural Network (DNN) models are vulnerable to security attacks, with adversaries often employing sophisticated techniques to probe and exploit their structures. Data poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models. They degrade a model's accuracy and convergence rate, harming the learning process, and deeper networks offer no inherent protection against them. In this paper, we propose an attack-agnostic defense method for mitigating their influence. In it, a Defensive Feature Layer (DFL) is integrated into a well-known DNN architecture to neutralize the effects of illegitimate perturbation samples in the feature space. To further improve the robustness and trustworthiness of this method in correctly classifying attacked input samples, we regularize the hidden space of the trained model with a discriminative loss function called Polarized Contrastive Loss (PCL), which increases the discrimination among samples of different classes while preserving the resemblance of those in the same class. We then integrate the DFL and PCL in a compact model for defending against data poisoning attacks. The method is trained and tested on the CIFAR-10 and MNIST datasets under data poisoning-enabled perturbation attacks, with the experimental results showing excellent performance compared with recent peer techniques.
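The abstract does not reproduce the PCL formulation, but its stated goals (tighter intra-class clusters and wider inter-class separation in the feature space) map onto a standard margin-based contrastive regularizer. Below is a minimal PyTorch sketch of such a loss; the pairwise Euclidean formulation, the margin hyperparameter, and the name contrastive_feature_loss are illustrative assumptions, not the paper's exact PCL.

```python
import torch
import torch.nn.functional as F

def contrastive_feature_loss(features, labels, margin=1.0):
    """Contrastive-style regularizer over a batch of feature vectors.

    Pulls same-class pairs together and pushes different-class pairs
    at least `margin` apart. The pairing scheme and margin are
    illustrative assumptions, not the paper's exact PCL.
    """
    # Pairwise Euclidean distances between all feature vectors (B x B).
    dists = torch.cdist(features, features, p=2)
    # same[i, j] is True when samples i and j share a class label.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Same-class pairs (excluding self-pairs): minimize their distance.
    pos = dists[same & ~eye]
    # Different-class pairs: penalize distances that fall below the margin.
    neg = F.relu(margin - dists[~same])

    pos_term = pos.pow(2).mean() if pos.numel() > 0 else features.new_zeros(())
    neg_term = neg.pow(2).mean() if neg.numel() > 0 else features.new_zeros(())
    return pos_term + neg_term
```

In training, a term like this would typically be added to the usual classification objective, e.g. loss = F.cross_entropy(logits, y) + lam * contrastive_feature_loss(features, y), with features taken from the hidden layer being regularized.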


Related research

Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? (12/02/2021)
Approximate computing is known for its effectiveness in improving the ...

FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples (06/02/2022)
In the strong adversarial attacks against deep neural network (DNN), the...

A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models (12/08/2020)
Deep learning algorithms have been recently targeted by attackers due to...

Adversarial Ranking Attack and Defense (02/26/2020)
Deep Neural Network (DNN) classifiers are vulnerable to adversarial atta...

Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid (01/29/2023)
Deep Neural Networks have proven to be highly accurate at a variety of t...

Generate High-Resolution Adversarial Samples by Identifying Effective Features (01/21/2020)
As the prevalence of deep learning in computer vision, adversarial sampl...

Communication without Interception: Defense against Deep-Learning-based Modulation Detection (02/27/2019)
We consider a communication scenario, in which an intruder, employing a ...
