Adversarial Attacks and Defences: A Survey

by Anirban Chakraborty, et al.

Deep learning has emerged as a strong and efficient framework that can be applied to a broad spectrum of complex learning problems which were difficult to solve using traditional machine learning techniques in the past. In the last few years, deep learning has advanced so radically that it can surpass human-level performance on a number of tasks. As a consequence, deep learning is being extensively used in many present-day applications. However, deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye but can cause the model to misclassify its input. In recent times, adversaries with different threat models have leveraged these vulnerabilities to compromise deep learning systems in settings where they have high incentives. Hence, it is extremely important to make deep learning algorithms robust against these adversaries. However, there are only a few strong countermeasures that can be used across all types of attack scenarios to design a robust deep learning system. In this paper, we attempt to provide a detailed discussion of different types of adversarial attacks under various threat models, and also elaborate on the efficiency of, and challenges facing, recent countermeasures against them.
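The abstract's central object, an adversarial example, is a bounded perturbation of a correctly handled input that flips the model's prediction. The fast gradient sign method (FGSM) is the canonical illustration of this idea; the abstract does not name it, so the following is only a minimal sketch on a hypothetical linear classifier, not the survey's own method:

```python
import numpy as np

# Minimal FGSM-style sketch on a linear classifier score = w . x
# (illustrative only; the survey covers many attack families).
rng = np.random.default_rng(0)
w = rng.normal(size=20)          # hypothetical trained weights
x = rng.normal(size=20)          # a clean input
y = 1.0 if w @ x > 0 else 0.0    # take the clean prediction as the label

# FGSM perturbs the input by eps times the sign of the loss gradient.
# For logistic loss on a linear model, that sign is -sign(w) when y = 1
# and +sign(w) when y = 0, i.e. it always pushes the score the wrong way.
eps = 0.5
grad_sign = -np.sign(w) if y == 1.0 else np.sign(w)
x_adv = x + eps * grad_sign      # per-coordinate change bounded by eps

# The perturbation is small in the infinity norm, yet the classifier's
# margin toward the correct class shrinks.
print(np.abs(x_adv - x).max())                       # <= eps
print((2 * y - 1) * (w @ x), (2 * y - 1) * (w @ x_adv))  # margin drops
```

In high dimensions the summed effect of many tiny per-coordinate changes is large, which is why such perturbations can be imperceptible yet decisive, the vulnerability the abstract refers to.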


