Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks

04/12/2019
by David J. Miller et al.

With the wide deployment of machine learning (ML) based systems for a variety of applications, including medical, military, automotive, genomic, multimedia, and social networking, there is great potential for damage from adversarial learning (AL) attacks. In this paper, we provide a contemporary survey of AL, focused particularly on defenses against attacks on statistical classifiers. After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks, and particularly on defenses against these attacks. In doing so, we distinguish robust classification from anomaly detection (AD), unsupervised from supervised defenses, and statistical hypothesis-based defenses from those without an explicit null (no attack) hypothesis. For each method, we identify the hyperparameters it requires, its computational complexity, the performance measures on which it was evaluated, and the quality obtained. We then dig deeper, providing novel insights that challenge conventional AL wisdom and that target unresolved issues, including:

1) robust classification versus AD as a defense strategy;
2) the belief that attack success increases with attack strength, which ignores susceptibility to AD;
3) small perturbations for test-time evasion attacks: a fallacy or a requirement?;
4) the validity of the universal assumption that a TTE attacker knows the ground-truth class of the example to be attacked;
5) black-, grey-, or white-box attacks as the standard for defense evaluation;
6) the susceptibility of query-based RE to an AD defense.

We then present benchmark comparisons of several defenses against TTE, RE, and backdoor DP attacks on images. The paper concludes with a discussion of future work.
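To make the tension in items 2) and 3) concrete, below is a minimal, hypothetical sketch, not any specific method from the survey: a one-step FGSM-style TTE attack on a toy two-class classifier, paired with a simple Mahalanobis-distance anomaly detector whose null-hypothesis threshold is calibrated on clean data. The toy model, synthetic data, and 95th-percentile threshold are all illustrative assumptions.

```python
# Minimal illustrative sketch (assumptions throughout): an FGSM-style
# test-time evasion (TTE) attack on a toy classifier, plus a simple
# Mahalanobis-distance anomaly detector (AD) calibrated on clean data.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in classifier: 20-dimensional inputs, 2 classes.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

def fgsm_attack(x, y, eps):
    """One-step L-infinity evasion: move x in the direction that
    increases the classifier's loss on label y."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

# Synthetic "clean" data; the classifier's own decisions stand in for
# the ground-truth labels the attacker is usually assumed to know
# (cf. item 4 above).
x_clean = torch.randn(200, 20)
y_true = model(x_clean).argmax(dim=1)

# AD defense: Mahalanobis distance to a Gaussian fit on clean inputs,
# with a detection threshold set for a 5% false-positive rate.
mu = x_clean.mean(dim=0)
prec = torch.linalg.inv(torch.cov(x_clean.T) + 1e-3 * torch.eye(20))

def detect_stat(x):
    d = x - mu
    return ((d @ prec) * d).sum(dim=1)

threshold = detect_stat(x_clean).quantile(0.95)

for eps in (0.05, 0.5, 5.0):  # weak -> strong attack
    x_adv = fgsm_attack(x_clean, y_true, eps)
    evaded = (model(x_adv).argmax(dim=1) != y_true).float().mean().item()
    flagged = (detect_stat(x_adv) > threshold).float().mean().item()
    print(f"eps={eps:4.2f}: evasion rate {evaded:.2f}, AD flag rate {flagged:.2f}")
```

The exact rates depend on the toy setup, but the qualitative pattern is the point: increasing eps raises the evasion rate while also pushing perturbed inputs away from the clean-data distribution, so the AD flags them at an increasing rate. This is why attack strength cannot be equated with attack success once an AD defense is in play.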

Related research

- When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time (12/18/2017)
- Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators (02/27/2023)
- Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification (10/19/2020)
- PECAN: A Deterministic Certified Defense Against Backdoor Attacks (01/27/2023)
- Evaluating the Adversarial Robustness of Adaptive Test-time Defenses (02/28/2022)
- Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review (07/21/2020)
- Data Poisoning Attacks on Regression Learning and Corresponding Defenses (09/15/2020)
