A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses

09/14/2020
by Ambar Pal, et al.

Research in adversarial learning follows a cat-and-mouse game between attackers and defenders: attacks are proposed, new defenses mitigate them, and subsequently new attacks are proposed that break the earlier defenses, and so on. However, it has remained unclear whether there are conditions under which no better attack or defense can be proposed. In this paper, we propose a game-theoretic framework for studying attacks and defenses that exist in equilibrium. Under a locally linear decision-boundary model for the underlying binary classifier, we prove that the Fast Gradient Method attack and the Randomized Smoothing defense form a Nash equilibrium. We then show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution, and we derive a generalization bound for the performance of our approximation.
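To fix intuition for the two strategies named in the abstract, here is a minimal illustrative sketch (not the paper's construction, and not its locally linear model): a toy linear binary classifier, a Fast Gradient Method (FGM) attack that steps against the margin, and a Randomized Smoothing defense that classifies by majority vote over Gaussian-perturbed copies of the input. The weights, epsilon, and sigma below are arbitrary choices for illustration.

```python
import numpy as np

# Toy linear binary classifier f(x) = sign(w.x + b).
w = np.array([1.0, -2.0])
b = 0.5

def classify(x):
    return 1 if np.dot(w, x) + b >= 0 else -1

def fgm_attack(x, eps):
    # FGM takes one normalized gradient step. For a linear model the
    # loss gradient points along +/- w, so step along -y * w / ||w||.
    y = classify(x)
    return x - eps * y * w / np.linalg.norm(w)

def smoothed_classify(x, sigma=0.5, n=1000, seed=0):
    # Randomized Smoothing: majority vote of the base classifier
    # over Gaussian perturbations x + N(0, sigma^2 I).
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    scores = noise @ w + np.dot(w, x) + b
    return 1 if np.sign(scores).sum() >= 0 else -1

x = np.array([2.0, 0.0])           # w.x + b = 2.5, classified +1
x_adv = fgm_attack(x, eps=1.5)     # margin is 2.5/||w|| < 1.5, so the label flips
```

Here `eps` exceeds the point's distance to the decision boundary, so the FGM step flips the base classifier's prediction, while the smoothed classifier's vote on the clean input stays stable because the signed margin is large relative to the noise scale.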
