Adversarially Robust Deep Learning with Optimal-Transport-Regularized Divergences

09/07/2023
by Jeremiah Birrell, et al.

We introduce the ARMOR_D methods as novel approaches to enhancing the adversarial robustness of deep learning models. These methods are based on a new class of optimal-transport-regularized divergences, constructed via an infimal convolution between an information divergence and an optimal-transport (OT) cost. We use these as tools to enhance adversarial robustness by maximizing the expected loss over a neighborhood of distributions, a technique known as distributionally robust optimization. Viewed as a tool for constructing adversarial samples, our method allows samples to be both transported, according to the OT cost, and re-weighted, according to the information divergence. We demonstrate the effectiveness of our method on malware detection and image recognition applications and find that, to our knowledge, it outperforms existing methods at enhancing robustness against adversarial attacks. ARMOR_D yields robustified accuracies of 98.29% under FGSM and 98.18% under PGD^40 attacks on the MNIST dataset, reducing the error rate by more than 19.7% and 37.2%, respectively, compared to prior methods. Similarly, in malware detection, a discrete (binary) data domain, ARMOR_D improves the robustified accuracy under the rFGSM^50 attack by 37.0% compared to the previous best-performing adversarial training methods, while lowering the false negative and false positive rates by 51.1% and 57.53%, respectively.
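To make the "transport plus re-weight" idea concrete, the sketch below shows a DRO-style inner maximization in PyTorch that perturbs each sample with PGD-like steps (a stand-in for the OT transport) and then re-weights the per-sample losses with softmax weights, as arises when the information divergence is KL. This is a rough illustration only, not the paper's exact ARMOR_D construction; the function name armor_like_inner_max, the L-infinity ball, the choice of KL, and all hyperparameters are assumptions for illustration.

# Illustrative sketch (not the paper's exact ARMOR_D algorithm): a DRO-style
# inner maximization that both transports samples (PGD-like steps, a proxy for
# the OT cost) and re-weights them (softmax weights, as in KL-divergence DRO).
# Names, step sizes, and the KL choice are assumptions for illustration.
import torch
import torch.nn.functional as F

def armor_like_inner_max(model, x, y, eps=0.3, step=0.05, n_steps=10, lam=1.0):
    """Return a re-weighted adversarial loss for one batch (inputs assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        loss_per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
        grad, = torch.autograd.grad(loss_per_sample.sum(), x_adv)
        with torch.no_grad():
            # transport step: move each sample along the loss gradient,
            # staying inside an L-infinity ball of radius eps around x
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
        x_adv.requires_grad_(True)
    # re-weighting step: up-weight harder samples; with a KL divergence the
    # worst-case weights are proportional to exp(loss / lam)
    loss_per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    weights = torch.softmax(loss_per_sample.detach() / lam, dim=0)
    return (weights * loss_per_sample).sum()

In an adversarial-training loop, the returned weighted loss would be backpropagated through the model parameters at each batch; the temperature lam controls how aggressively hard samples are up-weighted.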

Related research

08/07/2023 · Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change
05/22/2023 · FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign
03/21/2023 · OTJR: Optimal Transport Meets Optimal Jacobian Regularization for Adversarial Robustness
02/05/2021 · Optimal Transport as a Defense Against Adversarial Attacks
08/10/2023 · Unifying Distributionally Robust Optimization via Optimal Transport Theory
12/05/2019 · Adversarial Risk via Optimal Transport and Optimal Couplings
08/26/2022 · Lower Difficulty and Better Robustness: A Bregman Divergence Perspective for Adversarial Training
