A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness

03/01/2021
by Jacob Abernethy, et al.

Alongside the well-publicized accomplishments of deep neural networks on tasks such as object recognition, an apparent flaw has emerged: models trained with vanilla methods can have their output predictions altered by slight corruptions of the input image, even when those corruptions are practically invisible. This lack of robustness has led researchers to propose methods that deny an adversary such capabilities. State-of-the-art approaches incorporate the robustness requirement into the loss function, and their training process takes stochastic gradient descent steps not on the original inputs but on adversarially-corrupted ones. In this paper we propose a multiclass boosting framework for ensuring adversarial robustness. Boosting algorithms are well-suited to adversarial settings, as they were classically designed to satisfy a minimax guarantee. We provide a theoretical foundation for this methodology and describe conditions under which robustness can be achieved given a weak training oracle. We show empirically that adversarially-robust multiclass boosting not only outperforms the state-of-the-art methods but does so at a fraction of the training time.
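For context, the state-of-the-art recipe the abstract describes, taking SGD steps on adversarially-corrupted inputs rather than the originals, is standard adversarial training with an inner attack such as projected gradient descent (PGD). The following is a minimal PyTorch sketch of that baseline, not code from the paper; the function names and the hyperparameters eps, alpha, and steps are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: find a perturbation delta with ||delta||_inf <= eps
    # that increases the classification loss on the batch (x, y).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient *ascent* on the loss
            delta.clamp_(-eps, eps)             # project back into the eps-ball
            delta.grad.zero_()
    # A real implementation would also clamp x + delta to the valid pixel range.
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    # Outer minimization: one SGD step on the adversarially-corrupted batch
    # instead of the original inputs, as described in the abstract.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At a high level, the paper's boosting framework takes a different route: rather than hardening a single model with this inner/outer loop, it repeatedly queries a weak training oracle and aggregates the returned classifiers, which is what connects robustness to boosting's classical minimax guarantee.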

