A Primer on Multi-Neuron Relaxation-based Adversarial Robustness Certification

06/06/2021
by Kevin Roth, et al.

The existence of adversarial examples poses a genuine danger when deep neural networks are deployed in the real world. The go-to strategy for quantifying this vulnerability is to evaluate the model against specific attack algorithms. This approach is, however, inherently limited, as it says little about the robustness of the model against more powerful attacks not included in the evaluation. We develop a unified mathematical framework to describe relaxation-based robustness certification methods, which go beyond adversary-specific robustness evaluation and instead provide provable robustness guarantees against attacks by any adversary. We discuss the fundamental limitations posed by single-neuron relaxations and show how the recent “k-ReLU” multi-neuron relaxation framework of Singh et al. (2019) obtains tighter correlation-aware activation bounds by leveraging additional relational constraints among groups of neurons. Specifically, we show how additional pre-activation bounds can be mapped to corresponding post-activation bounds and how they can in turn be used to obtain tighter robustness certificates. We also present an intuitive way to visualize different relaxation-based certification methods. By approximating multiple non-linearities jointly instead of separately, the k-ReLU method is able to bypass the convex barrier imposed by single-neuron relaxations.
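To make the idea of mapping pre-activation bounds to post-activation bounds concrete, here is a minimal sketch of the simplest relaxation-based certification primitive: interval bound propagation through a linear layer followed by a ReLU. This is an illustrative baseline, not the k-ReLU method itself; the weights and the perturbation radius `eps` are made-up example values, and the single-neuron interval bounds shown here are exactly the kind of bounds that multi-neuron relaxations tighten.

```python
import numpy as np

def linear_bounds(W, b, l, u):
    """Propagate elementwise input bounds l <= x <= u through y = W x + b.

    Splitting W into its positive and negative parts ensures each output
    bound is computed with the worst-case input endpoint.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    l_out = W_pos @ l + W_neg @ u + b
    u_out = W_pos @ u + W_neg @ l + b
    return l_out, u_out

def relu_bounds(l, u):
    """Map pre-activation bounds to post-activation bounds for ReLU.

    Since ReLU is monotone, the interval image is simply [max(l,0), max(u,0)].
    """
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Example: certify bounds over an eps-ball around x0 for one layer.
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])   # hypothetical weights
b = np.zeros(2)
x0 = np.array([1.0, 0.5])
eps = 0.1

l0, u0 = x0 - eps, x0 + eps          # input region
l1, u1 = linear_bounds(W, b, l0, u0)  # pre-activation bounds
zl, zu = relu_bounds(l1, u1)          # post-activation bounds
```

Because each neuron is bounded independently, intervals ignore correlations between neurons; relaxations such as k-ReLU recover part of this lost precision by adding joint constraints over groups of neurons.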

Related research

11/30/2022
Overcoming the Convex Relaxation Barrier for Neural Network Verification via Nonconvex Low-Rank Semidefinite Relaxations
To rigorously certify the robustness of neural networks to adversarial p...

06/24/2020
The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification
We improve the effectiveness of propagation- and linear-optimization-bas...

02/22/2020
Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers
Convex relaxations are effective for training and certifying neural netw...

06/11/2020
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples
The robustness of a neural network to adversarial examples can be provab...

03/20/2020
One Neuron to Fool Them All
Despite vast research in adversarial examples, the root causes of model ...

02/23/2019
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Verification of neural networks enables us to gauge their robustness aga...

11/02/2018
Semidefinite relaxations for certifying robustness to adversarial examples
Despite their impressive performance on diverse tasks, neural networks f...
