SoK: Certified Robustness for Deep Neural Networks

09/09/2020
by Linyi Li, et al.

Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises serious concerns when these models are deployed in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: 1) empirical defenses, which can be adaptively attacked again and do not provide robustness certification; and 2) certifiably robust approaches, which consist of robustness verification, providing a lower bound on robust accuracy against any attack under certain conditions, together with corresponding robust training approaches. In this paper, we focus on these certifiably robust approaches and provide the first large-scale systematic analysis of different robustness verification and training approaches. In particular, we 1) provide a taxonomy for the robustness verification and training approaches and discuss the methodologies of representative algorithms in detail, 2) reveal the fundamental connections among these approaches, 3) discuss current research progress, theoretical barriers, main challenges, and several promising future directions for certified defenses for DNNs, and 4) provide an open-source unified platform to evaluate 20+ representative verification and corresponding robust training approaches on a wide range of DNNs.
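
The "robustness verification" the abstract refers to computes a guaranteed lower bound on a model's behavior under any perturbation within a given radius. As a concrete (and deliberately simple) illustration of one family of approaches such a survey covers, the sketch below performs interval bound propagation for a tiny fully connected ReLU network; the random weights, the eps radius, and the helper names (interval_bound_forward, certify_linf) are illustrative assumptions and are not taken from the paper or its evaluation platform.

    # Minimal sketch of interval-bound-propagation-style certification
    # for a toy ReLU network. Illustrative only; not the paper's tool.
    import numpy as np

    def interval_bound_forward(weights, biases, lower, upper):
        # Propagate an elementwise [lower, upper] input box through
        # affine + ReLU hidden layers, returning post-activation bounds.
        for W, b in zip(weights, biases):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lower = W_pos @ lower + W_neg @ upper + b
            new_upper = W_pos @ upper + W_neg @ lower + b
            lower, upper = np.maximum(new_lower, 0.0), np.maximum(new_upper, 0.0)
        return lower, upper

    def certify_linf(weights, biases, x, eps, label):
        # Sound but incomplete check: True means every input in the
        # L-infinity ball of radius eps around x provably keeps `label`
        # as the top logit; False only means "could not certify".
        lower, upper = interval_bound_forward(
            weights[:-1], biases[:-1], x - eps, x + eps)
        W, b = weights[-1], biases[-1]          # final affine layer, no ReLU
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        logit_low = W_pos @ lower + W_neg @ upper + b
        logit_up = W_pos @ upper + W_neg @ lower + b
        return all(logit_low[label] > logit_up[i]
                   for i in range(len(b)) if i != label)

    # Toy usage: a random 4-8-3 network and a single input point.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
    biases = [np.zeros(8), np.zeros(3)]
    x = rng.standard_normal(4)
    print(certify_linf(weights, biases, x, eps=0.01, label=0))

Conceptually, the certified robust accuracy such a lower bound yields is the fraction of test points for which a check like this succeeds; the verification approaches surveyed in the paper (e.g., linear relaxation or probabilistic methods) are generally far tighter than this plain box propagation.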


Related research

03/02/2020 · Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study
Deep neural networks (DNNs) have achieved significant performance in var...

03/19/2020 · RAB: Provable Robustness Against Backdoor Attacks
Recent studies have shown that deep neural networks (DNNs) are vulnerabl...

10/23/2019 · A Useful Taxonomy for Adversarial Robustness of Neural Networks
Adversarial attacks and defenses are currently active areas of research ...

11/27/2019 · Survey of Attacks and Defenses on Edge-Deployed Neural Networks
Deep Neural Network (DNN) workloads are quickly moving from datacenters ...

07/15/2021 · Adversarial Attacks on Multi-task Visual Perception for Autonomous Driving
Deep neural networks (DNNs) have accomplished impressive success in vari...

09/12/2022 · CARE: Certifiably Robust Learning with Reasoning via Variational Inference
Despite great recent advances achieved by deep neural networks (DNNs), t...

03/22/2021 · Fast Approximate Spectral Normalization for Robust Deep Neural Networks
Deep neural networks (DNNs) play an important role in machine learning d...
