Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

09/13/2022
by Maksym Yatsura, et al.

Adversarial patch attacks are an emerging security threat for real-world deep learning applications. We present Demasked Smoothing, the first approach (to our knowledge) to certify the robustness of semantic segmentation models against this threat model. Previous work on certifiably defending against patch attacks has mostly focused on the image classification task and often required changes to the model architecture and additional training, which are undesirable and computationally expensive. In Demasked Smoothing, any segmentation model can be applied without particular training, fine-tuning, or restriction of the architecture. Using different masking strategies, Demasked Smoothing can be applied both for certified detection and certified recovery. In extensive experiments we show that Demasked Smoothing can on average certify 64% of the pixel predictions against a 1% patch for the recovery task on the ADE20K dataset.
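The core smoothing idea, shared with (de)randomized smoothing for classification, is to run the model on many masked copies of the input and aggregate per-pixel votes: a localized patch can only corrupt the few copies whose visible region overlaps it, so a large vote margin certifies the prediction. The sketch below is a hedged illustration of that voting step, not the paper's actual method: the column-band masking, the `segment_fn` callback, and all names are illustrative assumptions, and the real certification condition in Demasked Smoothing differs in detail.

```python
import numpy as np

def column_masks(width, band_width):
    # Illustrative masking strategy: each mask keeps only one vertical
    # band of columns visible and hides everything else.
    masks = []
    for start in range(0, width, band_width):
        m = np.zeros(width, dtype=bool)
        m[start:start + band_width] = True
        masks.append(m)
    return masks

def smoothed_segmentation(image, segment_fn, band_width=2, num_classes=2):
    """Per-pixel majority vote over predictions on masked image copies.

    A hedged sketch of smoothing-by-masking: because each copy shows only
    a narrow band, a single adversarial patch can influence only the few
    votes whose band overlaps it. Returns the voted label map and the
    per-pixel vote margin (top count minus runner-up count); a margin
    larger than the number of masks a patch can touch would indicate a
    certifiably robust pixel in this simplified setting.
    """
    h, w = image.shape
    votes = np.zeros((h, w, num_classes), dtype=int)
    rows, cols = np.indices((h, w))
    for mask in column_masks(w, band_width):
        masked = image * mask[None, :]        # zero out hidden columns
        pred = segment_fn(masked)             # (h, w) integer label map
        votes[rows, cols, pred] += 1          # one vote per pixel per mask
    ordered = np.sort(votes, axis=-1)
    margin = ordered[..., -1] - ordered[..., -2]
    return votes.argmax(axis=-1), margin

# Toy usage with a dummy segmenter that always predicts class 1.
image = np.ones((4, 4))
labels, margin = smoothed_segmentation(image, lambda x: np.ones(x.shape, dtype=int))
```

With a 4-pixel-wide image and 2-column bands there are two masks, so the constant predictor yields a unanimous margin of 2 at every pixel; a real patch attacker could flip at most the votes from the bands it overlaps.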


Related research

- 01/05/2022: On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving
- 06/22/2023: Robust Semantic Segmentation: Strong Adversarial Attacks and Fast Training of Robust Models
- 05/21/2022: On the Feasibility and Generality of Patch-based Adversarial Attacks on Semantic Segmentation Problems
- 02/25/2020: (De)Randomized Smoothing for Certifiable Defense against Patch Attacks
- 02/22/2022: On the Effectiveness of Adversarial Training against Backdoor Attacks
- 08/03/2021: Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation
- 06/10/2020: Scalable Backdoor Detection in Neural Networks
