On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

01/05/2022 · by Giulio Rossolini, et al.

The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks such as visual perception in autonomous driving. This paper presents an extensive evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches, including digital, simulated, and physical ones. A novel loss function is proposed to improve the attacker's ability to induce pixel misclassifications. A novel attack strategy is also presented to improve the Expectation Over Transformation method for placing a patch in the scene. Finally, a state-of-the-art method for detecting adversarial patches is first extended to cope with semantic segmentation models, then improved to achieve real-time performance, and eventually evaluated in real-world scenarios. Experimental results reveal that, although the adversarial effect is visible with both digital and real-world attacks, its impact is often spatially confined to areas of the image around the patch. This raises further questions about the spatial robustness of real-time semantic segmentation models.
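To make the attack setup concrete, below is a minimal sketch of an Expectation-Over-Transformation-style patch optimization against a per-pixel classifier, assuming a PyTorch segmentation model that returns (N, C, H, W) logits. The random placement, the plain per-pixel cross-entropy objective, and all hyperparameters (patch_size, steps, lr, samples_per_step) are illustrative assumptions, not the paper's actual loss function or placement strategy.

    # Minimal EOT-style adversarial patch sketch for semantic segmentation.
    # Assumptions: `model` maps (N, 3, H, W) images to (N, C, H, W) logits,
    # `labels` is a (N, H, W) long tensor of ground-truth classes.
    import torch
    import torch.nn.functional as F

    def random_placement(images, patch):
        """Paste the patch at a random location: a crude stand-in for the
        scene-aware placement the paper builds on top of EOT."""
        _, _, H, W = images.shape
        _, _, ph, pw = patch.shape
        top = torch.randint(0, H - ph + 1, (1,)).item()
        left = torch.randint(0, W - pw + 1, (1,)).item()
        out = images.clone()
        out[:, :, top:top + ph, left:left + pw] = patch
        return out

    def eot_patch_attack(model, images, labels, steps=200, lr=0.01,
                         patch_size=(3, 64, 64), samples_per_step=4):
        patch = torch.rand(1, *patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = 0.0
            # Expectation over random transformations of the patch placement.
            for _ in range(samples_per_step):
                adv = random_placement(images, patch.clamp(0, 1))
                logits = model(adv)
                # Untargeted attack: maximize per-pixel cross-entropy on the
                # true labels, pushing pixels away from their correct class.
                loss = loss - F.cross_entropy(logits, labels)
            (loss / samples_per_step).backward()
            opt.step()
            patch.data.clamp_(0, 1)  # keep the patch a valid image
        return patch.detach()

The key design point is that the gradient is averaged over several random placements per step, so the optimized patch remains effective under the placement variability it will face in the physical world.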


Related research

- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks (08/13/2021)
  Deep learning and convolutional neural networks allow achieving impressi...

- Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation (09/13/2022)
  Adversarial patch attacks are an emerging security threat for real world...

- Efficient Certified Defenses Against Patch Attacks on Image Classifiers (02/08/2021)
  Adversarial patches pose a realistic threat model for physical world att...

- Enhancing Real-World Adversarial Patches with 3D Modeling Techniques (02/10/2021)
  Although many studies have examined adversarial examples in the real wor...

- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models (06/09/2022)
  Adversarial examples represent a serious threat for deep neural networks...

- Automated Evaluation of Semantic Segmentation Robustness for Autonomous Driving (10/24/2018)
  One of the fundamental challenges in the design of perception systems fo...

- Influencer Backdoor Attack on Semantic Segmentation (03/21/2023)
  When a small number of poisoned samples are injected into the training d...
