fakeWeather: Adversarial Attacks for Deep Neural Networks Emulating Weather Conditions on the Camera Lens of Autonomous Systems

by   Alberto Marchisio, et al.
New York University
Politecnico di Torino
TU Wien

Recently, Deep Neural Networks (DNNs) have achieved remarkable performance in many applications, while several studies have highlighted their vulnerability to malicious attacks. In this paper, we emulate the effects of natural weather conditions to introduce plausible perturbations that mislead DNNs. By observing the effects of such atmospheric perturbations on camera lenses, we model the patterns to create different masks that fake the effects of rain, snow, and hail. Although the perturbations introduced by our attacks are visible, their presence remains unnoticed because they are associated with natural events, which can be especially catastrophic for fully-autonomous and unmanned vehicles. We test our proposed fakeWeather attacks on multiple Convolutional Neural Network and Capsule Network models, and report noticeable accuracy drops in the presence of such adversarial perturbations. Our work introduces a new security threat for DNNs, which is especially severe for safety-critical applications and autonomous systems.
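The abstract describes overlaying weather-like masks on camera images to perturb a DNN's input. A minimal sketch of this idea is shown below; the mask model (short bright vertical streaks for rain) and the function names are illustrative assumptions, not the authors' actual fakeWeather mask construction.

```python
import numpy as np


def fake_rain_mask(height, width, num_drops=80, drop_len=6, seed=0):
    """Build an additive 'rain' mask: short bright vertical streaks,
    loosely emulating raindrops on a camera lens.
    (Illustrative assumption, not the paper's exact mask model.)"""
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=np.float32)
    for _ in range(num_drops):
        row = rng.integers(0, height - drop_len)
        col = rng.integers(0, width)
        mask[row:row + drop_len, col] = rng.uniform(0.5, 1.0)
    return mask


def apply_weather(image, mask, strength=0.6):
    """Blend the mask into a grayscale image in [0, 1],
    clipping the result back into the valid pixel range."""
    return np.clip(image + strength * mask, 0.0, 1.0)


# Usage: perturb a dummy image; the perturbed image would then be fed
# to the DNN under test to measure the accuracy drop.
img = np.full((32, 32), 0.2, dtype=np.float32)
rain = fake_rain_mask(32, 32)
adv = apply_weather(img, rain)
```

Because the perturbation is confined to a sparse, weather-looking mask rather than a dense pixel-wise noise field, it stays plausible to a human observer even though it is clearly visible.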

