Catoptric Light can be Dangerous: Effective Physical-World Attack by Natural Phenomenon

09/19/2022
by   Chengyin Hu, et al.

Deep neural networks (DNNs) have achieved great success in many tasks, so it is crucial to evaluate the robustness of advanced DNNs. Traditional methods use stickers as physical perturbations to fool classifiers, which makes stealthiness difficult to achieve and incurs printing loss. Some newer physical attacks use light beams (e.g., lasers, projectors), but their optical patterns are artificial rather than natural. In this work, we study a new type of physical attack, called adversarial catoptric light (AdvCL), in which adversarial perturbations are generated by a common natural phenomenon, catoptric light, to achieve stealthy and naturalistic adversarial attacks against advanced DNNs in physical environments. Carefully designed experiments demonstrate the effectiveness of the proposed method in simulated and real-world environments, with an attack success rate of 94.90% in the physical environment. We also discuss AdvCL's transferability and a defense strategy against this attack.
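The core idea described above, perturbing an image with a naturally shaped light reflection rather than a sticker or printed patch, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the spot model (a soft Gaussian highlight) and all names and parameters are assumptions for demonstration.

```python
import numpy as np

def add_catoptric_spot(image, center, radius, intensity=0.8,
                       color=(1.0, 1.0, 0.9)):
    """Overlay a soft circular light spot (a crude stand-in for a
    catoptric reflection) onto an RGB image with values in [0, 1].

    Illustrative only; the paper's actual light model and attack
    optimization are not reproduced here.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian falloff from the spot center mimics the soft edge of
    # a reflected highlight.
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    mask = intensity * np.exp(-d2 / (2.0 * radius ** 2))
    spot = mask[..., None] * np.asarray(color)
    # Additive blending, clipped back to the valid pixel range.
    return np.clip(image + spot, 0.0, 1.0)

# A black-box attack in this spirit could sample spot parameters
# (position, size, intensity) and keep whichever candidate most
# lowers the target classifier's confidence.
img = np.full((64, 64, 3), 0.2)
attacked = add_catoptric_spot(img, center=(32, 32), radius=10)
```

Because the perturbation is a plausible reflection rather than an arbitrary pattern, the modified scene remains inconspicuous to human observers, which is the stealthiness property the abstract emphasizes.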

