Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks

01/04/2021
by Yanghao Zhang, et al.

Although adversarial attacks on deep-learning-based classifiers have been studied extensively, attacks on object detection systems have rarely been explored. In this paper, we propose a Half-Neighbor Masked Projected Gradient Descent (HNM-PGD) based attack, which can generate strong perturbations that fool different kinds of detectors under strict constraints. We also applied the proposed HNM-PGD attack in the CIKM 2020 AnalytiCup Competition, where it ranked within the top 1%. The code is available at https://github.com/YanghaoZYH/HNM-PGD.
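To make the mechanism concrete, here is a minimal sketch of a masked PGD attack in PyTorch, under stated assumptions: the update follows the sign of the loss gradient, but only pixels inside a binary mask are perturbed, and the result is projected back into an epsilon-ball. This is not the authors' exact HNM-PGD; the half-neighbor mask construction and the detector-specific attack loss are described in the paper, and the function name `masked_pgd` and hyperparameters `eps`, `alpha`, and `steps` here are illustrative.

```python
# Hypothetical sketch of masked PGD (not the authors' exact HNM-PGD;
# the half-neighbor mask construction is defined in the paper).
import torch

def masked_pgd(model, x, loss_fn, mask, eps=8/255, alpha=2/255, steps=40):
    """Run PGD on `x`, perturbing only pixels where `mask` == 1.

    model   -- detector whose output feeds the attack loss
    x       -- input image batch, values in [0, 1]
    loss_fn -- callable mapping model output to a scalar attack loss
    mask    -- binary tensor broadcastable to x, limiting the attack region
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, restricted to the masked region.
            x_adv = x_adv + alpha * grad.sign() * mask
            # Project back into the eps-ball and the valid pixel range.
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

The mask is what separates this from standard PGD: by zeroing the update outside the allowed region, the perturbation stays within the size constraints that a competition or physical-patch setting imposes, while the projection step keeps it within the usual L-infinity budget.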

