Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection

11/13/2020
by Faisal Alamri, et al.

Deep neural network approaches have demonstrated high performance in object recognition (CNN) and detection (Faster R-CNN) tasks, but experiments have shown that such architectures are vulnerable to adversarial attacks such as Fast Feature Fool (FFF) and Universal Adversarial Perturbations (UAP): low-amplitude perturbations, barely perceptible to the human eye, can cause a drastic reduction in labeling performance. This article proposes a new context module, called the Transformer-Encoder Detector Module, that can be applied to an object detector to (i) improve the labeling of object instances and (ii) improve the detector's robustness to adversarial attacks. By encoding both contextual and visual features extracted from the scene, the proposed model achieves mAP, F1 and average AUC scores up to 13% higher than the baseline Faster R-CNN detector, and an mAP score 8 points higher on images subjected to FFF or UAP attacks. These results demonstrate that a simple ad-hoc context module can significantly improve the reliability of object detectors.
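As a rough illustration of how such a context module could sit on top of a detector, the PyTorch sketch below re-labels a set of Faster R-CNN detections by running self-attention over all detections in an image, so each detection's class prediction is informed by scene-level context. The layer sizes, the choice of inputs (RoI features plus box geometry), and the fusion scheme are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a transformer-encoder context module over per-detection
# features. Architecture details are assumptions for illustration only.
import torch
import torch.nn as nn


class TransformerEncoderDetectorModule(nn.Module):
    def __init__(self, roi_dim=1024, d_model=256, num_classes=21,
                 nhead=8, num_layers=2):
        super().__init__()
        # Project per-detection visual (RoI-pooled) features and box
        # geometry into a shared embedding space.
        self.visual_proj = nn.Linear(roi_dim, d_model)
        self.box_proj = nn.Linear(4, d_model)  # (x1, y1, x2, y2), normalized
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Classification head producing refined labels per detection.
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, roi_feats, boxes):
        """
        roi_feats: (N, roi_dim) RoI features for N detections in one image.
        boxes:     (N, 4) box coordinates, normalized to [0, 1].
        Returns refined class logits of shape (N, num_classes).
        """
        tokens = self.visual_proj(roi_feats) + self.box_proj(boxes)
        # Self-attention lets every detection attend to every other
        # detection, injecting scene context into each token.
        context = self.encoder(tokens.unsqueeze(0)).squeeze(0)
        return self.classifier(context)


if __name__ == "__main__":
    module = TransformerEncoderDetectorModule()
    feats = torch.randn(12, 1024)      # e.g. 12 detections from Faster R-CNN
    boxes = torch.rand(12, 4)
    print(module(feats, boxes).shape)  # torch.Size([12, 21])
```

The intuition is that a perturbation that fools an isolated RoI classifier is less likely to also produce a labeling that is consistent with the rest of the scene, so conditioning each label on the other detections can recover some of the accuracy lost to the attack.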


Related research:

10/31/2019
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
We present a systematic study of adversarial attacks on state-of-the-art...

10/25/2022
Object recognition in atmospheric turbulence scenes
The influence of atmospheric turbulence on acquired surveillance imagery...

12/13/2022
CNN-transformer mixed model for object detection
Object detection, one of the three main tasks of computer vision, has be...

10/24/2021
ADC: Adversarial attacks against object Detection that evade Context consistency checks
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversar...

07/11/2020
Understanding Object Detection Through An Adversarial Lens
Deep neural networks based object detection models have revolutionized c...

12/08/2020
Using Feature Alignment can Improve Clean Average Precision and Adversarial Robustness in Object Detection
The 2D object detection in clean images has been a well studied topic, b...

11/28/2022
Attack on Unfair ToS Clause Detection: A Case Study using Universal Adversarial Triggers
Recent work has demonstrated that natural language processing techniques...
