Spot Evasion Attacks: Adversarial Examples for License Plate Recognition Systems with Convolution Neural Networks

by   Ya-guan Qian, et al.

Recent studies have shown that convolutional neural networks (CNNs) for image recognition are vulnerable to evasion attacks with carefully manipulated adversarial examples. Previous work primarily focused on generating adversarial examples close to the source images by introducing pixel-level perturbations into the whole image or specific parts of it. In this paper, we propose an evasion attack on CNN classifiers in the context of License Plate Recognition (LPR), which adds predetermined perturbations to specific regions of license plate images, simulating naturally formed spots (such as sludge). The problem is therefore modeled as an optimization process that searches for optimal perturbation positions, in contrast to previous work that treats pixel values as the decision variables. Since this is a complex nonlinear optimization problem, we use a genetic-algorithm-based approach to obtain optimal perturbation positions. In experiments, we use the proposed algorithm to generate adversarial examples in the form of rectangles, circles, ellipses, and spot clusters. Experimental results show that these adversarial examples are almost unnoticeable to human eyes, yet fool HyperLPR with an attack success rate of over 93%. We argue that this kind of spot evasion attack poses a great threat to current LPR systems and needs to be investigated further by the security community.
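The core idea of the abstract — fixing the spot shapes and pixel values in advance and letting a genetic algorithm search only over spot *positions* — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image size, spot radius, population size, and the stand-in classifier (used in place of HyperLPR) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 140, 440          # assumed plate-image size (illustrative)
N_SPOTS = 4              # spots per candidate (illustrative)
POP, GENS = 20, 30       # GA population size and generations (illustrative)

def apply_spots(img, positions, radius=8):
    """Paint dark circular 'sludge' spots at the given (y, x) centers."""
    out = img.copy()
    yy, xx = np.mgrid[0:H, 0:W]
    for cy, cx in positions:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        out[mask] = 0.1  # fixed dark value: only positions are searched
    return out

def fitness(img, positions, classifier, true_label):
    """Higher fitness = lower classifier confidence in the true label."""
    return 1.0 - classifier(apply_spots(img, positions))[true_label]

def mutate(positions, scale=10):
    """Jitter spot centers, keeping them inside the image."""
    jitter = rng.integers(-scale, scale + 1, size=positions.shape)
    return np.clip(positions + jitter, [0, 0], [H - 1, W - 1])

def crossover(a, b):
    """Per-spot uniform crossover between two parents."""
    pick = rng.random(len(a)) < 0.5
    return np.where(pick[:, None], a, b)

def ga_attack(img, classifier, true_label):
    """Search spot positions that minimize confidence in the true label."""
    pop = [rng.integers([0, 0], [H, W], size=(N_SPOTS, 2)) for _ in range(POP)]
    for _ in range(GENS):
        scores = [fitness(img, p, classifier, true_label) for p in pop]
        order = np.argsort(scores)[::-1]
        elite = [pop[i] for i in order[: POP // 2]]   # keep the best half
        children = [mutate(crossover(elite[rng.integers(len(elite))],
                                     elite[rng.integers(len(elite))]))
                    for _ in range(POP - len(elite))]
        pop = elite + children
    best = max(pop, key=lambda p: fitness(img, p, classifier, true_label))
    return apply_spots(img, best), best
```

Because the decision variables are a handful of integer coordinates rather than thousands of pixel values, the search space is small enough for a black-box GA that only needs the classifier's output scores, not its gradients.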

