Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield

06/19/2019
by Dou Goodman, et al.

Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, while an attacker can typically only access the APIs exposed by cloud platforms; keeping models in the cloud can therefore give a (false) sense of security. Unfortunately, cloud-based image classification services are not robust to simple transformations such as Gaussian noise, salt-and-pepper noise, rotation, and monochromatization. In this paper, (1) we propose a novel attack method, the Image Fusion (IF) attack, which achieves a high bypass rate, can be implemented with OpenCV alone, and is difficult to defend against; (2) we make the first attempt to conduct an extensive empirical study of Simple Transformation (ST) attacks against real-world cloud-based classification services. Through evaluations on four popular cloud platforms (Amazon, Google, Microsoft, and Clarifai), we demonstrate that the ST attack has a success rate of approximately 100% on these services, except on Amazon, where it is approximately 50%. (3) We discuss possible defenses to address these security challenges; experiments show that our defense technique can effectively defend against known ST attacks.
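The transformations named in the abstract are all cheap, gradient-free image operations. As a rough illustration of what such an attack pipeline could look like, here is a minimal NumPy-only sketch of each Simple Transformation plus an alpha-blending step in the spirit of the Image Fusion attack (the paper implements its attack with OpenCV; the exact parameters and blending scheme below are assumptions, not the authors' implementation):

```python
import numpy as np

def gaussian_noise(img, sigma=25.0):
    """Add zero-mean Gaussian noise to a uint8 image (one ST variant)."""
    noisy = img.astype(np.float64) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def salt_and_pepper(img, amount=0.05):
    """Flip a random fraction of pixels to pure black or pure white."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < amount / 2] = 0          # pepper
    out[mask > 1 - amount / 2] = 255    # salt
    return out

def rotate90(img):
    """Rotate by 90 degrees; arbitrary angles would need e.g. cv2.warpAffine."""
    return np.rot90(img)

def monochromatize(img):
    """Collapse the color channels to a single repeated luminance value."""
    gray = img.mean(axis=2, keepdims=True)
    return np.repeat(gray, 3, axis=2).astype(np.uint8)

def image_fusion(img, other, alpha=0.7):
    """Hypothetical IF-style step: alpha-blend the victim image with an
    unrelated image (cv2.addWeighted performs the same blend in OpenCV)."""
    blend = alpha * img.astype(np.float64) + (1 - alpha) * other.astype(np.float64)
    return np.clip(blend, 0, 255).astype(np.uint8)
```

Each function maps a uint8 H x W x 3 array to another valid image, so the outputs can be submitted directly to a cloud classification API to test its robustness.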


