DeepObfuscator: Adversarial Training Framework for Privacy-Preserving Image Classification

09/09/2019
by   Ang Li, et al.

Deep learning has been widely utilized in many computer vision applications and has achieved remarkable commercial success. However, running deep learning models on mobile devices is generally challenging due to the limited computing resources available. A common practice is to let users send their service requests to cloud servers that run large-scale deep learning models. Sending the data associated with these service requests to the cloud, however, poses risks to user data privacy. Some prior works proposed sending the features extracted from raw data (e.g., images) to the cloud instead. Unfortunately, these extracted features can still be exploited by attackers to recover the raw images and to infer embedded private attributes (e.g., age, gender, etc.). In this paper, we propose DeepObfuscator, an adversarial training framework that prevents extracted features from being used to reconstruct raw images or to infer private attributes, while retaining the information useful for the intended cloud service (i.e., image classification). DeepObfuscator includes a learnable encoder, namely the obfuscator, which is designed to hide privacy-related sensitive information in the features by performing our proposed adversarial training algorithm. Our experiments on the CelebA dataset show that the quality of images reconstructed from the obfuscated features drops dramatically, from 0.9458 to 0.3175 in terms of multi-scale structural similarity (MS-SSIM), so the person in a reconstructed image can hardly be re-identified. The classification accuracy an attacker can achieve on the private attributes drops to a random-guessing level, e.g., the accuracy of gender inference is reduced from 97.36% to near chance, while the accuracy of the intended classification tasks performed via the cloud service drops by only about 2%.
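The core idea is an alternating min-max game: adversarial reconstruction and attribute-inference networks are trained to exploit the features, while the obfuscator is trained to defeat them without hurting the task branch. The toy scalar sketch below is a hypothetical illustration, not the paper's architecture: a single gate weight `w` stands in for the obfuscator, a linear weight `a` for the attribute adversary, a weight `v` for the task branch, and squared errors replace the paper's losses. The obfuscator drives the adversary's output toward the uninformative (chance) point rather than literally maximizing its error, a common stabilization of this kind of objective.

```python
# Toy scalar sketch of alternating adversarial training.
# Private bit s in {-1, +1}; obfuscated feature z = w * s.
# Adversary predicts s as a * z; its loss (a*w*s - s)^2 = (a*w - 1)^2.
# Obfuscator keeps the task output v near its target 1 while pushing
# the adversary's prediction magnitude |a * w| toward 0 (chance level).

lr, lam = 0.1, 1.0        # learning rate, privacy trade-off weight
w, a, v = 1.0, 0.1, 0.0   # obfuscator gate, adversary weight, task weight

for _ in range(200):
    # adversary step: descend (a*w - 1)^2 to recover the private bit
    a -= lr * 2 * (a * w - 1) * w
    # obfuscator/task step: descend (v - 1)^2 + lam * (a*w)^2
    v -= lr * 2 * (v - 1)            # keep the task accurate
    w -= lr * lam * 2 * (a * w) * a  # make the adversary uninformative

print(round(v, 3), round(abs(a * w), 3))  # prints: 1.0 0.0
```

After training, the task output converges to its target while the adversary's prediction collapses to chance, mirroring (in miniature) the paper's reported effect: near-intact classification accuracy with private attributes reduced to random guessing.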


Related research

05/23/2020 — TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework with Anonymized Intermediate Representations
The success of deep learning partially benefits from the availability of...

01/25/2019 — Better accuracy with quantified privacy: representations learned via reconstructive adversarial network
The remarkable success of machine learning, especially deep learning, ha...

03/05/2022 — Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes
Deep neural networks are increasingly deployed for scene analytics, incl...

12/07/2018 — Privacy Partitioning: Protecting User Data During the Deep Learning Inference Phase
We present a practical method for protecting data during the inference p...

06/08/2020 — Privacy Adversarial Network: Representation Learning for Mobile Data Privacy
The remarkable success of machine learning has fostered a growing number...

07/29/2020 — Privacy-preserving Voice Analysis via Disentangled Representations
Voice User Interfaces (VUIs) are increasingly popular and built into sma...

12/18/2019 — Preventing Information Leakage with Neural Architecture Search
Powered by machine learning services in the cloud, numerous learning-dri...
