Unlearnable Examples: Making Personal Data Unexploitable

01/13/2021
by Hanxun Huang et al.

The volume of "free" data on the internet has been key to the current success of deep learning. However, it also raises privacy concerns about the unauthorized exploitation of personal data for training commercial models. It is thus crucial to develop methods to prevent unauthorized data exploitation. This paper raises the question: can data be made unlearnable for deep learning models? We present a type of error-minimizing noise that can indeed make training examples unlearnable. Error-minimizing noise is intentionally generated to reduce the training error of one or more examples to close to zero, which can trick the model into believing there is "nothing" to learn from these examples. The noise is restricted to be imperceptible to human eyes, and thus does not affect normal data utility. We empirically verify the effectiveness of error-minimizing noise in both sample-wise and class-wise forms. We also demonstrate its flexibility under extensive experimental settings and its practicality in a case study on face recognition. Our work establishes an important first step towards making personal data unexploitable by deep learning models.
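To make the mechanism concrete, here is a minimal PyTorch-style sketch of the sample-wise noise generation step as suggested by the abstract: projected gradient descent that minimizes (rather than maximizes) the training loss with respect to an imperceptible perturbation. The function name, hyperparameters, and the assumption of a differentiable classifier `model` with image inputs in [0, 1] are illustrative, not the paper's reference implementation; in the full method, a step like this alternates with ordinary training of a source model.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, epsilon=8/255, step_size=2/255, steps=20):
    """Sketch of sample-wise error-minimizing noise for a batch (x, y).

    Unlike adversarial (error-maximizing) PGD, each step performs
    gradient *descent* on the input perturbation, pushing the training
    error of (x + delta, y) toward zero so the model sees "nothing"
    left to learn from these examples.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad.sign()               # descend, do not ascend
            delta.clamp_(-epsilon, epsilon)                # imperceptibility budget (L_inf)
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)  # keep x + delta a valid image
    return delta.detach()
```

A class-wise variant would instead share one delta across all examples of a class (e.g. by averaging the per-sample gradients), a form the abstract also reports as effective.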

Related research

- Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation (03/05/2023). While the use of graph-structured data in various fields is becoming inc...
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder (12/04/2022). The success of deep learning is partly attributed to the availability of...
- Re-thinking Data Availability Attacks Against Deep Neural Networks (05/18/2023). The unauthorized use of personal data for commercial purposes and the cl...
- Oriole: Thwarting Privacy against Trustworthy Deep Learning Models (02/23/2021). Deep Neural Networks have achieved unprecedented success in the field of...
- Anonymizing Machine Learning Models (07/26/2020). There is a known tension between the need to analyze personal data to dr...
- Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples (12/31/2022). There is a growing interest in developing unlearnable examples (UEs) aga...
- Scaling down Deep Learning (11/29/2020). Though deep learning models have taken on commercial and political relev...
