Re-thinking Data Availability Attacks Against Deep Neural Networks

by Bin Fang, et al.

The unauthorized use of personal data for commercial purposes and the clandestine acquisition of private data for training machine learning models continue to raise concerns. In response, researchers have proposed availability attacks that aim to render data unexploitable. However, many current attack methods are defeated by adversarial training. In this paper, we re-examine the concept of unlearnable examples and show that the existing robust error-minimizing noise optimizes an inaccurate objective. Building on these observations, we introduce a novel optimization paradigm that yields stronger protection at reduced computational cost. We have conducted extensive experiments to substantiate the soundness of our approach, and our method establishes a solid foundation for future research in this area.
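To make the idea behind unlearnable examples concrete, below is a minimal NumPy sketch of the min-min (error-minimizing) objective that the abstract refers to: the perturbation, like the model, is optimized to *minimize* the training loss within a small budget, so the poisoned data appears "already learned" and carries little usable signal. The logistic-regression model, step sizes, and L-infinity budget here are illustrative assumptions, not the paper's actual setup or its proposed paradigm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, Xp, y):
    """Cross-entropy loss plus gradients w.r.t. weights and inputs."""
    p = sigmoid(Xp @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = (p - y) / len(y)          # d(loss)/d(logits)
    grad_w = Xp.T @ err             # gradient w.r.t. model weights
    grad_X = np.outer(err, w)       # gradient w.r.t. the inputs
    return loss, grad_w, grad_X

eps = 0.5                  # L-infinity noise budget (illustrative)
delta = np.zeros_like(X)   # error-minimizing noise
w = np.zeros(d)

for _ in range(200):
    # Inner problem: the model minimizes loss on the perturbed data.
    for _ in range(5):
        _, gw, _ = loss_and_grads(w, X + delta, y)
        w -= 0.5 * gw
    # Outer problem: the noise ALSO minimizes the loss (min-min),
    # projected back onto the L-infinity ball of radius eps.
    _, _, gX = loss_and_grads(w, X + delta, y)
    delta = np.clip(delta - 0.5 * gX, -eps, eps)
```

After optimization, the perturbed samples `X + delta` incur a much lower loss than the clean samples under the same model, which is exactly what discourages a network trained on them from extracting real features. Robust error-minimizing noise extends this with an adversarial inner step; the paper's claim is that the standard formulation of that extension targets the wrong objective.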



