Knowledge Cross-Distillation for Membership Privacy

11/02/2021
by Rishav Chourasia, et al.

A membership inference attack (MIA) poses a privacy risk to the training data of a machine learning model: the attacker guesses whether a target data point is a member of the model's training dataset. The state-of-the-art defense against MIAs, distillation for membership privacy (DMP), requires not only the private data to be protected but also a large amount of unlabeled public data. In certain privacy-sensitive domains, however, such as medicine and finance, such public data may not be available. Moreover, a straightforward workaround, generating the public data with generative adversarial networks, significantly decreases model accuracy, as reported by the authors of DMP. To overcome this problem, we propose a novel knowledge-distillation defense against MIAs that requires no public data. Our experiments show that the privacy protection and accuracy of our defense are comparable to those of DMP on Purchase100 and Texas100, the benchmark tabular datasets used in MIA research, and that on the image dataset CIFAR10 our defense achieves a much better privacy-utility trade-off than existing defenses that use no public data.
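The building block here is standard soft-label knowledge distillation; the novelty is avoiding a public transfer set. Below is a minimal PyTorch sketch under an assumed reading of "cross-distillation" (not the paper's verified algorithm): the private set is split into folds, and the student is distilled on each fold using soft labels from a teacher trained only on the other fold, so no sample receives soft labels from a model that trained on it. The names kd_loss and cross_distill_step and the two-fold setup are illustrative.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Standard soft-label distillation loss (Hinton et al.), scaled by
    # T^2 as is conventional; the paper's exact loss may differ.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def cross_distill_step(student, teacher_a, teacher_b, fold_a, fold_b, opt, T=4.0):
    # Hypothetical cross-distillation step: teacher_a was trained only on
    # fold A and teacher_b only on fold B (training omitted here), so each
    # fold is soft-labeled by a teacher that never saw it.
    opt.zero_grad()
    with torch.no_grad():
        soft_a = teacher_b(fold_a)  # teacher_b did not train on fold A
        soft_b = teacher_a(fold_b)  # teacher_a did not train on fold B
    loss = kd_loss(student(fold_a), soft_a, T) + kd_loss(student(fold_b), soft_b, T)
    loss.backward()
    opt.step()
    return loss.item()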

Related research

Reconciling Utility and Membership Privacy via Knowledge Distillation (06/15/2019)
Large capacity machine learning models are prone to membership inference...

Students Parrot Their Teachers: Membership Inference on Model Distillation (03/06/2023)
Model distillation is frequently proposed as a technique to reduce the p...

Membership Privacy Protection for Image Translation Models via Adversarial Knowledge Distillation (03/10/2022)
Image-to-image translation models are shown to be vulnerable to the Memb...

A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness (05/05/2023)
The aim of dataset distillation is to encode the rich features of an ori...

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks (06/11/2022)
Membership inference attacks (MIAs) against machine learning models can ...

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods (03/04/2021)
Machine learning (ML) models used in medical imaging diagnostics can be ...

Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction (07/04/2023)
Machine learning (ML) models are vulnerable to membership inference atta...
