Making Split Learning Resilient to Label Leakage by Potential Energy Loss

by Fei Zheng et al.

As a practical privacy-preserving learning method, split learning has drawn much attention in academia and industry. However, its security is constantly questioned because intermediate results are shared during training and inference. In this paper, we focus on the privacy leakage caused by the trained split model: an attacker can fine-tune the bottom model with only a few labeled samples and achieve quite good performance. To prevent this kind of leakage, we propose the potential energy loss, which makes the output of the bottom model follow a more 'complicated' distribution by pushing outputs of the same class toward the decision boundary. As a result, an adversary who fine-tunes the bottom model with only a few leaked labeled samples suffers a large generalization error. Experimental results show that our method significantly lowers the attacker's fine-tuning accuracy, making the split model more resilient to label leakage.
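The abstract does not give the exact formulation of the potential energy loss, but the physical analogy it suggests can be sketched as a pairwise repulsive penalty among same-class outputs: treating each bottom-model output as a like-signed charged particle, the summed inverse pairwise distances within a class are large when outputs cluster tightly, so minimizing this term spreads same-class outputs apart, toward the decision boundary. The function below is a minimal NumPy illustration of that idea, not the paper's actual loss; the name `potential_energy_loss` and the inverse-distance potential are assumptions.

```python
import numpy as np

def potential_energy_loss(features, labels, eps=1e-6):
    """Illustrative pairwise repulsive penalty among same-class outputs.

    features: (n, d) array of bottom-model outputs.
    labels:   (n,) array of class labels.
    Sums 1/distance over all same-class pairs (an electrostatic-style
    potential energy), so tight same-class clusters incur a high loss.
    """
    loss = 0.0
    for c in np.unique(labels):
        group = features[labels == c]
        n = len(group)
        if n < 2:
            continue
        # All pairwise Euclidean distances within the class.
        diffs = group[:, None, :] - group[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        # Upper triangle counts each unordered pair exactly once.
        iu = np.triu_indices(n, k=1)
        loss += float((1.0 / (dists[iu] + eps)).sum())
    return loss
```

In training, a term like this would be added (with a weight) to the usual classification loss, so the bottom model keeps its utility for the full split model while its outputs become harder for a label-inference attacker to fine-tune from.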


Related papers:

- Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information
- Does Prompt-Tuning Language Model Ensure Privacy?
- Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning
- Defending Label Inference Attacks in Split Learning under Regression Setting
- Clustering Label Inference Attack against Practical Split Learning
- On-Device Model Fine-Tuning with Label Correction in Recommender Systems
- Split-U-Net: Preventing Data Leakage in Split Learning for Collaborative Multi-Modal Brain Tumor Segmentation
