Defending Label Inference Attacks in Split Learning under Regression Setting

by Haoze Qiu, et al.
Zhejiang University

As a privacy-preserving method for implementing Vertical Federated Learning, Split Learning has been extensively researched. However, numerous studies have indicated that its privacy-preserving capability is insufficient. In this paper, we focus on label inference attacks in Split Learning under the regression setting, which are mainly implemented through gradient inversion. To defend against such attacks, we propose Random Label Extension (RLE), in which labels are extended to obfuscate the label information contained in the gradients, preventing the attacker from using gradients to train an attack model that can infer the original labels. To further reduce the impact on the original task, we propose Model-based adaptive Label Extension (MLE), in which the original labels are preserved within the extended labels and dominate the training process. Experimental results show that, compared to basic defense methods, our proposed defenses significantly reduce the attack model's performance while preserving the performance of the original task.
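To make the label-extension idea concrete, below is a minimal sketch of what RLE and MLE could look like on the label party's side. This is an illustration of the general technique, not the paper's exact construction: the extension dimension, the placement of the true label in the first column, and the loss weighting are all assumptions introduced here for clarity.

```python
import numpy as np

def random_label_extension(y, ext_dim=4, seed=None):
    """RLE sketch: extend each scalar regression label with random
    components, so gradients computed against the extended targets
    no longer expose the original label directly.

    `ext_dim` and the column layout are assumptions of this sketch.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    noise = rng.normal(size=(y.shape[0], ext_dim - 1))
    # first column keeps the true label; the rest is random padding
    return np.concatenate([y, noise], axis=1)

def weighted_mse(pred, y_ext, w_true=0.9):
    """MLE-flavored loss sketch: weight the preserved true-label
    column more heavily so the original task dominates training.
    The specific weighting scheme is an assumption, not the paper's.
    """
    k = y_ext.shape[1]
    w = np.full(k, (1.0 - w_true) / (k - 1))
    w[0] = w_true
    return float(np.mean(w * (pred - y_ext) ** 2))
```

Under this sketch, the label party trains the top model against the extended targets with the weighted loss, while the attacker observing cut-layer gradients sees contributions mixed with the random components.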



