Binarizing Split Learning for Data Privacy Enhancement and Computation Reduction

by Ngoc Duy Pham, et al.

Split learning (SL) enables data privacy preservation by allowing clients to collaboratively train a deep learning model with the server without sharing raw data. However, SL still has limitations, such as potential data privacy leakage and a high computational load on clients. In this study, we propose binarizing the SL local layers for faster computation (up to 17.5 times less forward-propagation time in both the training and inference phases on mobile devices) and reduced memory usage (up to 32 times lower memory and bandwidth requirements). More importantly, the binarized SL (B-SL) model can reduce privacy leakage from SL smashed data with only a small degradation in model accuracy. To further enhance privacy preservation, we also propose two novel approaches: 1) training with an additional local leak loss and 2) applying differential privacy, which can be integrated into the B-SL model separately or concurrently. Experimental results on different datasets affirm the advantages of the B-SL models over several benchmark models. The effectiveness of B-SL models against the feature-space hijacking attack (FSHA) is also illustrated. Our results demonstrate that B-SL models are promising for lightweight IoT/mobile applications with high privacy-preservation requirements, such as mobile healthcare applications.
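The reported gains follow from standard weight binarization: mapping each real-valued weight to {-1, +1} lets the dot products in the local layers reduce to XNOR-and-popcount operations, and 32 binarized weights fit in a single 32-bit word. The sketch below illustrates this mechanism in plain Python; the function names and packing scheme are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of 1-bit weight binarization of the kind used in
# binarized SL local layers. All names here are hypothetical.

def binarize(weights):
    """Map real-valued weights to {-1.0, +1.0} via the sign function."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

def binary_dot(x_bits, w_bits):
    """Dot product of two {-1, +1} vectors; on hardware this reduces to
    XNOR + popcount, which is the source of the forward-pass speed-up."""
    return sum(x * w for x, w in zip(x_bits, w_bits))

def pack_bits(w_bits):
    """Pack {-1, +1} values into an integer bitmask: 32 binarized weights
    occupy one 32-bit word instead of 32 floats, hence up to 32x less
    memory and communication bandwidth."""
    mask = 0
    for i, w in enumerate(w_bits):
        if w > 0:
            mask |= 1 << i
    return mask

weights = [0.3, -1.2, 0.07, -0.5]
wb = binarize(weights)   # [1.0, -1.0, 1.0, -1.0]
packed = pack_bits(wb)   # 0b0101 == 5
```

During training, binarized networks typically keep full-precision weights for the gradient update and use a straight-through estimator for the sign function's gradient; only the binarized copies are used in the forward pass.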


Split Learning without Local Weight Sharing to Enhance Client-side Data Privacy

Split learning (SL) aims to protect user data privacy by splitting deep ...

FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models

In a vertical federated learning (VFL) system consisting of a central se...

Can We Use Split Learning on 1D CNN Models for Privacy Preserving Training?

A new collaborative learning, called split learning, was recently introd...

Advancements of federated learning towards privacy preservation: from federated learning to split learning

In the distributed collaborative machine learning (DCML) paradigm, feder...

No Peek: A Survey of private distributed deep learning

We survey distributed deep learning models for training or inference wit...

Clustering Label Inference Attack against Practical Split Learning

Split learning is deemed as a promising paradigm for privacy-preserving ...

Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information

Split learning and inference propose to run training/inference of a larg...