Reducing Communication for Split Learning by Randomized Top-k Sparsification

by Fei Zheng et al.
Zhejiang University

Split learning is a simple solution for Vertical Federated Learning (VFL) that has drawn substantial attention in both research and application due to its simplicity and efficiency. However, communication efficiency remains a crucial issue for split learning. In this paper, we investigate multiple communication-reduction methods for split learning, including cut layer size reduction, top-k sparsification, quantization, and L1 regularization. Building on an analysis of cut layer size reduction and top-k sparsification, we further propose randomized top-k sparsification, which selects top-k elements with high probability while retaining a small probability of selecting non-top-k elements, helping the model generalize and converge better. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance at the same compression level.
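To make the idea concrete, below is a minimal sketch of randomized top-k sparsification applied to a cut-layer activation vector. This is an illustration only, not the paper's implementation: the swap probability `eps` and the rule of replacing individual top-k slots with uniformly sampled non-top-k indices are assumptions, standing in for whatever sampling distribution the paper actually uses.

```python
import numpy as np

def randomized_topk(x, k, eps=0.1, rng=None):
    """Sparsify a 1-D vector to k nonzeros: keep elements chosen mostly
    from the top-k by magnitude, but with small probability `eps` swap
    each top-k slot for a random non-top-k index.

    Illustrative sketch; `eps` and the swap rule are assumptions,
    not the paper's exact sampling scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(np.abs(x))[::-1]   # indices by descending magnitude
    top, rest = order[:k], order[k:]
    # Decide independently for each top-k slot whether to swap it out.
    swap = rng.random(k) < eps
    n_swap = min(int(swap.sum()), rest.size)
    chosen = top.copy()
    if n_swap > 0:
        chosen[np.flatnonzero(swap)[:n_swap]] = rng.choice(
            rest, size=n_swap, replace=False)
    out = np.zeros_like(x)
    out[chosen] = x[chosen]   # only k values need to be transmitted
    return out
```

With `eps=0` this reduces to plain top-k sparsification; increasing `eps` injects the randomness that, per the paper's argument, helps generalization and convergence. In split learning, only the k index/value pairs of `out` would cross the network, giving the communication reduction.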



