Communication and Storage Efficient Federated Split Learning

02/11/2023
by Yujia Mu, et al.

Federated learning (FL) is a popular distributed machine learning (ML) paradigm, but it is often limited by significant communication costs and the constrained computation capabilities of edge devices. Federated Split Learning (FSL) preserves the parallel model training principle of FL while reducing the device computation requirement by splitting the ML model between the server and the clients. However, FSL still incurs very high communication overhead, since smashed data and gradients must be transmitted between the clients and the server in every global round. Furthermore, the server has to maintain a separate model for every client, resulting in computation and storage requirements that grow linearly with the number of clients. This paper addresses both issues by proposing a communication and storage efficient federated split learning (CSE-FSL) strategy, which utilizes an auxiliary network to update the client models locally while keeping only a single model at the server, thereby avoiding the communication of gradients from the server and greatly reducing the server's resource requirements. Communication cost is further reduced by having the clients send smashed data only in selected epochs. We provide a rigorous theoretical analysis of CSE-FSL that guarantees its convergence for non-convex loss functions. Extensive experiments on several real-world FL tasks demonstrate that CSE-FSL achieves a significant communication reduction over existing FSL techniques while attaining state-of-the-art convergence and model accuracy.
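To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas the abstract describes: the client updates its model through a local auxiliary network instead of waiting for gradients from the server, and it uploads smashed data only in selected steps to a single shared server-side model. The module names, layer sizes, split point, and the upload_period value are illustrative assumptions, not the paper's actual architecture or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Client-side front of the split model, plus a small auxiliary head that
# lets the client compute a local loss and update its own parameters
# without receiving any gradients from the server.
client_front = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
aux_head = nn.Linear(128, 10)                       # auxiliary network
server_tail = nn.Sequential(nn.Linear(128, 64),     # the single server model
                            nn.ReLU(), nn.Linear(64, 10))

client_opt = torch.optim.SGD(
    list(client_front.parameters()) + list(aux_head.parameters()), lr=0.05)
server_opt = torch.optim.SGD(server_tail.parameters(), lr=0.05)

upload_period = 5   # assumed value: upload smashed data every 5th step only

for step in range(100):
    x = torch.randn(32, 784)               # stand-in for a client mini-batch
    y = torch.randint(0, 10, (32,))

    # Client update: purely local, driven by the auxiliary network's loss,
    # so no gradient ever needs to travel from the server to the client.
    smashed = client_front(x)
    aux_loss = F.cross_entropy(aux_head(smashed), y)
    client_opt.zero_grad()
    aux_loss.backward()
    client_opt.step()

    # Server update: happens only in selected steps, and against one shared
    # model rather than a per-client copy. detach() reflects that only the
    # activation values (and labels) are uploaded, never a gradient path.
    if step % upload_period == 0:
        server_loss = F.cross_entropy(server_tail(smashed.detach()), y)
        server_opt.zero_grad()
        server_loss.backward()
        server_opt.step()

In the full multi-client scheme, every client would run the local update above against the same shared server_tail, with the client-side models presumably aggregated at the end of each global round as in standard FSL; the sketch isolates only the two mechanisms that cut communication and server storage.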

