Toward Efficient Federated Learning in Multi-Channeled Mobile Edge Network with Layered Gradient Compression

by Haizhou Du, et al.

A fundamental issue for federated learning (FL) is how to achieve optimal model performance under highly dynamic communication environments. This issue can be alleviated by the fact that modern edge devices can usually connect to the edge FL server via multiple communication channels (e.g., 4G, LTE, and 5G). However, having an edge device send copies of its local model to the FL server along multiple channels is redundant and time-consuming, and wastes resources (e.g., bandwidth, battery life, and monetary cost). In this paper, motivated by the layered coding techniques used in video streaming, we propose a novel FL framework called layered gradient compression (LGC). Specifically, in LGC, the local gradients from a device are coded into several layers, and each layer is sent to the FL server along a different channel. The FL server aggregates the received layers of local gradients from the devices to update the global model, and sends the result back to the devices. We prove the convergence of LGC and formally define the problem of resource-efficient federated learning with LGC. We then propose a learning-based algorithm for each device to dynamically adjust its local computation (i.e., the number of local stochastic gradient descent steps) and its communication decisions (i.e., the compression level of each layer and the layer-to-channel mapping) in each iteration. Results from extensive experiments show that, using our algorithm, LGC significantly reduces training time and improves resource utilization while achieving similar accuracy, compared with well-known FL mechanisms.
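The abstract does not spell out the coding scheme, but the video-streaming analogy suggests a magnitude-ordered layering: a base layer carrying the largest gradient entries and enhancement layers adding finer detail, each sent over a different channel. The sketch below illustrates that idea with a hypothetical top-k-style partition (`layer_fracs`, `layered_compress`, and `aggregate` are illustrative names, not the paper's API); the server simply sums whatever layers arrive.

```python
import numpy as np

def layered_compress(grad, layer_fracs):
    """Split a flat gradient into magnitude-ordered sparse layers.

    layer_fracs gives the fraction of coordinates assigned to each
    layer (a hypothetical parameterization; the paper's exact coding
    scheme is not specified in the abstract). Layer 0 holds the
    largest-magnitude entries (the "base layer"); later layers add
    progressively finer detail, mirroring layered video coding.
    """
    order = np.argsort(-np.abs(grad))      # indices sorted by |g|, descending
    n, layers, start = len(grad), [], 0
    for frac in layer_fracs:
        k = int(round(frac * n))
        idx = order[start:start + k]
        layers.append((idx, grad[idx]))    # sparse (index, value) pairs
        start += k
    return layers

def aggregate(received_layers_per_device, n):
    """Server side: sum whichever layers arrived from each device."""
    total = np.zeros(n)
    for layers in received_layers_per_device:
        for idx, vals in layers:
            total[idx] += vals
    return total / len(received_layers_per_device)

# Example: two devices, three layers (e.g., one per channel).
rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=10), rng.normal(size=10)
fracs = [0.2, 0.3, 0.5]                    # fractions sum to 1 here
agg = aggregate([layered_compress(g1, fracs),
                 layered_compress(g2, fracs)], n=10)
```

If every layer is delivered (as in this example, where the fractions cover all coordinates), the aggregate equals the plain gradient average; dropping an enhancement layer on a congested channel degrades the update gracefully instead of losing it entirely.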


FedGreen: Federated Learning with Fine-Grained Gradient Compression for Green Mobile Edge Computing

Federated learning (FL) enables devices in mobile edge computing (MEC) t...

Toward efficient resource utilization at edge nodes in federated learning

Federated learning (FL) enables edge nodes to collaboratively contribute...

FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server

Despite achieving remarkable performance, Federated Learning (FL) suffer...

Communication Efficient DNN Partitioning-based Federated Learning

Efficiently running federated learning (FL) on resource-constrained devi...

Rate Region for Indirect Multiterminal Source Coding in Federated Learning

One of the main focus in federated learning (FL) is the communication ef...

Partial Variable Training for Efficient On-Device Federated Learning

This paper aims to address the major challenges of Federated Learning (F...

Federated Learning with Flexible Control

Federated learning (FL) enables distributed model training from local da...