Energy-Efficient Federated Edge Learning with Joint Communication and Computation Design

by Xiaopeng Mo, et al.

This paper studies a federated edge learning system in which an edge server coordinates a set of edge devices to train a shared machine learning (ML) model based on their locally distributed data samples. During the distributed training, we exploit joint communication and computation design to improve the system energy efficiency, in which both the communication resource allocation for global ML-parameter aggregation and the computation resource allocation for local ML-parameter updates are jointly optimized. In particular, we consider two transmission protocols for edge devices to upload ML parameters to the edge server, based on non-orthogonal multiple access (NOMA) and time-division multiple access (TDMA), respectively. Under both protocols, we minimize the total energy consumption at all edge devices over a finite training duration subject to a given training accuracy, by jointly optimizing the transmission power and rates at the edge devices for uploading ML parameters and their central processing unit (CPU) frequencies for local updates. We propose efficient algorithms that optimally solve the formulated energy minimization problems using techniques from convex optimization. Numerical results show that, compared with benchmark schemes, the proposed joint communication and computation design significantly improves the energy efficiency of the federated edge learning system by properly balancing the energy tradeoff between communication and computation.
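The communication-computation energy tradeoff the abstract refers to can be illustrated with a minimal single-device sketch (not the paper's algorithm): running the CPU slower cuts computation energy quadratically, but leaves less time to upload, which raises the required transmission rate and hence the transmit power exponentially. All parameter values and function names below are hypothetical, using the common κ·C·f² CPU-energy model and a Shannon-rate uplink.

```python
# Illustrative sketch, not the paper's method: trade off local-computation
# energy against uplink energy for one device under a per-round deadline T.
# All numbers are hypothetical.

KAPPA = 1e-28   # effective switched-capacitance coefficient of the CPU
CYCLES = 1e9    # CPU cycles needed for one round of local ML-parameter updates
BITS = 1e6      # ML-parameter payload to upload (bits)
BW = 1e6        # uplink bandwidth (Hz)
GAMMA = 1.0     # normalized channel gain h / (N0 * B), assumed
T = 2.0         # per-round deadline (seconds)

def total_energy(t_comp: float) -> float:
    """Energy when t_comp seconds go to computing and T - t_comp to uploading."""
    f = CYCLES / t_comp                      # CPU frequency needed to finish in time
    e_comp = KAPPA * CYCLES * f ** 2         # dynamic CPU energy: kappa * C * f^2
    t_tx = T - t_comp
    rate = BITS / t_tx                       # required uplink rate (bits/s)
    power = (2 ** (rate / BW) - 1) / GAMMA   # invert r = B * log2(1 + p * gamma)
    return e_comp + power * t_tx

def minimize_energy(n: int = 2000):
    """Grid search over the computation/communication time split."""
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        t = T * i / n
        e = total_energy(t)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e
```

With these numbers the optimum sits at an interior time split, cheaper than devoting most of the deadline to either computing or transmitting alone. The paper solves multi-device generalizations of this kind of problem, under NOMA and TDMA uploading, via convex optimization rather than grid search.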




Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning

Federated edge learning (FEEL) is a promising distributed learning techn...

Sequencing and Scheduling for Multi-User Machine-Type Communication

In this paper, we propose joint sequencing and scheduling optimization f...

D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge

Mobile edge learning is an emerging technique that enables distributed e...

Wireless Distributed Edge Learning: How Many Edge Devices Do We Need?

We consider distributed machine learning at the wireless edge, where a p...

Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer

Federated edge learning (FEEL) is a widely adopted framework for trainin...

Optimizing Pipelined Computation and Communication for Latency-Constrained Edge Learning

Consider a device that is connected to an edge processor via a communica...

Fine-Grained Data Selection for Improved Energy Efficiency of Federated Edge Learning

In federated edge learning (FEEL), energy-constrained devices at the net...
