Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing

by Hung T. Nguyen et al.

Federated learning is a prime candidate for distributed machine learning at the network edge due to its low communication complexity and privacy protection, among other attractive properties. However, existing algorithms suffer from slow convergence and/or fragile performance due to the considerable heterogeneity in data distribution and in computation and communication capabilities at the edge. In this work, we tackle both of these issues by focusing on the key component of model aggregation in federated learning systems and studying optimal algorithms to perform this task. In particular, we propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction in each round of optimization. This context-dependent bound is derived from the particular devices participating in that round together with a smoothness assumption on the overall loss function. We show that this aggregation guarantees a reduction of the loss function at every round. Furthermore, our aggregation can be integrated with many existing algorithms to obtain their contextual versions. Our experimental results demonstrate significant improvements in convergence speed and robustness of the contextual versions over the original algorithms. We also consider different variants of the contextual aggregation and show robust performance even in the most extreme settings.
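The abstract does not spell out the aggregation rule, but the core idea it describes — choosing the aggregate update that minimizes a context-dependent upper bound on the loss, derived from L-smoothness and the round's participating clients — can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's exact scheme: it combines client deltas with data-size weights, then rescales the aggregate direction to minimize the quadratic bound `f(x + a*d) <= f(x) + a*g·d + (L/2)*a^2*||d||^2`, which guarantees loss reduction whenever the direction is a descent direction.

```python
import numpy as np

def contextual_aggregate(deltas, grads, weights, L=1.0):
    """Illustrative sketch (assumed, not the paper's exact algorithm).

    deltas:  per-client model updates from local training (np.ndarray each)
    grads:   per-client gradient estimates used as a proxy for the
             global gradient (np.ndarray each)
    weights: per-client weights, e.g. proportional to local data size
    L:       assumed smoothness constant of the global loss
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # FedAvg-style weighted aggregate of the client updates.
    d = sum(wi * di for wi, di in zip(w, deltas))
    # Weighted proxy for the global gradient at the current model.
    g = sum(wi * gi for wi, gi in zip(w, grads))
    # L-smoothness gives f(x + a*d) <= f(x) + a*(g . d) + (L/2)*a^2*||d||^2.
    # Minimizing the right-hand side over the scalar a yields
    # a* = -(g . d) / (L * ||d||^2), which is positive whenever d is a
    # descent direction (g . d < 0) and then guarantees a loss reduction
    # of at least (g . d)^2 / (2 * L * ||d||^2).
    a_star = -np.dot(g, d) / (L * np.dot(d, d))
    return max(a_star, 0.0) * d
```

Because the step size `a*` depends on the gradients and updates of the clients that actually showed up this round, the resulting update is "contextual" in the sense the abstract describes: the same smoothness bound yields a different, round-specific aggregation each time.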




