An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning

02/14/2023
by Shenghui Li, et al.

Byzantine-robust federated learning aims to mitigate Byzantine failures during the federated training process, in which malicious participants may upload arbitrary local updates to the central server to degrade the performance of the global model. In recent years, several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients and improve the robustness of federated learning. These schemes are claimed to be Byzantine-robust under certain assumptions, while new attack strategies continue to emerge that strive to circumvent them. However, a systematic comparison and empirical study of these schemes has been lacking. In this paper, we conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning, FedSGD and FedAvg. We first survey existing Byzantine attack strategies and Byzantine-robust aggregation schemes that aim to defend against such attacks. We also propose a new scheme, ClippedClustering, which enhances the robustness of a clustering-based scheme by automatically clipping the updates. We then experimentally evaluate eight aggregation schemes under five different Byzantine attacks. Our results show that these aggregation schemes sustain relatively high accuracy in some cases but are ineffective in others. In particular, our proposed ClippedClustering successfully defends against most attacks when the local datasets are independent and identically distributed (IID). When the local datasets are Non-IID, however, the performance of all the aggregation schemes decreases significantly; with Non-IID data, some of them fail even in the complete absence of Byzantine clients. We conclude that the robustness of all the aggregation schemes is limited, highlighting the need for new defense strategies, particularly for Non-IID datasets.
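The abstract does not spell out the ClippedClustering algorithm, but the general idea it names — automatically clipping client updates and then clustering them before averaging — can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption, not the paper's actual procedure: the function name, the choice of the median update norm as the automatic clipping bound, and the tiny cosine-similarity 2-means used for clustering are all placeholders for whatever the paper specifies.

```python
import numpy as np

def clipped_clustering(updates):
    """Hypothetical sketch of a clipped, clustering-based aggregator:
    clip each update to the median L2 norm, split the clipped updates
    into two clusters by direction, and average the larger cluster."""
    updates = np.asarray(updates, dtype=float)
    # 1. Automatic clipping: scale any update whose L2 norm exceeds the
    #    median norm down to that bound (the median is an assumed choice).
    norms = np.linalg.norm(updates, axis=1)
    tau = np.median(norms)
    clipped = updates * np.minimum(1.0, tau / np.maximum(norms, 1e-12))[:, None]
    # 2. Two-way clustering of update directions: a small 2-means on unit
    #    vectors, seeded with the most dissimilar pair of updates.
    dirs = clipped / np.maximum(np.linalg.norm(clipped, axis=1, keepdims=True), 1e-12)
    pair_sims = dirs @ dirs.T
    i, j = np.unravel_index(np.argmin(pair_sims), pair_sims.shape)
    centers = dirs[[i, j]].copy()
    for _ in range(10):
        labels = (dirs @ centers.T).argmax(axis=1)   # nearest center by cosine
        for k in range(2):
            if np.any(labels == k):
                c = dirs[labels == k].mean(axis=0)
                centers[k] = c / max(np.linalg.norm(c), 1e-12)
    # 3. Treat the majority cluster as benign and average its clipped updates.
    majority = labels == np.bincount(labels, minlength=2).argmax()
    return clipped[majority].mean(axis=0)

# Toy round: four honest clients push roughly the same direction, one
# Byzantine client uploads a huge opposing update.
honest = [[1.0, 0.1], [0.9, -0.1], [1.1, 0.0], [1.0, 0.05]]
byzantine = [[-100.0, 0.0]]
aggregated = clipped_clustering(honest + byzantine)
```

The two mechanisms are complementary: clipping bounds how much magnitude a Byzantine client can inject even if its update slips into the majority cluster, while clustering discards updates that point in a clearly different direction from the majority.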

Related research

03/29/2023
A Byzantine-Resilient Aggregation Scheme for Federated Learning via Matrix Autoregression on Client Updates
In this work, we propose FLANDERS, a novel federated learning (FL) aggre...

09/11/2019
Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
Federated learning enables training collaborative machine learning model...

07/26/2021
LEGATO: A LayerwisE Gradient AggregaTiOn Algorithm for Mitigating Byzantine Attacks in Federated Learning
Federated learning has arisen as a mechanism to allow multiple participa...

11/08/2021
BARFED: Byzantine Attack-Resistant Federated Averaging Based on Outlier Elimination
In federated learning, each participant trains its local model with its ...

08/01/2021
A Decentralized Federated Learning Framework via Committee Mechanism with Convergence Guarantee
Federated learning allows multiple participants to collaboratively train...

08/27/2022
BOBA: Byzantine-Robust Federated Learning with Label Skewness
In federated learning, most existing techniques for robust aggregation a...

08/21/2022
Byzantines can also Learn from History: Fall of Centered Clipping in Federated Learning
The increasing popularity of the federated learning framework due to its...
