Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning

09/05/2021
by Yusen Wu, et al.

Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by injecting inaccurate or misrepresentative data, or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and likewise degrade prediction outcomes. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against the ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions as if they were neither attacked nor faulty, even when almost half of the nodes are compromised or have failed. We also report experimental evaluations of ParSGD in comparison with other algorithms.
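
The abstract does not spell out ParSGD's update rule. As a rough illustration of the general idea only (a server that waits for a subset of worker gradients each round and aggregates them robustly), here is a minimal NumPy sketch. The names trimmed_mean and parsgd_step, the coordinate-wise trimmed-mean aggregator, and the parameters k and f are illustrative assumptions, not the paper's actual algorithm.

    # Minimal sketch of a partial-synchronous, Byzantine-robust SGD step.
    # Assumptions (not taken from the paper): the server waits for gradients
    # from the first k of n workers each round, then aggregates them with a
    # coordinate-wise trimmed mean to bound the influence of up to f
    # compromised or faulty workers (2*f < k). ParSGD may differ in detail.

    import numpy as np

    def trimmed_mean(grads: np.ndarray, f: int) -> np.ndarray:
        """Coordinate-wise trimmed mean: drop the f largest and f smallest
        values in each coordinate, then average the remaining values."""
        sorted_grads = np.sort(grads, axis=0)       # sort each coordinate across workers
        kept = sorted_grads[f: grads.shape[0] - f]  # discard the f extremes on both sides
        return kept.mean(axis=0)

    def parsgd_step(params, worker_grads, k, f, lr=0.01):
        """One server-side update.

        params       : current model parameters (1-D array)
        worker_grads : list of gradient vectors, in arrival order
        k            : number of workers to wait for (partial synchrony)
        f            : assumed bound on compromised/faulty workers, 2*f < k
        """
        grads = np.stack(worker_grads[:k])          # use only the first k arrivals
        robust_grad = trimmed_mean(grads, f)
        return params - lr * robust_grad

    # Toy usage: 10 workers, 4 of them send poisoned gradients.
    rng = np.random.default_rng(0)
    params = np.zeros(5)
    honest = [rng.normal(1.0, 0.1, size=5) for _ in range(6)]
    poisoned = [rng.normal(-50.0, 0.1, size=5) for _ in range(4)]
    params = parsgd_step(params, honest + poisoned, k=10, f=4, lr=0.1)
    print(params)  # stays close to the honest update despite the poisoned gradients

In this toy run the trimmed mean discards the four poisoned gradients (and the four largest honest ones) in every coordinate, so the update tracks the honest direction even with nearly half of the workers compromised, which mirrors the tolerance level claimed in the abstract.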


research
11/06/2019

The Threat of Adversarial Attacks on Machine Learning in Network Security – A Survey

Machine learning models have made many decision support systems to be fa...
research
04/20/2023

Byzantine-Resilient Learning Beyond Gradients: Distributing Evolutionary Search

Modern machine learning (ML) models are capable of impressive performanc...
research
03/01/2018

Localizing Faults in Cloud Systems

By leveraging large clusters of commodity hardware, the Cloud offers gre...
research
10/12/2022

Self-Stabilization and Byzantine Tolerance for Maximal Independent Set

We analyze the impact of transient and Byzantine faults on the construct...
research
10/29/2022

Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

In distributed learning systems, robustness issues may arise from two so...
research
02/25/2022

Attacks and Faults Injection in Self-Driving Agents on the Carla Simulator – Experience Report

Machine Learning applications are acknowledged at the foundation of auto...
research
08/23/2019

Adversary-resilient Inference and Machine Learning: From Distributed to Decentralized

While the last few decades have witnessed a huge body of work devoted to...
