Quantifying the Impact of Label Noise on Federated Learning

11/15/2022
by   Shuqi Ke, et al.

Federated Learning (FL) is a distributed machine learning paradigm in which clients collaboratively train a model on their local (human-generated) datasets while preserving privacy. While existing studies focus on developing FL algorithms that tackle data heterogeneity across clients, the equally important issue of data quality (e.g., label noise) in FL has been overlooked. This paper aims to fill that gap with a quantitative study of the impact of label noise on FL. Theoretically, we derive an upper bound on the generalization error that is linear in the clients' label noise level. Empirically, we conduct experiments on the MNIST and CIFAR-10 datasets using various FL algorithms and show that global model accuracy decreases linearly as the noise level increases, consistent with our theoretical analysis. We further find that label noise slows the convergence of FL training and that the global model tends to overfit when the noise level is high.
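To make the experimental setting concrete, here is a minimal sketch of how per-client label noise might be simulated. It assumes a symmetric noise model (each label is flipped, with probability equal to the noise level, to a different class chosen uniformly at random); the function `flip_labels` and the choice of noise levels are illustrative, not taken from the paper.

```python
import numpy as np

def flip_labels(labels, noise_level, num_classes, rng=None):
    """Apply symmetric label noise: with probability `noise_level`,
    replace a label with a different class chosen uniformly at random.
    (Illustrative sketch; the paper's exact noise model may differ.)"""
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels).copy()
    flip_mask = rng.random(labels.shape[0]) < noise_level
    # Draw an offset in [1, num_classes) so the new label always differs
    offsets = rng.integers(1, num_classes, size=int(flip_mask.sum()))
    labels[flip_mask] = (labels[flip_mask] + offsets) % num_classes
    return labels

# Example: heterogeneous noise levels across five simulated FL clients
rng = np.random.default_rng(0)
client_labels = [rng.integers(0, 10, size=1000) for _ in range(5)]
noise_levels = [0.0, 0.1, 0.2, 0.3, 0.4]
noisy_client_labels = [
    flip_labels(y, p, num_classes=10, rng=rng)
    for y, p in zip(client_labels, noise_levels)
]
```

Under this setup, the fraction of corrupted labels on each client matches its noise level in expectation, which is the quantity the paper's linear accuracy degradation is measured against.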
