An Improved Algorithm for Clustered Federated Learning

10/20/2022
by   Harshvardhan, et al.

In this paper, we address the dichotomy between heterogeneous models and simultaneous training in Federated Learning (FL) via a clustering framework. We define a new clustering model for FL based on the (optimal) local models of the users: two users belong to the same cluster if their local models are close; otherwise they belong to different clusters. A standard algorithm for clustered FL proposed in prior work, called IFCA, requires suitable initialization and knowledge of hyper-parameters such as the number of clusters (which is often quite difficult to obtain in practical applications) in order to converge. We propose an improved algorithm, the Successive Refine Federated Clustering Algorithm (SR-FCA), which removes these restrictive assumptions. SR-FCA initializes each user as a singleton cluster and then successively refines the cluster estimates by exploiting the similarity of users belonging to the same cluster. At every intermediate step, SR-FCA runs a robust federated learning algorithm within each cluster to exploit simultaneous training and to correct clustering errors. Furthermore, SR-FCA does not require any good initialization (warm start), in either theory or practice. We show that, with a proper choice of learning rate, SR-FCA incurs arbitrarily small clustering error. Additionally, we validate the performance of our algorithm on standard FL datasets with non-convex problems such as neural networks, and we show the benefits of SR-FCA over the baselines.
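The abstract only describes the algorithm in words, so the following is a minimal, hypothetical Python sketch of the idea it outlines: start from singleton clusters, merge users whose local models are close in parameter space, and refine each cluster with a robust aggregate of its members' models. The distance threshold, the coordinate-wise trimmed-mean aggregator, and the synthetic "local models" below are illustrative assumptions, not the paper's exact SR-FCA procedure.

    # Illustrative sketch (not the paper's exact SR-FCA): singleton start,
    # merge users whose local models are close, robust within-cluster refinement.
    import numpy as np

    def pairwise_close(models, threshold):
        """Boolean matrix marking user pairs whose models are within `threshold`."""
        n = len(models)
        close = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(i + 1, n):
                close[i, j] = close[j, i] = np.linalg.norm(models[i] - models[j]) <= threshold
        return close

    def merge_singletons(models, threshold):
        """Cluster users as connected components of the 'models are close' graph."""
        n = len(models)
        close = pairwise_close(models, threshold)
        labels = -np.ones(n, dtype=int)
        cluster = 0
        for start in range(n):
            if labels[start] >= 0:
                continue
            stack = [start]
            while stack:
                u = stack.pop()
                if labels[u] >= 0:
                    continue
                labels[u] = cluster
                stack.extend(v for v in range(n) if close[u, v] and labels[v] < 0)
            cluster += 1
        return labels

    def trimmed_mean(stacked, trim_frac=0.1):
        """Coordinate-wise trimmed mean: one simple robust aggregator (an assumption here)."""
        k = int(trim_frac * stacked.shape[0])
        s = np.sort(stacked, axis=0)
        return s[k:stacked.shape[0] - k].mean(axis=0) if stacked.shape[0] > 2 * k else stacked.mean(axis=0)

    def refine(local_models, threshold, rounds=3):
        """Alternate merging by model distance with robust within-cluster averaging."""
        models = [m.copy() for m in local_models]
        for _ in range(rounds):
            labels = merge_singletons(models, threshold)
            for c in np.unique(labels):
                members = np.where(labels == c)[0]
                center = trimmed_mean(np.stack([models[i] for i in members]))
                for i in members:
                    models[i] = center  # each member adopts the cluster model
        return labels, models

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Two synthetic groups of "local models" in parameter space.
        ground_truth = np.array([0] * 5 + [1] * 5)
        local_models = [rng.normal(3.0 * g, 0.1, size=10) for g in ground_truth]
        labels, _ = refine(local_models, threshold=1.0)
        print("estimated clusters:", labels)

In this toy run, the ten users split into two groups of close local models, and the singleton-then-merge procedure recovers the two clusters without being told how many clusters exist, which is the property the abstract emphasizes.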
