Federated Clustering via Matrix Factorization Models: From Model Averaging to Gradient Sharing

02/12/2020
by   Shuai Wang, et al.

Recently, federated learning (FL) has drawn significant attention due to its capability of training a model over the network without requiring clients to share their private raw data. In this paper, we study the unsupervised clustering problem under the FL setting. By adopting a generalized matrix factorization model for clustering, we propose two novel (first-order) federated clustering (FedC) algorithms based on the principles of model averaging and gradient sharing, respectively, and present their theoretical convergence conditions. We show that both algorithms have an O(1/T) convergence rate, where T is the total number of gradient evaluations per client, and that the communication cost can be effectively reduced by controlling the local epoch length and allowing partial client participation within each communication round. Numerical experiments show that the FedC algorithm based on gradient sharing outperforms the one based on model averaging, especially in scenarios with non-i.i.d. data, and can perform comparably with, or even outperform, centralized clustering algorithms.
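To make the model-averaging principle concrete, the following is a minimal, hypothetical sketch of a FedAvg-style loop for matrix factorization clustering: each client holds private rows of the data matrix, runs a few local gradient steps on its local assignment factor H and its copy of the shared factor W (whose rows act as cluster representatives), and the server averages the returned copies of W. This is an illustration of the general principle only, not the paper's FedC algorithm; all function names, step sizes, and the squared-error objective are assumptions.

```python
import numpy as np

def local_update(X, W, steps=5, lr=0.1):
    """One client's local epoch (illustrative, not the paper's method).

    Fits X ~= H @ W on private data X by alternating gradient steps on
    the local factor H and the client's copy of the shared factor W,
    minimizing 0.5/n * ||H @ W - X||_F^2.
    """
    n, _ = X.shape
    k = W.shape[0]
    rng = np.random.default_rng(0)
    H = np.abs(rng.standard_normal((n, k)))  # local cluster assignments
    for _ in range(steps):
        R = H @ W - X            # residual of the factorization
        H = H - lr * (R @ W.T) / n   # gradient step on the local factor
        W = W - lr * (H.T @ R) / n   # gradient step on the shared factor
    return W

def fed_avg_mf(clients, k, rounds=20, seed=0):
    """Server loop: broadcast W, collect locally updated copies, average."""
    rng = np.random.default_rng(seed)
    d = clients[0].shape[1]
    W = rng.standard_normal((k, d))  # shared factor (cluster representatives)
    for _ in range(rounds):
        # full client participation here; the paper also allows partial
        W = np.mean([local_update(X, W.copy()) for X in clients], axis=0)
    return W
```

A gradient-sharing variant would instead have clients send their local gradients of W to the server each round, which applies a single aggregated update; the paper's analysis covers convergence conditions for both schemes.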
