Linear Speedup in Personalized Collaborative Learning
Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other, potentially different users) against its variance (due to the limited amount of data on any single user). Developing training algorithms that optimally balance this trade-off requires extending our theoretical foundations. In this work, we formalize personalized collaborative learning as stochastic optimization of a user's objective f_0(x) given access to N related but different objectives of other users {f_1(x), …, f_N(x)}. We give convergence guarantees for two algorithms in this setting: a popular personalization method known as weighted gradient averaging, and a novel bias correction method. We then explore conditions under which one can optimally trade off their bias for a reduction in variance and achieve linear speedup with respect to the number of users N. Finally, we empirically study their performance, confirming our theoretical insights.
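To make the first of these methods concrete, the following is a minimal sketch (not the paper's exact algorithm) of weighted gradient averaging for personalization: user 0's update mixes a stochastic gradient of its own objective with gradients from the N other users. The quadratic objectives, the weight vector alpha, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 10                       # dimension, number of other users

# Hypothetical user optima: user 0 at the origin, the others perturbed (bias).
optima = [np.zeros(d)] + [0.3 * rng.normal(size=d) for _ in range(N)]

def stoch_grad(x, opt, noise=1.0):
    """Stochastic gradient of the illustrative quadratic f_i(x) = 0.5 * ||x - opt_i||^2."""
    return (x - opt) + noise * rng.normal(size=d)

# Weights: half the mass on the user's own gradient, the rest spread uniformly.
alpha = np.full(N + 1, 0.5 / N)
alpha[0] = 0.5

x, lr = rng.normal(size=d), 0.1
for _ in range(200):
    grads = np.stack([stoch_grad(x, opt) for opt in optima])
    x -= lr * (alpha @ grads)      # weighted gradient averaging step

print("distance to user 0's optimum:", np.linalg.norm(x - optima[0]))
```

Averaging over the other users' gradients reduces the variance of each update at the cost of the bias introduced by their shifted optima; the weights alpha control this trade-off, which is the quantity the paper's analysis balances.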