Hybrid Federated and Centralized Learning
Many machine learning (ML) tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS) and thus entails a huge communication overhead. To overcome this issue, federated learning (FL) has emerged as a promising tool, wherein the clients send only their model updates to the PS instead of the whole dataset. FL thereby moves the learning task to the edge, which in turn demands powerful computational resources from the clients. This requirement cannot be satisfied in all ML applications due to the diversity of edge devices in terms of computation power. In this work, we propose a hybrid federated and centralized learning (HFCL) framework to effectively train a learning model by exploiting the computational capability of the clients. In HFCL, only the clients with sufficient resources employ FL, while the remaining clients resort to CL by transmitting their local dataset to the PS. We also propose a sequential data transmission approach with HFCL (HFCL-SDT), which transmits the datasets sequentially to reduce the training duration. The proposed method is advantageous since all the clients collaborate on the learning process regardless of their computational resources. Via numerical simulations, the proposed HFCL scheme is shown to be superior to FL with a communication overhead between that of FL and CL.
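To make the hybrid scheme concrete, the following is a minimal sketch of one HFCL communication round, assuming a FedAvg-style weighted averaging at the PS and a toy linear-regression model; the function names, the local SGD routine, and the aggregation rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent on a local linear-regression loss."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def hfcl_round(w_global, strong_clients, weak_data):
    """One illustrative HFCL round: computationally capable clients run FL,
    the remaining clients' datasets are pooled at the PS for CL, and the
    resulting models are averaged weighted by sample count (assumption)."""
    models, sizes = [], []
    # FL branch: strong clients train locally and send only model updates.
    for X, y in strong_clients:
        models.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    # CL branch: weak clients have already transmitted their datasets to the PS.
    if weak_data:
        Xc = np.vstack([X for X, _ in weak_data])
        yc = np.concatenate([y for _, y in weak_data])
        models.append(local_sgd(w_global, Xc, yc))
        sizes.append(len(yc))
    sizes = np.array(sizes, dtype=float)
    return sum(s * m for s, m in zip(sizes, models)) / sizes.sum()

# Toy setup: 4 clients, 2 with enough compute for FL, 2 that offload data (CL).
d, w_true = 5, rng.normal(size=5)

def make_client(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

strong = [make_client(200) for _ in range(2)]
weak = [make_client(200) for _ in range(2)]

w = np.zeros(d)
for _ in range(20):  # communication rounds
    w = hfcl_round(w, strong, weak)
print("parameter error:", np.linalg.norm(w - w_true))
```

Under HFCL-SDT, the weak clients would transmit their datasets over several rounds rather than all at once, so the PS can begin training before the full centralized data has arrived; the sketch above omits that scheduling detail.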