Secure Weighted Aggregation in Federated Learning

10/17/2020
by Jiale Guo, et al.

Federated learning (FL) enables multiple clients to jointly solve a machine learning problem: each client trains a local model on its own data, and a central server coordinates the aggregation of these models. To build a practical FL system, we need to consider (i) how to deal with the disparity across clients' datasets, and (ii) how to protect the privacy of clients' locally trained models, which may leak information. The first concern can be addressed with a weighted aggregation scheme in which each client's weight is determined by the size and quality of its data. Approaches in previous works achieve good performance but provide no privacy guarantee. For the second concern, privacy-preserving aggregation schemes offer privacy guarantees that can be analyzed mathematically. However, a security issue remains: both the central server and the clients may send fraudulent messages to each other for their own benefit, especially when an incentive mechanism distributes the server's reward according to clients' weights. To address these issues, we propose a secure weighted aggregation scheme. Specifically, relying on a homomorphic encryption (HE) cryptosystem, each client's weight is computed in a privacy-preserving manner. Furthermore, we adopt a zero-knowledge proof (ZKP) based verification scheme to prevent the central server and the clients from accepting fraudulent messages from each other. To the best of our knowledge, this is the first aggregation scheme to handle both data disparity and fraudulent messages in FL systems from both the privacy and the security perspectives.
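The abstract does not spell out the underlying protocol, but the core building block, additively homomorphic aggregation, can be illustrated with a textbook Paillier cryptosystem. The sketch below is a generic illustration under assumed parameters (toy primes, a fixed quantization scale, and a single key holder who decrypts only the aggregate); it is not the paper's exact scheme.

```python
import math
import random

# Minimal textbook Paillier over toy primes -- a generic sketch of
# additively homomorphic aggregation, NOT the paper's exact protocol.
# Real deployments use moduli of 2048 bits or more.
P, Q = 999_983, 1_000_003            # assumed demo primes
N, N2, G = P * Q, (P * Q) ** 2, P * Q + 1
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)   # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                 # inverse of L(g^lam mod n^2) mod n

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n."""
    return ((pow(c, LAM, N2) - 1) // N) * MU % N

def he_add(c1: int, c2: int) -> int:
    """Ciphertext product decrypts to the plaintext sum."""
    return (c1 * c2) % N2

# Toy weighted aggregation of one scalar model parameter: each client
# quantizes weight_i * param_i to an integer and encrypts it; the server
# combines ciphertexts without learning any individual contribution.
SCALE = 10_000                                     # assumed fixed-point scale
clients = [(0.5, 0.82), (0.3, 0.75), (0.2, 0.91)]  # (weight_i, local param_i)
ciphertexts = [encrypt(round(w * x * SCALE)) for w, x in clients]

agg = ciphertexts[0]
for c in ciphertexts[1:]:
    agg = he_add(agg, c)

print(decrypt(agg) / SCALE)   # 0.817 = 0.5*0.82 + 0.3*0.75 + 0.2*0.91
```

The ZKP-based verification can likewise be illustrated only in spirit, since the abstract does not state the relation being proved. A standard building block is a non-interactive Schnorr proof of knowledge made non-interactive via the Fiat-Shamir heuristic: a party proves it knows a secret consistent with a public commitment without revealing the secret. The group parameters and the meaning of the secret below are assumptions for the demo.

```python
import hashlib
import random

# Toy non-interactive Schnorr proof (Fiat-Shamir heuristic) over a small
# safe-prime group -- illustrative only; the paper's proved relation is
# tied to its HE ciphertexts and is not specified in the abstract.
P, Q, G = 2039, 1019, 4          # assumed demo group: p = 2q + 1, g of order q

x = random.randrange(1, Q)       # prover's secret (e.g., a committed value)
y = pow(G, x, P)                 # public commitment the prover answers for

# Prover: commit to a random nonce, hash for the challenge, respond.
k = random.randrange(1, Q)
t = pow(G, k, P)
c = int.from_bytes(hashlib.sha256(f"{G}|{y}|{t}".encode()).digest(), "big") % Q
s = (k + c * x) % Q

# Verifier: g^s == t * y^c (mod p) holds iff the response is consistent
# with the commitment, and the transcript reveals nothing about x.
assert pow(G, s, P) == t * pow(y, c, P) % P
print("proof verified")
```

In a setting like the paper's, proofs of this kind would accompany protocol messages so that each party can reject fraudulent values before they enter the aggregation.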
