Secure Multilayer Perceptron Based On Homomorphic Encryption
In this work, we propose an outsourced Secure Multilayer Perceptron (SMLP) scheme in which the privacy and confidentiality of both the data and the model are ensured during the training and classification phases. More precisely, this SMLP: i) can be trained by a cloud server on data previously outsourced by a user in homomorphically encrypted form; ii) keeps its parameters homomorphically encrypted, thus giving the cloud no clues about the model; and iii) can classify new encrypted data sent by the user, returning the classification result to the user in encrypted form. The originality of this scheme is threefold. To the best of our knowledge, it is the first multilayer perceptron (MLP) whose training phase is secured over homomorphically encrypted data with no convergence problem. It requires no extra communications between the server and the user. It is based on the Rectified Linear Unit (ReLU) activation function, which we secure without approximation, unlike existing SMLP solutions. To do so, we take advantage of two semi-honest, non-colluding servers. Experimental results carried out on a binary database encrypted with the Paillier cryptosystem demonstrate the overall performance of our scheme and its convergence.
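The scheme relies on the Paillier cryptosystem, whose additive homomorphism lets a server combine ciphertexts (for example, the weighted sums of an MLP layer) without decrypting them. The following Python sketch illustrates that property with toy parameters; the primes, key sizes, and function names (keygen, encrypt, decrypt) are illustrative assumptions, not the paper's implementation.

# Minimal sketch (illustrative only, not the paper's implementation) of the
# additive homomorphism of the Paillier cryptosystem: products of ciphertexts
# decrypt to sums of plaintexts, and ciphertext exponentiation decrypts to a
# scalar multiple. Toy primes only; real keys use moduli of >= 2048 bits.
from math import gcd
import random

def keygen(p=1009, q=1013):                          # small demo primes (assumption)
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lambda = lcm(p-1, q-1)
    g = n + 1                                        # standard simplified generator
    mu = pow(lam, -1, n)                             # lambda^{-1} mod n (Python 3.8+)
    return (n, g), (lam, mu, n)                      # public key, private key

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(2, n)
    while gcd(r, n) != 1:                            # blinding factor must be invertible
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n                # L(x) = (x - 1) / n
    return (L * mu) % n

pub, priv = keygen()
n2 = pub[0] * pub[0]
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
assert decrypt(priv, (c1 * c2) % n2) == 42 + 58      # E(a) * E(b) decrypts to a + b
assert decrypt(priv, pow(c1, 3, n2)) == 3 * 42       # E(a) ^ k  decrypts to k * a

Because this homomorphism only supports additions and scalar multiplications of plaintexts, a nonlinear step such as ReLU cannot be evaluated directly on ciphertexts, which is why the scheme delegates it to two semi-honest, non-colluding servers.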