When Homomorphic Cryptosystem Meets Differential Privacy: Training Machine Learning Classifier with Privacy Protection

12/06/2018
by   Xiangyun Tang, et al.

Machine learning (ML) classifiers are invaluable building blocks used in many fields. A high-quality training dataset collected from multiple data providers is essential for training accurate classifiers, but it raises data-privacy concerns because sensitive information in the training dataset may leak. Existing studies have proposed many solutions for privacy-preserving training of ML classifiers, yet striking a balance among accuracy, computational efficiency, and security remains challenging. In this paper, we propose Heda, an efficient privacy-preserving scheme for training ML classifiers. By combining a homomorphic cryptosystem (HC) with differential privacy (DP), Heda achieves tradeoffs between efficiency and accuracy and enables flexible switching among different tradeoffs through parameter tuning. To make this combination efficient and feasible, we present novel designs based on both HC and DP: a library of building blocks based on partially homomorphic cryptosystems is proposed to construct complex training algorithms without introducing a trusted third party or computational relaxation, and a set of theoretical methods is proposed to determine an appropriate privacy budget and to reduce sensitivity. Security analysis demonstrates that our solution can construct complex ML training algorithms securely. Extensive experimental results show the effectiveness and efficiency of the proposed scheme.
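The abstract's two ingredients can be illustrated with a minimal sketch (not the paper's implementation, which is not shown here): textbook Paillier encryption demonstrates the additive homomorphism that partially homomorphic building blocks rely on, and the Laplace mechanism demonstrates DP noise calibrated by a sensitivity bound and a privacy budget epsilon. Key sizes and parameters below are toy values chosen for readability.

```python
# Hedged sketch: additive homomorphism (textbook Paillier) plus the
# Laplace mechanism of differential privacy. Toy key sizes only;
# real deployments use >= 2048-bit moduli.
import math
import random
import secrets

def paillier_keygen(p=61, q=53):
    # Toy primes for illustration only.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # valid since L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:     # r must be a unit mod n
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

def laplace_noise(sensitivity, epsilon):
    # Sample Laplace(0, b) with scale b = sensitivity / epsilon by
    # inverting the CDF; larger epsilon (budget) means less noise.
    u = random.random() - 0.5
    b = sensitivity / epsilon
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

pk, sk = paillier_keygen()
n_sq = pk[0] ** 2
c = (encrypt(pk, 7) * encrypt(pk, 35)) % n_sq
print(decrypt(pk, sk, c))          # additive homomorphism: 7 + 35 = 42

# A count query with sensitivity 1, perturbed under budget epsilon = 0.5:
noisy_count = 100 + laplace_noise(sensitivity=1.0, epsilon=0.5)
```

Multiplying ciphertexts adds plaintexts, so an untrusted party can aggregate encrypted values without decrypting them; DP noise then bounds what the decrypted aggregate reveals about any single record.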

Related research

09/21/2020  Privacy-Preserving Machine Learning Training in Aggregation Scenarios
To develop Smart City, the growing popularity of Machine Learning (ML) t...

12/26/2022  Packing Privacy Budget Efficiently
Machine learning (ML) models can leak information about users, and diffe...

05/01/2022  A New Dimensionality Reduction Method Based on Hensel's Compression for Privacy Protection in Federated Learning
Differential privacy (DP) is considered a de-facto standard for protecti...

02/21/2022  Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees
Applying machine learning (ML) to sensitive domains requires privacy pro...

06/19/2013  Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
Machine Learning (ML) algorithms are used to train computers to perform ...

10/20/2020  Image Obfuscation for Privacy-Preserving Machine Learning
Privacy becomes a crucial issue when outsourcing the training of machine...

03/20/2021  DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
Recent success of deep neural networks (DNNs) hinges on the availability...
