New Secure Sparse Inner Product with Applications to Machine Learning

10/16/2022
by   Guowen Xu, et al.

Sparse inner product (SIP) has the attractive property that its overhead is dominated by the size of the intersection of the parties' inputs, independent of the nominal input size. This makes it especially promising for scaling machine learning to large datasets, which frequently involve sparse data. In this paper, we investigate privacy-preserving SIP, a problem that has rarely been explored before. Specifically, we propose two concrete constructions: one requires offline linear communication that can be amortized across queries, while the other achieves sublinear overhead but relies on a more computationally expensive tool. Our approach builds on state-of-the-art cryptographic tools, including garbled Bloom filters (GBF) and private information retrieval (PIR), but carefully fuses them to obtain non-trivial overhead reductions. We provide formal security analysis of the proposed constructions and integrate them into representative machine learning algorithms, including k-nearest neighbors, naive Bayes classification, and logistic regression. Compared to existing efforts, our method achieves a 2-50× speedup in runtime and up to a 10× reduction in communication.
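To make the cost property concrete, the following minimal plaintext sketch (not the paper's secure protocol, which uses GBF and PIR) shows a sparse inner product where each party's vector is stored as an index-to-value map of its nonzero entries; the work is proportional to the smaller support size rather than the vector dimension:

```python
def sparse_inner_product(x, y):
    """Inner product of two sparse vectors given as {index: value} maps.

    Cost is O(min(|x|, |y|)) hash lookups -- dominated by the
    intersection of the supports, independent of the nominal dimension.
    """
    # Iterate over the smaller map and probe the larger one.
    if len(x) > len(y):
        x, y = y, x
    return sum(v * y[i] for i, v in x.items() if i in y)

# Example: vectors of nominal dimension 10**6 with a handful of nonzeros.
x = {3: 2.0, 17: 1.5, 999_999: 4.0}
y = {17: 2.0, 42: 7.0, 999_999: 0.5}
print(sparse_inner_product(x, y))  # 1.5*2.0 + 4.0*0.5 = 5.0
```

The privacy-preserving constructions in the paper compute the same quantity without revealing either party's support set or values; this plaintext version only illustrates why overhead can scale with the intersection size.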


