Achieving Differential Privacy with Matrix Masking in Big Data
Differential privacy schemes have been widely adopted in recent years to protect data privacy. We propose a new scheme that combines the Gaussian mechanism with another data protection technique, random orthogonal matrix masking, to achieve (ε, δ)-differential privacy (DP) more efficiently. We prove that the additional matrix masking significantly reduces the noise variance required by the Gaussian mechanism to achieve (ε, δ)-DP in the big data setting. Specifically, when ε → 0, δ → 0, and the sample size n exceeds the number of attributes p by n/p = O(ln(1/δ)), the additive noise variance required to achieve (ε, δ)-DP is reduced from O(ln(1/δ)/ε^2) to O(1/ε). With much less noise added, the resulting differentially private pseudo data sets allow much more accurate inferences and thus significantly broaden the scope of application for differential privacy.
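To make the masking-plus-noise idea concrete, below is a minimal sketch in Python of the two ingredients named in the abstract: left-multiplying the n × p data matrix by a Haar-distributed random orthogonal matrix and then adding i.i.d. Gaussian noise. The function names and the noise level `sigma` are illustrative placeholders; calibrating the variance to attain a target (ε, δ) is the subject of the paper and is not reproduced here.

```python
import numpy as np

def random_orthogonal(n, rng):
    # Haar-distributed random orthogonal matrix via QR decomposition
    # of a standard Gaussian matrix, with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def mask_and_perturb(X, sigma, rng):
    # X: n x p data matrix (n records, p attributes).
    # Left-multiply by a random orthogonal mask, then add i.i.d. Gaussian noise.
    n, p = X.shape
    Q = random_orthogonal(n, rng)
    return Q @ X + sigma * rng.standard_normal((n, p))

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))   # toy data: n = 1000, p = 5
sigma = 0.5                          # placeholder noise scale; see the paper for (ε, δ) calibration
X_private = mask_and_perturb(X, sigma, rng)
```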