Quantifying the Tradeoff Between Cybersecurity and Location Privacy
Past data breaches in the mobility sector, such as Uber's 2016 data leak, have led to privacy concerns over the confidentiality and potential abuse of customer data. To protect customer privacy, location-based service (LBS) providers may be motivated to adopt privacy-preservation mechanisms, such as obfuscating data from vehicles or mobile devices through a trusted data server. However, efforts to protect privacy can conflict with efforts to detect malicious behavior or misbehavior by drivers: accurate data about vehicle locations and trajectories is crucial for determining whether a vehicle trip has been fabricated by an adversary, especially when the LBS uses machine learning methods for this purpose. This paper tackles the dilemma by evaluating the tradeoff between location privacy and security. Specifically, vehicle trips are obfuscated with 2D Laplace noise that satisfies differential privacy. The obfuscated trips are then fed into a benchmark Recurrent Neural Network (RNN) widely used for detecting anomalous trips, allowing us to investigate the influence of the privacy-preservation technique on model performance. The experimental results suggest that applying the Laplace mechanism to achieve a high level of differential privacy for location-based vehicle trips leads to a low true-positive (equivalently, high false-negative) rate in the RNN, reflected in area-under-the-curve scores below 0.7. This diminishes the value of the RNN, as more anomalous trips are classified as normal ones.
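As a rough illustration of the obfuscation step described above, the following sketch samples planar (2D) Laplace noise and adds it to a location. This is a minimal, hypothetical implementation, not the paper's actual code: it assumes coordinates are already projected into planar units (e.g., meters), and it uses the standard fact that the radius of planar Laplace noise with privacy parameter ε follows a Gamma(2, 1/ε) distribution, with the angle drawn uniformly.

```python
import math
import random


def planar_laplace_noise(eps):
    """Sample a 2D noise vector whose density is proportional to exp(-eps * r).

    The angle is uniform on [0, 2*pi); the radius has pdf
    eps^2 * r * exp(-eps * r), i.e. Gamma(shape=2, scale=1/eps).
    """
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2, 1.0 / eps)  # mean radius = 2 / eps
    return r * math.cos(theta), r * math.sin(theta)


def obfuscate_point(x, y, eps):
    """Obfuscate a single planar location (x, y) with privacy parameter eps."""
    dx, dy = planar_laplace_noise(eps)
    return x + dx, y + dy


def obfuscate_trip(trip, eps):
    """Obfuscate every point of a trip, given as a list of (x, y) tuples."""
    return [obfuscate_point(x, y, eps) for x, y in trip]
```

A smaller ε means stronger privacy but a larger expected displacement (2/ε), which is precisely the knob that degrades the downstream anomaly detector's accuracy in the tradeoff studied here.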