Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions

by Chulin Xie, et al.
University of Illinois at Urbana-Champaign
Tsinghua University
Carnegie Mellon University

Recent advances in machine learning (ML) have enabled its wide application across domains, and one of the most exciting applications is autonomous vehicles (AVs), which have driven the development of ML algorithms spanning perception, prediction, and planning. However, training AVs usually requires large amounts of data collected from diverse driving environments (e.g., cities) as well as different types of personal information (e.g., working hours and routes). Such large collected datasets, treated as the new oil of the data-centric AI era, usually contain substantial privacy-sensitive information that is hard to remove or even audit. Although existing privacy protection approaches have achieved certain theoretical and empirical success, a gap remains when applying them to real-world applications such as autonomous vehicles. For instance, in AV training, privacy-sensitive information can be revealed not only by individually identifiable information but also by population-level information, such as road construction within a city, and by proprietary-level commercial secrets of AVs. Thus, it is critical to revisit the frontier of privacy risks and corresponding protection approaches in AVs to bridge this gap. Toward this goal, in this work, we provide a new taxonomy of privacy risks and protection methods in AVs, categorizing privacy in AVs into three levels: individual, population, and proprietary. We explicitly list recent challenges in protecting each of these levels of privacy, summarize existing solutions to these challenges, discuss the lessons and conclusions, and provide potential future directions and opportunities for both researchers and practitioners. We believe this work will help shape privacy research in AVs and guide the design of privacy protection technologies.

