Robust Trajectory Prediction against Adversarial Attacks

07/29/2022
by Yulong Cao, et al.

Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. We demonstrate that our method is able to improve performance by 46% on adversarial data, at the cost of only a 3% performance degradation, compared to the model trained with clean data. Additionally, compared to existing robust methods, our method can improve performance by 21% on adversarial data and by 9% on clean data. Our robust model is evaluated with a planner to study its downstream impacts. We demonstrate that our model can significantly reduce severe accident rates (e.g., collisions and off-road driving).
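To illustrate the first ingredient, adversarial training, the sketch below shows a generic PGD-style training loop on a toy linear trajectory predictor. This is not the paper's actual method or model; all names (`Hh`, `Hf`, `EPS`, `pgd_attack`, etc.) and the toy straight-line data are illustrative assumptions. The idea it demonstrates is the standard one: perturb the observed trajectory history within an L-infinity bound to maximize prediction error, then train on a mix of clean and perturbed histories.

```python
import numpy as np

# Hedged sketch only (not the authors' implementation): PGD-style
# adversarial training for a toy linear trajectory predictor.
rng = np.random.default_rng(0)
Hh, Hf = 8, 4                         # observed / predicted horizon (steps)
EPS, ALPHA, PGD_STEPS = 0.1, 0.03, 5  # L_inf bound, PGD step size, iters
LR, EPOCHS = 0.05, 300

def loss_and_grads(W, x, y):
    """MSE loss with gradients w.r.t. weights W and input x."""
    err = x @ W - y                    # (n, Hf) prediction error
    loss = np.mean(err ** 2)
    gW = 2 * x.T @ err / err.size      # gradient for the training update
    gx = 2 * err @ W.T / err.size      # gradient for the attack
    return loss, gW, gx

def pgd_attack(W, x, y):
    """L_inf-bounded perturbation of the observed history (PGD ascent)."""
    delta = np.zeros_like(x)
    for _ in range(PGD_STEPS):
        _, _, gx = loss_and_grads(W, x + delta, y)
        delta = np.clip(delta + ALPHA * np.sign(gx), -EPS, EPS)
    return x + delta

# Toy data: noisy straight-line pasts; the future continues each line.
n = 64
slopes = rng.uniform(-1, 1, (n, 1))
X = slopes * np.arange(Hh) + rng.normal(0, 0.01, (n, Hh))
Y = slopes * np.arange(Hh, Hh + Hf)

W = rng.normal(0, 0.1, (Hh, Hf))
for _ in range(EPOCHS):
    X_adv = pgd_attack(W, X, Y)        # adversarial examples for this step
    # Train on clean + adversarial histories together.
    _, gW, _ = loss_and_grads(W, np.vstack([X, X_adv]), np.vstack([Y, Y]))
    W -= LR * gW

clean_loss, _, _ = loss_and_grads(W, X, Y)
adv_loss, _, _ = loss_and_grads(W, pgd_attack(W, X, Y), Y)
print(f"clean MSE {clean_loss:.4f}  adversarial MSE {adv_loss:.4f}")
```

The second ingredient the abstract names, domain-specific data augmentation, would replace or supplement `X_adv` here with physically plausible trajectory variations; the paper's point is that this mitigates the clean-data degradation that adversarial training alone causes.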
