Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction

11/16/2022
by Masha Itkina, et al.

Although neural networks have seen tremendous success as predictive models in a variety of domains, they can be overly confident in their predictions on out-of-distribution (OOD) data. To be viable for safety-critical applications, like autonomous vehicles, neural networks must accurately estimate their epistemic or model uncertainty, achieving a level of system self-awareness. Techniques for epistemic uncertainty quantification often require OOD data during training or multiple neural network forward passes during inference. These approaches may not be suitable for real-time performance on high-dimensional inputs. Furthermore, existing methods lack interpretability of the estimated uncertainty, which limits their usefulness both to engineers for further system development and to downstream modules in the autonomy stack. We propose the use of evidential deep learning to estimate the epistemic uncertainty over a low-dimensional, interpretable latent space in a trajectory prediction setting. We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among the semantic concepts: past agent behavior, road structure, and social context. We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines. Our code is available at: https://github.com/sisl/InterpretableSelfAwarePrediction.
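The core idea — evidential deep learning — replaces a softmax output with the parameters of a Dirichlet distribution, so a single forward pass yields both class probabilities and an epistemic uncertainty estimate. The sketch below illustrates the standard Dirichlet parameterisation (evidence `e ≥ 0`, concentration `α = e + 1`, uncertainty mass `K / S`); it is a minimal illustration of the general technique, not the paper's trajectory-prediction architecture, and the function and variable names are our own.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Evidential classification head: map raw network outputs to
    Dirichlet concentrations and an epistemic uncertainty score.
    Illustrative sketch only (names/shapes are assumptions)."""
    evidence = np.log1p(np.exp(logits))   # softplus keeps evidence >= 0
    alpha = evidence + 1.0                # Dirichlet concentration parameters
    S = alpha.sum()                       # total evidence strength
    probs = alpha / S                     # expected class probabilities
    uncertainty = len(alpha) / S          # K / S: high when evidence is scarce
    return probs, uncertainty

# Scarce evidence everywhere (OOD-like input) -> high uncertainty
p_ood, u_ood = evidential_uncertainty(np.array([-2.0, -2.0, -2.0]))
# Strong evidence for one class (in-distribution) -> low uncertainty
p_id, u_id = evidential_uncertainty(np.array([8.0, -2.0, -2.0]))
assert u_ood > u_id
```

Because uncertainty comes from the same forward pass as the prediction, this avoids the ensembles or multiple sampling passes the abstract flags as unsuitable for real-time, high-dimensional inputs.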

