On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-sensing and Vision-based Analysis, Evaluations and Insights
Automobiles on our roadways increasingly utilize advanced driver assistance systems. These changes require us to develop novel perception systems, not only to accurately understand the situation and context of these vehicles, but also to assess the driver's awareness in differentiating between safe and critical situations. The research presented in this paper focuses on this specific problem. Even after the development of wearable and compact multi-modal bio-sensing systems in recent years, their application in the driver-awareness context has scarcely been explored. The capability of simultaneously recording different kinds of bio-sensing data, in addition to traditionally used computer-vision systems, provides exciting opportunities to explore the limitations of these modalities. In this work, we explore the applications of three bio-sensing modalities, namely electroencephalogram (EEG), photoplethysmogram (PPG), and galvanic skin response (GSR), along with a camera-based vision system in the driver-awareness context. We assess the information from these sensors independently and jointly using both signal-processing and deep-learning tools. We show that our methods outperform previously reported studies in monitoring driver awareness and detecting hazardous/non-hazardous situations on short time scales of two seconds. We validate our methods with data collected from twelve subjects on two real-world driving datasets: one publicly available (the KITTI dataset) and one we collected ourselves with the vehicle driven in autonomous mode (the LISA dataset). This work presents an exhaustive evaluation of multiple sensor modalities on two different datasets for attention monitoring and hazardous-event classification.