You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing

12/24/2020
by Abdul Rafey Aftab, et al.

Sophisticated user interaction in the automotive industry is a fast-emerging topic. Mid-air gestures and speech already have numerous applications in driver-car interaction, and multimodal approaches are being developed to leverage multiple sensors for added robustness. In this paper, we propose a fast and practical machine-learning-based multimodal fusion method for selecting various control modules in an automotive vehicle. The modalities taken into account are gaze, head pose and finger pointing gesture; speech is used only as a trigger for fusion. While single modalities have often been used to recognize the user's pointing direction, we demonstrate how fusing multiple inputs enhances recognition performance. Furthermore, we compare different deep neural network architectures against conventional machine learning methods, namely Support Vector Regression and Random Forests, and show that deep learning improves pointing-direction accuracy. The results suggest great potential for multimodal inputs that can be applied to further use cases in the vehicle.
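The paper's fusion method itself is learned from data, but the core idea of combining per-modality direction estimates can be illustrated with a minimal sketch. The snippet below is not the authors' method: it assumes (hypothetically) that each modality already yields a 3D pointing-direction vector plus a confidence weight, and fuses them by a simple confidence-weighted average of unit vectors.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def fuse_directions(modalities):
    """Confidence-weighted average of unit direction vectors.

    modalities: list of (direction, weight) pairs, e.g.
    [(gaze_vec, w_gaze), (head_vec, w_head), (finger_vec, w_finger)].
    Weights are illustrative; in the paper the combination is learned.
    """
    total = sum(w for _, w in modalities)
    fused = [0.0, 0.0, 0.0]
    for vec, w in modalities:
        u = normalize(vec)
        for i in range(3):
            fused[i] += w * u[i] / total
    return normalize(fused)

# Hypothetical example: gaze and finger pointing roughly agree,
# head pose is noisier and gets a lower weight.
gaze = (0.1, 0.0, 1.0)
head = (0.3, 0.1, 0.9)
finger = (0.0, 0.0, 1.0)
direction = fuse_directions([(gaze, 0.5), (head, 0.2), (finger, 0.3)])
```

In the paper, a regressor (SVR, Random Forest, or a deep network) replaces the fixed weights, learning how much to trust each modality from data.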


Related research

- Multimodal Fusion Using Deep Learning Applied to Driver's Referencing of Outside-Vehicle Objects (07/26/2021). "There is a growing interest in more intelligent natural user interaction..."
- Multimodal Driver Referencing: A Comparison of Pointing to Objects Inside and Outside the Vehicle (02/15/2022). "Advanced in-cabin sensing technologies, especially vision based approach..."
- ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle (11/03/2021). "Over the past decades, the addition of hundreds of sensors to modern veh..."
- Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces (11/07/2022). "With the recently increasing capabilities of modern vehicles, novel appr..."
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle (09/23/2020). "Hand pointing and eye gaze have been extensively investigated in automot..."
- A Comparative Analysis of Decision-Level Fusion for Multimodal Driver Behaviour Understanding (04/10/2022). "Visual recognition inside the vehicle cabin leads to safer driving and m..."
- A Brief Survey on Interactive Automotive UI (05/30/2021). "Automotive User Interface (AutoUI) is relatively a new discipline in the..."
