Uncertainty-aware Multi-modal Learning via Cross-modal Random Network Prediction

07/22/2022
by   Hu Wang, et al.

Multi-modal learning focuses on training models by equally combining multiple input data modalities during the prediction process. However, this equal combination can be detrimental to prediction accuracy because different modalities usually carry varying levels of uncertainty. Using such uncertainty to combine modalities has been studied by a few approaches, but with limited success because these approaches are either designed for specific classification or segmentation problems and cannot easily be translated to other tasks, or suffer from numerical instabilities. In this paper, we propose a new Uncertainty-aware Multi-modal Learner that estimates uncertainty by measuring feature density via Cross-modal Random Network Prediction (CRNP). CRNP is designed to require little adaptation to translate between different prediction tasks, while having a stable training process. From a technical point of view, CRNP is the first approach to explore random network prediction to estimate uncertainty and to combine multi-modal data. Experiments on two 3D multi-modal medical image segmentation tasks and three 2D multi-modal computer vision classification tasks show the effectiveness, adaptability and robustness of CRNP. We also provide an extensive discussion of different fusion functions and visualizations to validate the proposed model.
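The core mechanism the abstract describes, estimating uncertainty as the prediction error against a frozen, randomly initialised network (so the error acts as a proxy for feature density), can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network shapes, the single modality, and the closed-form linear predictor are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "target" network: a randomly initialised one-hidden-layer net
# whose weights are never trained (the random network being predicted).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def target(feats):
    return np.tanh(feats @ W1) @ W2

# Predictor for one modality's features. For brevity we fit a linear map
# in closed form; a real predictor would be a small trained network.
X_train = rng.normal(size=(512, 8))          # in-distribution features
Wp, *_ = np.linalg.lstsq(X_train, target(X_train), rcond=None)

def uncertainty(feats):
    """Prediction error against the frozen random target: low where the
    predictor has seen data (high feature density), high elsewhere."""
    return np.mean((feats @ Wp - target(feats)) ** 2, axis=-1)

u_in = uncertainty(X_train).mean()                              # familiar features
u_out = uncertainty(rng.normal(loc=3.0, size=(512, 8))).mean()  # shifted features
# u_in stays small where the predictor was fitted; u_out grows off-distribution
```

Per-modality uncertainties obtained this way could then weight the fusion of modalities (e.g. a softmax over negative uncertainties); the paper discusses the actual fusion functions in detail.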

Related research

07/26/2023 · Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling
The missing modality issue is critical but non-trivial to be solved by m...

03/24/2023 · Evidence-aware multi-modal data fusion and its application to total knee replacement prediction
Deep neural networks have been widely studied for predicting a medical c...

04/21/2021 · Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
Reliability of machine learning (ML) systems is crucial in safety-critic...

12/13/2021 · AMSER: Adaptive Multi-modal Sensing for Energy Efficient and Resilient eHealth Systems
eHealth systems deliver critical digital healthcare and wellness service...

04/09/2018 · HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation
Recently, dense connections have attracted substantial attention in comp...

07/23/2021 · Multi-Modal Pedestrian Detection with Large Misalignment Based on Modal-Wise Regression and Multi-Modal IoU
The combined use of multiple modalities enables accurate pedestrian dete...

04/30/2019 · Cross-Modal Message Passing for Two-stream Fusion
Processing and fusing information among multi-modal is a very useful tec...
