Multimodal In-bed Pose and Shape Estimation under the Blankets

12/12/2020
by Yu Yin, et al.

Humans spend a vast amount of time in bed, about one-third of their lives on average, and sensing the human body at rest is vital in many healthcare applications. Since resting people are typically covered by a blanket, we propose a multimodal approach to virtually uncover the subjects so that their bodies at rest can be estimated without the occlusion of the blanket above. We propose a pyramid scheme that fuses the different modalities in a way that best leverages the knowledge captured by the multimodal sensors. Specifically, the two most informative modalities (i.e., depth and infrared images) are fused first to produce a good initial pose and shape estimate. The pressure map and RGB images are then fused in turn to refine this estimate: the pressure map provides occlusion-invariant information for the covered parts of the body, while the RGB image provides accurate shape information for the uncovered parts. Even with multimodal data, however, detecting human bodies at rest remains very challenging because of the extreme occlusion of the body. To further reduce the negative effects of the blanket occlusion, we employ an attention-based reconstruction module to generate uncovered versions of the modalities, which are fused back in to update the current estimate in a cyclic fashion. Extensive experiments validate the superiority of the proposed model over existing approaches.
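This listing stops at the abstract and gives no reference implementation, so the following PyTorch sketch is only our own minimal, hypothetical reading of the pyramid fusion order it describes (depth + infrared first, then pressure, then RGB). Every name here (PyramidFusion, conv_encoder, feat_dim, the residual refinement, and the 82-dimensional output, matching SMPL's 72 pose + 10 shape parameters) is an assumption, not the authors' architecture; the attention-based reconstruction module and cyclic update are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_encoder(in_ch: int, feat_dim: int) -> nn.Sequential:
    # Tiny per-modality CNN encoder (hypothetical; the paper's backbone is not given).
    return nn.Sequential(
        nn.Conv2d(in_ch, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
    )


class PyramidFusion(nn.Module):
    """Sketch of the fusion order in the abstract: depth + IR first,
    then the pressure map, then RGB, each stage refining the estimate."""

    def __init__(self, feat_dim: int = 64, n_params: int = 82):
        super().__init__()
        self.enc_depth = conv_encoder(1, feat_dim)
        self.enc_ir = conv_encoder(1, feat_dim)
        self.enc_pressure = conv_encoder(1, feat_dim)
        self.enc_rgb = conv_encoder(3, feat_dim)
        # 1x1 convs that merge one extra modality into the running feature map.
        self.fuse_pressure = nn.Conv2d(2 * feat_dim, feat_dim, 1)
        self.fuse_rgb = nn.Conv2d(2 * feat_dim, feat_dim, 1)
        # Shared regression head for the pose/shape vector
        # (82 = 72 SMPL pose + 10 shape parameters; our assumption).
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, n_params)
        )

    def forward(self, depth, ir, pressure, rgb):
        # Stage 1: fuse the two most informative modalities for an initial estimate.
        feat = self.enc_depth(depth) + self.enc_ir(ir)
        theta = self.head(feat)
        # Stage 2: the pressure map adds occlusion-invariant cues for covered parts.
        p = F.interpolate(self.enc_pressure(pressure), size=feat.shape[-2:])
        feat = F.relu(self.fuse_pressure(torch.cat([feat, p], dim=1)))
        theta = theta + self.head(feat)  # residual update; our reading of "refine"
        # Stage 3: RGB adds accurate shape cues for the uncovered parts.
        r = F.interpolate(self.enc_rgb(rgb), size=feat.shape[-2:])
        feat = F.relu(self.fuse_rgb(torch.cat([feat, r], dim=1)))
        theta = theta + self.head(feat)
        return theta


if __name__ == "__main__":
    model = PyramidFusion()
    depth = torch.randn(2, 1, 128, 128)   # depth image
    ir = torch.randn(2, 1, 128, 128)      # infrared image
    pressure = torch.randn(2, 1, 64, 32)  # pressure map (coarser grid)
    rgb = torch.randn(2, 3, 128, 128)     # RGB image
    print(model(depth, ir, pressure, rgb).shape)  # torch.Size([2, 82])
```

The cyclic step from the abstract would wrap this forward pass in a loop, feeding the reconstructed "uncovered" modalities back through the same fusion stack; since the abstract gives no architectural detail for that module, it is left out of the sketch.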


Related research

08/20/2020  Simultaneously-Collected Multimodal Lying Pose Dataset: Towards In-Bed Human Pose Monitoring under Adverse Vision Conditions
Computer vision (CV) has achieved great success in interpreting semantic...

04/02/2020  Bodies at Rest: 3D Human Pose and Shape Estimation from a Pressure Image using Synthetic Data
People spend a substantial part of their lives at rest in bed. 3D human ...

11/18/2018  Multimodal Densenet
Humans make accurate decisions by interpreting complex data from multipl...

02/28/2020  Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields
Current methods for depth map prediction from monocular images tend to p...

04/05/2023  Explaining Multimodal Data Fusion: Occlusion Analysis for Wilderness Mapping
Jointly harnessing complementary features of multi-modal input data in a...

03/18/2023  Just Noticeable Visual Redundancy Forecasting: A Deep Multimodal-driven Approach
Just noticeable difference (JND) refers to the maximum visual change tha...
