Adaptively Multi-view and Temporal Fusing Transformer for 3D Human Pose Estimation

10/11/2021
by Hui Shuai, et al.

In practical applications, 3D Human Pose Estimation (HPE) must cope with several variable factors: the number of camera views, the length of the video sequence, and whether camera calibration is available. To this end, we propose a unified framework named Multi-view and Temporal Fusing Transformer (MTF-Transformer) that adaptively handles a varying number of views and arbitrary video lengths without calibration. MTF-Transformer consists of a Feature Extractor, a Multi-view Fusing Transformer (MFT), and a Temporal Fusing Transformer (TFT). The Feature Extractor estimates the 2D pose in each image and encodes the predicted coordinates and confidences into a feature embedding for subsequent 3D pose inference. By discarding image features and focusing on lifting the 2D pose to 3D, it keeps the downstream modules computationally lightweight enough to process video. MFT fuses the features of a varying number of views with a relative-attention block, adaptively measuring the implicit relationship between each pair of views and reconstructing the features accordingly. TFT aggregates the features of the whole sequence and predicts the 3D pose via a transformer; it adapts to the length of the video and takes full advantage of temporal information. With these modules, MTF-Transformer handles application scenarios ranging from a single monocular image to multi-view video, without requiring camera calibration. We report quantitative and qualitative results on Human3.6M, TotalCapture, and KTH Multiview Football II. Experiments show that, compared with state-of-the-art methods that use camera parameters, MTF-Transformer not only obtains comparable results but also generalizes well to dynamic capture with an arbitrary number of unseen views. Code is available at https://github.com/lelexx/MTF-Transformer.
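To make the pipeline concrete, here is a minimal NumPy sketch of the lifting-style architecture the abstract describes: 2D joint coordinates and confidences are embedded into features, a self-attention step fuses features across a variable number of views, and a second attention step fuses across frames before a linear head predicts the 3D pose. All weights here are random and all names (`attention_fuse`, `estimate_3d`, etc.) are hypothetical illustrations, not the authors' implementation; the real MTF-Transformer uses learned relative-attention blocks and a full transformer rather than this single-layer toy.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(feats):
    # feats: (n, d), one feature vector per view (or per frame).
    # Scaled dot-product self-attention works for any n, which is why an
    # attention-based design can handle a varying number of views or frames.
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)      # (n, n) pairwise affinities
    return softmax(scores, axis=-1) @ feats    # (n, d) fused features

rng = np.random.default_rng(0)
J, d = 17, 32                                  # joints, feature width
W_embed = rng.normal(size=(J * 3, d))          # (x, y, conf) per joint -> feature
W_out = rng.normal(size=(d, J * 3))            # fused feature -> 3D pose (J x 3)

def estimate_3d(pose2d_views):
    # pose2d_views: (n_views, n_frames, J, 3) holding (x, y, confidence).
    n_views, n_frames = pose2d_views.shape[:2]
    feats = pose2d_views.reshape(n_views, n_frames, -1) @ W_embed
    # Multi-view fusion, independently per frame (MFT-like step).
    fused = np.stack([attention_fuse(feats[:, t]) for t in range(n_frames)], axis=1)
    # Temporal fusion per view (TFT-like step).
    fused = np.stack([attention_fuse(fused[v]) for v in range(n_views)], axis=0)
    # Predict the 3D pose of the center frame, averaging over views.
    center = fused[:, n_frames // 2].mean(axis=0)
    return (center @ W_out).reshape(J, 3)

pose3d = estimate_3d(rng.normal(size=(4, 9, J, 3)))   # 4 views, 9 frames
print(pose3d.shape)                                   # (17, 3)
```

Because both fusion steps are plain self-attention over a set, the same code runs unchanged for one view or many, and for any sequence length, which mirrors the calibration-free, variable-input property the paper claims.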


Related research

- Probabilistic Triangulation for Uncalibrated Multi-View 3D Human Pose Estimation (09/09/2023): 3D human pose estimation has been a long-standing challenge in computer ...
- VTP: Volumetric Transformer for Multi-view Multi-person 3D Pose Estimation (05/25/2022): This paper presents Volumetric Transformer Pose estimator (VTP), the fir...
- EgoHumans: An Egocentric 3D Multi-Human Benchmark (05/25/2023): We present EgoHumans, a new multi-view multi-human video benchmark to ad...
- Video based Object 6D Pose Estimation using Transformers (10/24/2022): We introduce a Transformer based 6D Object Pose Estimation framework Vid...
- Deep Reinforcement Learning for Active Human Pose Estimation (01/07/2020): Most 3d human pose estimation methods assume that input – be it images o...
- GraFormer: Graph Convolution Transformer for 3D Pose Estimation (09/17/2021): Exploiting relations among 2D joints plays a crucial role yet remains se...
- Monocular 3D Human Pose Estimation for Sports Broadcasts using Partial Sports Field Registration (04/10/2023): The filming of sporting events projects and flattens the movement of ath...
