Self-supervised 3D Semantic Representation Learning for Vision-and-Language Navigation

01/26/2022
by Sinan Tan, et al.

In the Vision-and-Language Navigation task, an embodied agent follows linguistic instructions and navigates to a specific goal. The task is important in many practical scenarios and has attracted extensive attention from both the computer vision and robotics communities. However, most existing works use only RGB images and neglect the 3D semantic information of the scene. To this end, we develop a novel self-supervised training framework that encodes the voxel-level 3D semantic reconstruction into a 3D semantic representation. Specifically, a region query task is designed as the pretext task, which predicts the presence or absence of objects of a particular class in a specific 3D region. We then construct an LSTM-based navigation model and train it with the proposed 3D semantic representations and BERT language features on vision-language pairs. Experiments show that the proposed approach achieves success rates of 68% and 66%, respectively, which are superior to most RGB-based methods utilizing vision-language transformers.
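To make the region query pretext task concrete, the sketch below shows one plausible way to frame it as binary classification: given a scene-level 3D semantic feature, a queried object class, and a queried 3D region, predict whether that class is present in the region, with labels derived from the voxel-level semantic reconstruction itself. All module names, tensor shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the region-query pretext task (presence/absence
# prediction for a queried object class inside a queried 3D region).
# Shapes and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class RegionQueryHead(nn.Module):
    def __init__(self, feat_dim=256, num_classes=40, region_dim=6):
        super().__init__()
        # Embed the queried class and the queried 3D region (here assumed to be
        # an axis-aligned box given by its min/max corners -> 6 scalars).
        self.class_emb = nn.Embedding(num_classes, feat_dim)
        self.region_mlp = nn.Sequential(
            nn.Linear(region_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # Fuse the 3D semantic representation with the query and output a logit.
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * 3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 1)
        )

    def forward(self, scene_feat, class_id, region_box):
        # scene_feat:  (B, feat_dim) 3D semantic representation of the scene
        # class_id:    (B,) integer index of the queried object class
        # region_box:  (B, 6) coordinates describing the queried 3D region
        q = torch.cat(
            [scene_feat, self.class_emb(class_id), self.region_mlp(region_box)], dim=-1
        )
        return self.classifier(q).squeeze(-1)  # (B,) presence logit

# Self-supervised training step: the binary labels come from the voxel-level
# semantic reconstruction, so no manual annotation is required.
head = RegionQueryHead()
scene_feat = torch.randn(8, 256)
class_id = torch.randint(0, 40, (8,))
region_box = torch.rand(8, 6)
labels = torch.randint(0, 2, (8,)).float()  # 1 if the class appears in the region
logits = head(scene_feat, class_id, region_box)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()

In such a setup, the encoder producing scene_feat is what would be reused downstream as the 3D semantic representation fed to the navigation model, while the query head is discarded after pretraining.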
