EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual and Language Learning

09/29/2022
by   Yanmin Wu, et al.

3D visual grounding aims to locate the objects in point clouds that are mentioned by free-form natural language descriptions with rich semantic components. However, existing methods either extract sentence-level features that couple all words together, or focus mainly on object names, thereby losing word-level information or neglecting attributes other than the name. To alleviate this issue, we present EDA, which Explicitly Decouples the textual attributes in a sentence and conducts Dense Alignment between such fine-grained language and point cloud objects. Specifically, we first propose a text decoupling module that produces textual features for every semantic component. We then design two losses to supervise the dense matching between the two modalities: textual position alignment and object semantic alignment. On top of that, we introduce two new visual grounding tasks, locating objects without object names and locating auxiliary objects referenced in the descriptions, both of which thoroughly evaluate a model's dense alignment capacity. Through experiments, we achieve state-of-the-art performance on two widely adopted visual grounding datasets, ScanRefer and SR3D/NR3D, and obtain a decisive lead on our two newly proposed tasks. The code will be available at https://github.com/yanmin-wu/EDA.
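To make the dense-alignment idea concrete, below is a minimal, hypothetical sketch of component-to-object alignment, not the authors' implementation: the function name, feature shapes, temperature value, and the cross-entropy formulation are all assumptions for illustration. It shows the core pattern the abstract describes, matching each decoupled textual component (e.g., object name, attribute, relation) against candidate object features instead of a single coupled sentence feature.

```python
# Hypothetical sketch of dense text-object alignment (assumed API, not EDA's code).
import torch
import torch.nn.functional as F

def dense_alignment_loss(text_feats, obj_feats, match_labels, temperature=0.07):
    """
    text_feats:   (C, D) features, one per decoupled textual component
                  (e.g., object name, attribute, relation).
    obj_feats:    (K, D) features for K candidate point-cloud objects.
    match_labels: (C,) index of the ground-truth object for each component.
    Returns a cross-entropy loss over component-to-object similarities.
    """
    text_feats = F.normalize(text_feats, dim=-1)
    obj_feats = F.normalize(obj_feats, dim=-1)
    # (C, K) cosine similarities between every component and every object.
    logits = text_feats @ obj_feats.t() / temperature
    return F.cross_entropy(logits, match_labels)

# Toy usage: 3 textual components, 8 candidate objects, 256-d features.
loss = dense_alignment_loss(torch.randn(3, 256), torch.randn(8, 256),
                            torch.tensor([2, 2, 5]))
```

Supervising every component independently, rather than only the sentence or the object name, is what lets the model still localize objects when the name is absent from the description, which is exactly what the two newly proposed tasks test.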
