Cross-modal Cross-domain Learning for Unsupervised LiDAR Semantic Segmentation

08/05/2023
by   Yiyang Chen, et al.

In recent years, cross-modal domain adaptation has been studied on paired 2D image and 3D LiDAR data to reduce the labeling cost of 3D LiDAR semantic segmentation (3DLSS) in the target domain. However, such a setting still requires paired 2D and 3D data in the source domain, which must be collected with additional effort. Since 2D-3D projections enable the 3D model to learn semantic information from its 2D counterpart, we ask whether the need for source 3D data can be removed entirely, relying only on source 2D images. To answer this question, this paper studies a new 3DLSS setting in which a 2D dataset with semantic annotations (source) and paired but unannotated 2D image and 3D LiDAR data (target) are available. To achieve 3DLSS in this scenario, we propose Cross-Modal and Cross-Domain Learning (CoMoDaL). Specifically, CoMoDaL models 1) inter-modal cross-domain distillation between the unpaired source 2D images and target 3D LiDAR data, and 2) intra-domain cross-modal guidance between the target 2D image and 3D LiDAR data pair. In CoMoDaL, we apply several constraints, such as point-to-pixel and prototype-to-pixel alignments, to associate the semantics across modalities and domains by constructing mixed samples in the two modalities. Experimental results on several datasets show that, in the proposed setting, CoMoDaL achieves segmentation without any supervision from labeled LiDAR data. Ablations are also conducted to provide further analysis. Code will be made publicly available.
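The point-to-pixel alignment mentioned above rests on projecting each LiDAR point into the camera image and matching the 3D network's per-point prediction to the 2D network's prediction at the corresponding pixel. Below is a minimal NumPy sketch of that idea under common assumptions (a pinhole camera with intrinsic matrix K, points already in the camera frame, and a KL-divergence distillation term); the function names and the exact loss form are illustrative, not the paper's published implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def project_points(points_xyz, K):
    """Project Nx3 camera-frame LiDAR points onto the image plane with a
    3x3 intrinsic matrix K. Returns Mx2 pixel coordinates for the M points
    in front of the camera, plus the boolean validity mask of length N."""
    valid = points_xyz[:, 2] > 0.1            # keep points in front of the camera
    uvw = points_xyz[valid] @ K.T             # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
    return uv, valid

def point_to_pixel_kl(logits_2d, logits_3d, uv, hw):
    """Point-to-pixel alignment: mean KL(teacher_2d || student_3d) between
    the 2D prediction at each projected pixel and the matching 3D per-point
    prediction. logits_2d is HxWxC, logits_3d is MxC, uv is Mx2."""
    h, w = hw
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    p2d = softmax(logits_2d[v, u])            # teacher: per-pixel class distribution
    p3d = softmax(logits_3d)                  # student: per-point class distribution
    kl = np.sum(p2d * (np.log(p2d + 1e-8) - np.log(p3d + 1e-8)), axis=1)
    return float(np.mean(kl))
```

In a full pipeline this term would be one of several losses; the gather-then-distill pattern is the same whether the teacher is a source-trained 2D network or a mixed-sample prediction, though the mixing and prototype losses are not shown here.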

Related research

08/12/2023
BEV-DG: Cross-Modal Learning under Bird's-Eye View for Domain Generalization of 3D Semantic Segmentation
Cross-modal Unsupervised Domain Adaptation (UDA) aims to exploit the com...

04/23/2023
Walking Your LiDOG: A Journey Through Multiple Domains for LiDAR Semantic Segmentation
The ability to deploy robots that can operate safely in diverse environm...

03/21/2022
Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation
This work investigates learning pixel-wise semantic image segmentation i...

07/30/2021
Sparse-to-dense Feature Matching: Intra and Inter domain Cross-modal Learning in Domain Adaptation for 3D Semantic Segmentation
Domain adaptation is critical for success when confronting with the lack...

04/14/2023
Cross-domain Food Image-to-Recipe Retrieval by Weighted Adversarial Learning
Food image-to-recipe aims to learn an embedded space linking the rich se...

04/19/2023
CrossFusion: Interleaving Cross-modal Complementation for Noise-resistant 3D Object Detection
The combination of LiDAR and camera modalities is proven to be necessary...

10/27/2022
3D Shape Knowledge Graph for Cross-domain and Cross-modal 3D Shape Retrieval
With the development of 3D modeling and fabrication, 3D shape retrieval ...
