Distillation with Contrast is All You Need for Self-Supervised Point Cloud Representation Learning

02/09/2022
by Kexue Fu, et al.

In this paper, we propose a simple and general framework for self-supervised point cloud representation learning. Human beings understand the 3D world by extracting two levels of information and establishing the relationship between them: the global shape of an object and its local structures. However, few existing studies in point cloud representation learning have explored how to learn both global shapes and local-to-global relationships without relying on a specific network architecture. Inspired by how humans understand the world, we use knowledge distillation to learn both global shape information and the relationship between global shape and local structures. We further combine contrastive learning with knowledge distillation so that the teacher network is updated more effectively. Our method achieves state-of-the-art performance on linear classification and several other downstream tasks. In particular, we develop a variant of ViT for 3D point cloud feature extraction that achieves results comparable to existing backbones when combined with our framework. Visualizations of the attention maps show that our model understands a point cloud by combining global shape information with multiple local structural cues, which is consistent with the motivation of our representation learning method. Our code will be released soon.
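The two ingredients the abstract names, knowledge distillation with a momentum-updated teacher and a contrastive objective relating local structures to the global shape, can be sketched in a few lines. The sketch below is an illustration under assumed conventions (an InfoNCE-style loss and an EMA teacher update, as used in common contrastive-distillation frameworks), not the paper's actual implementation: the student's embedding of a local crop is pulled toward the teacher's embedding of the full shape and pushed away from teacher embeddings of other shapes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(student_local, teacher_global, teacher_negatives, tau=0.1):
    """Contrastive distillation loss (InfoNCE): the student embedding of a
    local structure is attracted to the teacher embedding of the whole
    shape (positive) and repelled from teacher embeddings of other shapes
    in the batch (negatives)."""
    pos = math.exp(cosine(student_local, teacher_global) / tau)
    neg = sum(math.exp(cosine(student_local, n) / tau)
              for n in teacher_negatives)
    return -math.log(pos / (pos + neg))

def ema_update(teacher_params, student_params, momentum=0.996):
    """Momentum (EMA) update: the teacher's weights slowly track the
    student's, giving a more stable distillation target."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

The loss is low when the local embedding already aligns with its own shape's global embedding, and high when it aligns with a different shape; gradients flow only through the student, while the teacher is refreshed by `ema_update` after each step.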


