OctFormer: Octree-based Transformers for 3D Point Clouds

05/04/2023
by Peng-Shuai Wang, et al.

We propose octree-based transformers, named OctFormer, for 3D point cloud learning. OctFormer not only serves as a general and effective backbone for 3D point cloud segmentation and object detection but also has linear complexity and scales to large-scale point clouds. The key challenge in applying transformers to point clouds is reducing the quadratic, and thus overwhelming, computational complexity of attention. To address this issue, several works divide point clouds into non-overlapping windows and constrain attention to each local window. However, the number of points per window varies greatly, impeding efficient execution on GPUs. Observing that attention is robust to the shapes of local windows, we propose a novel octree attention, which leverages the sorted shuffled keys of octrees to partition point clouds into local windows containing a fixed number of points while allowing window shapes to vary freely. We also introduce a dilated octree attention to further expand the receptive field. Our octree attention can be implemented in 10 lines of code with open-source libraries and runs 17 times faster than other point cloud attention mechanisms when the point number exceeds 200k. Built upon the octree attention, OctFormer can be easily scaled up and achieves state-of-the-art performance on a series of 3D segmentation and detection benchmarks, surpassing previous sparse-voxel-based CNNs and point cloud transformers in both efficiency and effectiveness. Notably, on the challenging ScanNet200 dataset, OctFormer outperforms sparse-voxel-based CNNs by 7.3 in mIoU. Our code and trained models are available at https://wang-ps.github.io/octformer.
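To illustrate the idea behind the octree attention described above, below is a minimal sketch in PyTorch, assuming each point comes with integer grid coordinates from an octree. The helper names, the bit-interleaving routine, and the use of torch.nn.functional.scaled_dot_product_attention are illustrative assumptions, not the authors' implementation; the real OctFormer code handles padding and per-window masking more carefully.

```python
# A minimal sketch of octree-style window attention (illustrative, not the authors' code).
# Assumption: each point has integer grid coordinates; interleaving their bits yields an
# octree shuffled key (z-order / Morton code), so sorting by key groups nearby points.
import torch
import torch.nn.functional as F


def shuffled_key(xyz_int: torch.Tensor, bits: int = 16) -> torch.Tensor:
    """Interleave the bits of (N, 3) integer coordinates into a Morton/shuffled key."""
    key = torch.zeros(xyz_int.shape[0], dtype=torch.int64, device=xyz_int.device)
    for b in range(bits):
        for a in range(3):  # x, y, z
            key |= ((xyz_int[:, a].long() >> b) & 1) << (3 * b + a)
    return key


def octree_window_attention(feat, xyz_int, window_size=32, dilation=1):
    """Self-attention within fixed-size windows of points sorted by shuffled keys.

    feat: (N, C) point features; xyz_int: (N, 3) integer octree coordinates.
    dilation > 1 picks every dilation-th point inside a block, mimicking dilated attention.
    """
    n, c = feat.shape
    order = torch.argsort(shuffled_key(xyz_int))          # sort along the z-order curve
    pad = (-n) % (window_size * dilation)                 # pad to a multiple of the block size
    x = torch.cat([feat[order], feat.new_zeros(pad, c)], dim=0)
    # Group consecutive blocks and stride by `dilation` to form fixed-size windows.
    x = x.view(-1, window_size, dilation, c).transpose(1, 2).reshape(-1, window_size, c)
    # Plain scaled dot-product attention inside each window (q = k = v here for brevity;
    # a real block would apply learned projections and mask the padded entries).
    y = F.scaled_dot_product_attention(x, x, x)
    # Undo the window grouping, the padding, and the sorting.
    y = y.reshape(-1, dilation, window_size, c).transpose(1, 2).reshape(-1, c)[:n]
    inverse = torch.empty_like(order)
    inverse[order] = torch.arange(n, device=order.device)
    return y[inverse]
```

Because every window holds exactly window_size points, the attention reduces to dense batched tensor operations regardless of how the window shapes vary in space, which is what makes the scheme GPU-friendly.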

Related research

SWFormer: Sparse Window Transformer for 3D Object Detection in Point Clouds (10/13/2022)
3D object detection in point clouds is a core component for modern robot...

FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer (01/20/2023)
Transformer, as an alternative to CNN, has been proven effective in many...

Hierarchical Adaptive Voxel-guided Sampling for Real-time Applications in Large-scale Point Clouds (05/23/2023)
While point-based neural architectures have demonstrated their efficacy,...

CurveCloudNet: Processing Point Clouds with 1D Structure (03/21/2023)
Modern depth sensors such as LiDAR operate by sweeping laser-beams acros...

CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning (07/31/2022)
Processing 3D data efficiently has always been a challenge. Spatial oper...

DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets (01/15/2023)
Designing an efficient yet deployment-friendly 3D backbone to handle spa...

Applying Plain Transformers to Real-World Point Clouds (02/28/2023)
Due to the lack of inductive bias, transformer-based models usually requ...
