PatchFormer: A Versatile 3D Transformer Based on Patch Attention
The 3D vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major 3D learning benchmarks. However, existing 3D Transformers need to generate a large attention map, which has quadratic complexity (both in space and time) with respect to input size. To address this shortcoming, we introduce patch-attention, which adaptively learns a much smaller set of bases upon which the attention maps are computed. Through a weighted summation over these bases, patch-attention not only captures the global shape context but also achieves linear complexity with respect to input size. In addition, we propose a lightweight Multi-Scale Attention (MSA) block to build attention among features at different scales, providing the model with multi-scale features. Based on these modules, we construct our neural architecture, called PatchFormer. Extensive experiments demonstrate that our network achieves strong accuracy on general 3D recognition tasks with a 7.3x speed-up over previous 3D Transformers.
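The abstract only describes patch-attention at a high level, so the following is a minimal sketch of how a linear-complexity attention over a learned set of bases could look in PyTorch. The module name `PatchAttention`, the parameter `num_bases`, and the soft-assignment scheme for constructing the bases are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class PatchAttention(nn.Module):
    """Sketch of attention against M learned bases (M << N), linear in N."""
    def __init__(self, dim, num_bases=32, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        # Hypothetical basis construction: softly assign the N input tokens
        # to `num_bases` summary tokens.
        self.to_bases = nn.Linear(dim, num_bases)
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (B, N, C) point features
        B, N, C = x.shape
        H, D = self.num_heads, C // self.num_heads

        # Adaptively pool the input into a small set of bases: (B, M, C).
        assign = self.to_bases(x).softmax(dim=1)            # (B, N, M)
        bases = torch.einsum('bnm,bnc->bmc', assign, x)     # (B, M, C)

        q = self.q(x).reshape(B, N, H, D).transpose(1, 2)   # (B, H, N, D)
        k, v = (self.kv(bases)
                .reshape(B, -1, 2, H, D)
                .permute(2, 0, 3, 1, 4))                    # each (B, H, M, D)

        # Attention map is N x M instead of N x N, so cost grows linearly with N.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)

        # Weighted summation over the bases recovers per-point features.
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Usage: 1024 points with 64-dim features.
x = torch.randn(2, 1024, 64)
print(PatchAttention(dim=64)(x).shape)  # torch.Size([2, 1024, 64])
```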