More than Encoder: Introducing Transformer Decoder to Upsample

06/20/2021
by   Yijiang Li, et al.

General segmentation models downsample images and then upsample them to restore resolution for pixel-level prediction. In this scheme, the upsampling technique is vital for preserving information and achieving better performance. In this paper, we present a new upsampling approach, Attention Upsample (AU), which can serve as a general upsampling method and be incorporated into any segmentation model that possesses lateral connections. AU leverages pixel-level attention to model long-range dependencies and global information for better reconstruction. It consists of an Attention Decoder (AD) and a bilinear upsample acting as a residual connection to complement the upsampled features. AD adopts the decoder idea from the transformer, upsampling features conditioned on local and detailed information from the contracting path. Moreover, considering the extensive memory and computation cost of pixel-level attention, we further propose a window attention scheme that restricts attention computation to local windows instead of the global range. Incorporating window attention, we denote our decoder as the Window Attention Decoder (WAD) and our upsampling method as Window Attention Upsample (WAU). We test our method on the classic U-Net structure with lateral connections delivering information from the contracting path, and achieve state-of-the-art performance on the Synapse (80.30 DSC, 23.12 HD) and MSD Brain (74.75 DSC) datasets.
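The core mechanism the abstract describes can be sketched in a few lines: queries come from the upsampled coarse features, keys and values come from the lateral (contracting-path) features, attention is restricted to non-overlapping local windows, and a plain upsample is added back as a residual branch. The sketch below is a minimal NumPy illustration under simplifying assumptions, not the authors' implementation: it uses single-head attention without learned projections, and nearest-neighbour repetition stands in for the bilinear residual branch. All function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def upsample2x(x):
    # Nearest-neighbour 2x upsample, a stand-in for the bilinear residual branch.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def window_partition(x, w):
    # (C, H, W) -> (num_windows, w*w, C): split the map into w x w windows.
    C, H, W = x.shape
    x = x.reshape(C, H // w, w, W // w, w)
    return x.transpose(1, 3, 2, 4, 0).reshape(-1, w * w, C)

def window_merge(wins, w, H, W):
    # Inverse of window_partition: (num_windows, w*w, C) -> (C, H, W).
    C = wins.shape[-1]
    x = wins.reshape(H // w, W // w, w, w, C).transpose(4, 0, 2, 1, 3)
    return x.reshape(C, H, W)

def window_attention_upsample(coarse, skip, w=4):
    # coarse: (C, H/2, W/2) low-resolution decoder features
    # skip:   (C, H, W) lateral features from the contracting path
    C, H, W = skip.shape
    up = upsample2x(coarse)                 # residual branch
    q = window_partition(up, w)             # queries from upsampled features
    kv = window_partition(skip, w)          # keys/values from skip features
    attn = softmax(q @ kv.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ kv                         # per-window cross-attention
    return window_merge(out, w, H, W) + up  # complement with residual upsample
```

Because attention is computed independently inside each w x w window, its cost is linear in the number of pixels rather than quadratic, which is what makes pixel-level attention tractable at decoder resolutions.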


