SDLFormer: A Sparse and Dense Locality-enhanced Transformer for Accelerated MR Image Reconstruction

by Rahul G. S., et al.

Transformers have emerged as viable alternatives to convolutional neural networks owing to their ability to learn non-local region relationships in the spatial domain. The self-attention mechanism enables transformers to capture long-range dependencies in images, which is desirable for accelerated MRI reconstruction because the effect of undersampling is non-local in the image domain. Despite their computational efficiency, window-based transformers suffer from restricted receptive fields, as dependencies are limited to the scope of each image window. We propose a window-based transformer network that integrates a dilated attention mechanism and convolution for accelerated MRI reconstruction. The proposed network consists of dilated and dense neighborhood attention transformers to enhance distant-neighborhood pixel relationships, and introduces depth-wise convolutions within the transformer module to learn low-level translation-invariant features. The proposed model is trained in a self-supervised manner. We perform extensive experiments on multi-coil MRI acceleration for coronal PD, coronal PDFS, and axial T2 contrasts with 4x and 5x undersampling, using self-supervised learning based on k-space splitting. We compare our method against other reconstruction architectures and the parallel-domain self-supervised learning baseline. Results show that the proposed model improves on average by (i) around 1.40 dB in PSNR and around 0.028 in SSIM over other architectures, and (ii) around 1.44 dB in PSNR and around 0.029 in SSIM over parallel-domain self-supervised learning. The code is available at
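To illustrate the difference between the dense and dilated neighborhood attention described above, here is a minimal pure-Python sketch (not the authors' implementation) of which 1-D pixel coordinates a neighborhood-attention query attends to. The function name and its border-clamping policy are assumptions for illustration; the point is that a dilated window covers more distant neighbors with the same number of attended positions.

```python
def neighborhood(center, size, kernel=3, dilation=1):
    """Return the 1-D coordinates attended to by a neighborhood-attention
    query at `center`, for a given kernel size and dilation.

    The window is shifted (clamped) near the borders so that all attended
    coordinates stay inside [0, size) -- one common convention, assumed here.
    """
    half = kernel // 2
    span = half * dilation  # distance from center to the window edge
    # Shift the window inward when the centered window would leave the image.
    start = min(max(center - span, 0), size - 1 - 2 * span)
    return [start + i * dilation for i in range(kernel)]

# Dense neighborhood (dilation 1) vs. dilated neighborhood (dilation 3):
dense = neighborhood(center=8, size=16, kernel=3, dilation=1)    # nearby pixels
dilated = neighborhood(center=8, size=16, kernel=3, dilation=3)  # distant pixels
```

With the same kernel size, the dilated variant reaches pixels three times farther from the query, which is how alternating dense and dilated attention blocks can enlarge the effective receptive field without growing the window.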

