Attri-VAE: attribute-based, disentangled and interpretable representations of medical images with variational autoencoders

03/20/2022
by Irem Cetin, et al.

Deep learning (DL) methods in which interpretability is intrinsically part of the model are needed to better understand the relationship between clinical and imaging-based attributes and DL outcomes, thus facilitating their use in medical decision-making. Latent space representations built with variational autoencoders (VAE) do not ensure individual control of data attributes. Attribute-based methods enforcing attribute disentanglement have been proposed in the literature for classical computer vision tasks on benchmark data. In this paper, we propose a VAE approach, the Attri-VAE, that includes an attribute regularization term to associate clinical and medical imaging attributes with different regularized dimensions of the generated latent space, enabling a better disentangled interpretation of the attributes. Furthermore, the generated attention maps explain how the attributes are encoded in the regularized latent space dimensions. The Attri-VAE approach was evaluated on healthy subjects and myocardial infarction patients using clinical, cardiac morphology, and radiomics attributes. The proposed model provides an excellent trade-off between reconstruction fidelity, disentanglement, and interpretability, outperforming state-of-the-art VAE approaches according to several quantitative metrics. The resulting latent space allows the generation of realistic synthetic data along the trajectory between two distinct input samples, or along a specific attribute dimension, to better interpret changes between different cardiac conditions.
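To give a flavor of the attribute regularization term described above, here is a minimal NumPy sketch of one common formulation from the attribute-regularized VAE literature: across a mini-batch, the pairwise ordering of values in a chosen latent dimension is pushed to agree with the pairwise ordering of the target attribute. The function name, the `delta` sharpness parameter, and this exact loss form are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def attribute_regularization(z_dim, attr, delta=1.0):
    """Sketch of an attribute-regularization penalty (illustrative).

    z_dim : 1-D array, values of one latent dimension over a mini-batch
    attr  : 1-D array, corresponding attribute values (e.g. a radiomics
            or cardiac-morphology measurement)
    The loss is small when samples with a larger attribute value also
    have a larger value in the regularized latent dimension.
    """
    # all pairwise differences within the batch (broadcasting)
    dz = z_dim[:, None] - z_dim[None, :]   # latent differences
    da = attr[:, None] - attr[None, :]     # attribute differences
    # soft sign of latent differences vs hard sign of attribute
    # differences; disagreement in ordering is penalized
    return np.mean(np.abs(np.tanh(delta * dz) - np.sign(da)))
```

With a latent dimension sorted in the same order as the attribute, the penalty is small; reversing the attribute ordering makes it large, which is the pressure that ties one attribute to one latent dimension.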


Related research

- Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation (04/22/2020)
- Attribute Regularized Soft Introspective VAE: Towards Cardiac Attribute Regularization Through MRI Domains (07/24/2023)
- Attribute-based Regularization of VAE Latent Spaces (04/11/2020)
- Assessing the Impact of Blood Pressure on Cardiac Function Using Interpretable Biomarkers and Variational Autoencoders (08/13/2019)
- Gated Variational AutoEncoders: Incorporating Weak Supervision to Encourage Disentanglement (11/15/2019)
- Learning Invariances for Interpretability using Supervised VAE (07/15/2020)
- Global and Local Interpretability for Cardiac MRI Classification (06/14/2019)
