Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction

01/20/2021
by Gang Qu, et al.

Objective: Multi-modal functional magnetic resonance imaging (fMRI) can be used to predict individual behavioral and cognitive traits from brain connectivity networks. Methods: To exploit the complementary information in multi-modal fMRI, we propose an interpretable multi-modal graph convolutional network (MGCN) model that incorporates both the fMRI time series and the functional connectivity (FC) between each pair of brain regions. Specifically, our model learns a graph embedding from individual brain networks derived from the multi-modal data. A manifold-based regularization term is then imposed to account for the relationships among subjects both within and across modalities. Furthermore, we propose gradient-weighted regression activation mapping (Grad-RAM) and edge mask learning to interpret the model, which are used to identify significant cognition-related biomarkers. Results: We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort by predicting individual Wide Range Achievement Test (WRAT) scores. Our model achieves superior predictive performance over single-modality GCNs and other competing approaches, and the identified biomarkers are cross-validated across the different interpretation approaches. Conclusion and Significance: This paper develops a new interpretable graph deep learning framework for cognitive ability prediction, with the potential to overcome the limitations of several current data-fusion models. The results demonstrate the power of MGCN in analyzing multi-modal fMRI and discovering significant biomarkers for human brain studies.
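
Since the abstract describes the architecture only at a high level, the PyTorch snippet below is a hedged sketch of the main ideas rather than the authors' implementation: modality-specific GCN branches over per-subject brain graphs, a fused linear regression head for the cognitive score, and a graph-Laplacian-style manifold penalty on subject embeddings. All names (GCNLayer, MGCNSketch, manifold_penalty), layer sizes, the mean-pooling readout, the random subject-similarity matrix, the 0.1 regularization weight, and the final input-gradient saliency (a rough stand-in for Grad-RAM) are illustrative assumptions.

```python
# Illustrative sketch (assumptions throughout): one GCN branch per fMRI modality,
# each operating on a per-subject brain graph; graph embeddings are fused for
# score regression, and a Laplacian-style manifold penalty pulls together the
# embeddings of subjects that are similar under a given subject-similarity graph.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """Plain graph convolution: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: (regions, regions) normalized adjacency, e.g. thresholded FC
        # h:     (regions, in_dim) node features, e.g. fMRI time-series segments
        return torch.relu(a_hat @ self.lin(h))


class MGCNSketch(nn.Module):
    """Modality-specific GCN branches plus a fused linear regression head."""

    def __init__(self, in_dims=(120, 120), hid=64, emb=32):
        super().__init__()
        self.gc1 = nn.ModuleList(GCNLayer(d, hid) for d in in_dims)
        self.gc2 = nn.ModuleList(GCNLayer(hid, emb) for _ in in_dims)
        self.head = nn.Linear(emb * len(in_dims), 1)

    def forward(self, graphs):
        # graphs: list of (a_hat, features) pairs, one pair per modality
        pooled = []
        for m, (a_hat, x) in enumerate(graphs):
            h = self.gc2[m](a_hat, self.gc1[m](a_hat, x))
            pooled.append(h.mean(dim=0))        # mean-pool regions -> graph embedding
        z = torch.cat(pooled)                   # fused multi-modal embedding
        return self.head(z), z                  # predicted score, embedding


def manifold_penalty(embeddings, similarity):
    """Graph-Laplacian form: sum_ij S_ij * ||z_i - z_j||^2 over subject pairs."""
    dists = torch.cdist(embeddings, embeddings) ** 2
    return (similarity * dists).sum() / embeddings.shape[0] ** 2


# Toy usage: 8 subjects, 2 modalities, 90 regions, 120 features per region.
torch.manual_seed(0)
model = MGCNSketch()
subjects = [[(torch.softmax(torch.randn(90, 90), dim=1), torch.randn(90, 120))
             for _ in range(2)] for _ in range(8)]
preds, embs = zip(*(model(s) for s in subjects))
embs, scores = torch.stack(embs), torch.randn(8, 1)
loss = (nn.functional.mse_loss(torch.stack(preds), scores)
        + 0.1 * manifold_penalty(embs, torch.rand(8, 8)))  # lambda = 0.1 is arbitrary
loss.backward()

# Crude input-gradient saliency, a stand-in for Grad-RAM: gradient of the
# prediction w.r.t. one modality's node features, averaged per brain region.
a_hat0, x0 = subjects[0][0]
x0 = x0.clone().requires_grad_(True)
pred, _ = model([(a_hat0, x0), subjects[0][1]])
pred.backward()
region_importance = x0.grad.abs().mean(dim=1)  # one value per brain region
```

The random subject-similarity matrix and input tensors are used purely to keep the example self-contained; a real pipeline would build the similarity graph from the subjects' multi-modal connectivity data and use the actual fMRI-derived node features.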

Related research

04/27/2022
Interpretable Graph Convolutional Network of Multi-Modality Brain Imaging for Alzheimer's Disease Diagnosis
Identification of brain regions related to the specific neurological dis...

04/01/2020
Mapping individual differences in cortical architecture using multi-view representation learning
In neuroscience, understanding inter-individual differences has recently...

05/24/2022
Highly Accurate FMRI ADHD Classification using time distributed multi modal 3D CNNs
This work proposes an algorithm for fMRI data analysis for the classific...

06/16/2023
Fusing Structural and Functional Connectivities using Disentangled VAE for Detecting MCI
Brain network analysis is a useful approach to studying human brain diso...

05/27/2019
Multi-Modal Graph Interaction for Multi-Graph Convolution Network in Urban Spatiotemporal Forecasting
Graph convolution network based approaches have been recently used to mo...

09/20/2021
Dyadformer: A Multi-modal Transformer for Long-Range Modeling of Dyadic Interactions
Personality computing has become an emerging topic in computer vision, d...

07/04/2022
Interpretable Fusion Analytics Framework for fMRI Connectivity: Self-Attention Mechanism and Latent Space Item-Response Model
There have been several attempts to use deep learning based on brain fMR...
