Cramér-Rao bound-informed training of neural networks for quantitative MRI

by Xiaoxia Zhang, et al.

Neural networks are increasingly used to estimate parameters in quantitative MRI, in particular in magnetic resonance fingerprinting. Their advantages over the gold-standard non-linear least-squares fitting are their superior speed and their immunity to the non-convexity of many fitting problems. We find, however, that in heterogeneous parameter spaces, i.e. in spaces in which the variance of the estimated parameters varies considerably, good performance is hard to achieve and requires arduous tweaking of the loss function, the hyperparameters, and the distribution of the training data in parameter space. Here, we address these issues with a theoretically well-founded loss function: the Cramér-Rao bound (CRB) provides a theoretical lower bound for the variance of an unbiased estimator, and we propose to normalize the squared error with the respective CRB. With this normalization, we balance the contributions of hard-to-estimate and not-so-hard-to-estimate parameters and areas in parameter space, avoiding a dominance of the former in the overall training loss. Further, the CRB-based loss function equals one for a maximally efficient unbiased estimator, which we consider the ideal estimator; hence, the proposed CRB-based loss function provides an absolute evaluation metric. We compare a network trained with the CRB-based loss to a network trained with the commonly used mean squared error loss and demonstrate the advantages of the former in numerical, phantom, and in vivo experiments.
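The core idea described in the abstract can be sketched in a few lines: divide each parameter's squared estimation error by its Cramér-Rao bound before averaging, so that parameters with intrinsically high variance do not dominate the loss. The sketch below is a minimal illustration under assumed inputs (the function name `crb_loss` and the toy values are ours, not the paper's); the CRB values would in practice be computed from the Fisher information of the signal model.

```python
import numpy as np

def crb_loss(theta_hat, theta_true, crb):
    """CRB-normalized squared-error loss (illustrative sketch).

    Each parameter's squared error is divided by its Cramér-Rao bound,
    balancing hard-to-estimate (large CRB) and easy-to-estimate (small
    CRB) parameters. For a maximally efficient unbiased estimator the
    expected squared error equals the CRB, so the loss approaches 1,
    which gives an absolute reference point for training.
    """
    return np.mean((theta_hat - theta_true) ** 2 / crb)

# Toy example: two parameters on very different scales, with very
# different bounds on their estimation variance.
theta_true = np.array([1.0, 100.0])       # e.g. T1 and T2-like scales
crb = np.array([0.01, 25.0])              # assumed per-parameter CRBs
theta_hat = theta_true + np.sqrt(crb)     # errors at the efficiency limit
print(crb_loss(theta_hat, theta_true, crb))  # → 1.0
```

With a plain mean squared error, the second parameter's error (25.0) would swamp the first (0.01); after CRB normalization both contribute equally, which is exactly the balancing effect the abstract describes.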




