Extending the Relative Seriality Formalism for Interpretable Deep Learning of Normal Tissue Complication Probability Models

11/25/2021
by Tahir I. Yusufaly, et al.

We formally demonstrate that the relative seriality model of Källman et al. maps exactly onto a simple type of convolutional neural network. This approach leads to a natural interpretation of feedforward connections in the convolutional layer and stacked intermediate pooling layers in terms of bystander effects and hierarchical tissue organization, respectively. These results serve as proof-of-principle for radiobiologically interpretable deep learning of normal tissue complication probability using large-scale imaging and dosimetry datasets.
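For context, the relative seriality model referenced in the abstract is not reproduced here, but its standard Källman form can be sketched as below. This is a minimal illustration, assuming the usual Poisson-based voxel response P(D) = 2^(-exp(eγ(1 - D/D50))) and the relative seriality NTCP aggregation over voxel doses D_i with fractional volumes v_i; the parameter names (`d50`, `gamma`, `s`) follow common convention and are not taken from the paper itself.

```python
import math

def poisson_response(d, d50, gamma):
    """Per-voxel Poisson response probability P(D) in the standard Kallman form.

    d50:   dose giving 50% response probability
    gamma: normalized dose-response slope at D50
    """
    return 2.0 ** (-math.exp(math.e * gamma * (1.0 - d / d50)))

def relative_seriality_ntcp(doses, volumes, d50, gamma, s):
    """Relative seriality NTCP over a differential dose-volume histogram.

    NTCP = [1 - prod_i (1 - P(D_i)^s)^{v_i}]^{1/s}

    doses:   per-bin (or per-voxel) doses D_i
    volumes: matching fractional volumes v_i (summing to 1)
    s:       relative seriality parameter (s -> 0: parallel; s = 1: serial)
    """
    prod = 1.0
    for d, v in zip(doses, volumes):
        p = poisson_response(d, d50, gamma)
        prod *= (1.0 - p ** s) ** v
    return (1.0 - prod) ** (1.0 / s)
```

As a sanity check on the formula: a uniform dose of exactly D50 over the whole organ gives P(D) = 0.5 in every voxel, and the seriality exponents cancel so that NTCP = 0.5 for any s > 0. It is this product-over-voxels structure, followed by a power-law "pooling", that the paper identifies with convolution and pooling layers.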


