Explainable Techniques for Analyzing Flow Cytometry Cell Transformers

by Florian Kowarsch, et al.

Explainability for deep learning models is especially important in clinical applications, where the decisions of automated systems have far-reaching consequences. While various post-hoc explainability methods, such as attention visualization and saliency maps, already exist for common data modalities, including natural language and images, little work has been done to adapt them to Flow Cytometry (FCM) data. In this work, we evaluate a transformer architecture called ReluFormer that eases attention visualization, and we propose a gradient-based and an attention-based visualization technique tailored to FCM. We qualitatively evaluate these techniques for cell classification and polygon regression on pediatric Acute Lymphoblastic Leukemia (ALL) FCM samples. The results outline the model's decision process and demonstrate how to use the proposed techniques to inspect a trained model. The gradient-based visualization not only identifies the cells most significant for a particular prediction but also indicates the directions in the FCM feature space along which changes have the greatest impact on the prediction. The attention visualization provides insight into the transformer's decision process when handling FCM data. We show that different attention heads specialize by attending to distinct, biologically meaningful sub-populations in the data, even though the model received only a supervised binary classification signal during training.
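The gradient-based idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it stands in a toy differentiable set-level classifier for the transformer (the pooling, the `tanh` scoring, and all parameter names are assumptions for the example). The gradient of the prediction with respect to each cell's features yields both a per-cell relevance magnitude and a direction in marker space along which changes move the prediction most.

```python
import numpy as np

def gradient_saliency(cells, w, b):
    """Per-cell gradient saliency for a toy set-level classifier.

    Toy stand-in model (an assumption for this sketch):
        s_i = tanh(w . x_i + b)       per-cell score
        p   = sigmoid(mean_i s_i)     sample-level prediction
    The gradient dp/dx_i gives, for every cell, a magnitude
    (how much the cell matters) and a direction in feature space
    (which marker shifts would change the prediction most).
    """
    n = len(cells)
    s = np.tanh(cells @ w + b)                  # (n,) per-cell scores
    p = 1.0 / (1.0 + np.exp(-s.mean()))         # scalar prediction
    # chain rule: dp/dx_i = p(1-p) * (1/n) * (1 - s_i^2) * w
    grad = (p * (1 - p) / n) * (1 - s**2)[:, None] * w[None, :]
    importance = np.linalg.norm(grad, axis=1)   # per-cell relevance
    return p, grad, importance
```

Ranking cells by `importance` highlights the events driving a prediction, while the rows of `grad` point in the feature-space directions the paper's visualization displays; in practice the gradient would come from autodiff through the trained transformer rather than a closed-form expression.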

