Evolved Explainable Classifications for Lymph Node Metastases

05/14/2020
by Iam Palatnik de Sousa, et al.

A novel evolutionary approach to Explainable Artificial Intelligence is presented: the "Evolved Explanations" model (EvEx). The methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to automate segmentation parameter tuning in image classification tasks. The dataset studied here is PatchCamelyon, composed of patches extracted from pathology whole-slide images. A publicly available Convolutional Neural Network (CNN) was trained on this dataset to provide a binary classification for the presence or absence of lymph node metastatic tissue. The classifications are then explained by means of evolving segmentations, which seek to optimize three evaluation goals simultaneously. The final explanation is computed as the mean of the explanations generated by the Pareto-front individuals evolved by the genetic algorithm. To assess reproducibility and traceability, each explanation was generated from several randomly chosen seeds, and the results show remarkable agreement between seeds. Despite the stochastic nature of LIME explanations, regions of high explanation weight agree well across the heat maps, as measured by pixel-wise relative standard deviations. The resulting heat maps coincide with expert medical segmentations, demonstrating that the methodology can find high-quality explanations (according to the evaluation metrics) with the novel advantage of automated parameter fine-tuning. These results give additional insight into the black-box decision making of neural networks on medical data.
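The core loop described above can be sketched in a few lines: score a population of candidate segmentation parameter sets against multiple objectives, extract the Pareto front, and average the explanations produced by the front's individuals. Everything below is a toy stand-in, not the paper's implementation: the objective scores, the `toy_explanation` heat map, and all parameter shapes are hypothetical placeholders for LIME heat maps and the paper's three evaluation metrics.

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows of `scores`; all objectives maximized."""
    n = len(scores)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is >= on every objective and > on at least one
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                keep[i] = False
                break
    return np.flatnonzero(keep)

def toy_explanation(params: np.ndarray, seed: int, size: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a LIME heat map under one parameter set."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=params.mean(), scale=0.1, size=(size, size))

rng = np.random.default_rng(42)
population = rng.uniform(0.0, 1.0, size=(20, 3))  # candidate segmentation params
scores = rng.uniform(0.0, 1.0, size=(20, 3))      # stand-in scores, 3 objectives

front = pareto_front(scores)
heat_maps = np.stack([toy_explanation(population[i], seed=int(i)) for i in front])

# Final explanation: mean over the Pareto-front individuals' heat maps.
final_explanation = heat_maps.mean(axis=0)

# Pixel-wise relative standard deviation, the agreement measure the
# abstract mentions (epsilon avoids division by zero).
rsd = heat_maps.std(axis=0) / (np.abs(final_explanation) + 1e-12)
```

In a real run, `scores` would come from evaluating each individual's segmentation against the paper's three metrics across GA generations, and `toy_explanation` would be replaced by an actual LIME call on the CNN.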


research
09/07/2022

Explainable Artificial Intelligence to Detect Image Spam Using Convolutional Neural Network

Image spam threat detection has continually been a popular area of resea...
research
11/28/2022

Explaining Deep Convolutional Neural Networks for Image Classification by Evolving Local Interpretable Model-agnostic Explanations

Deep convolutional neural networks have proven their effectiveness, and ...
research
11/11/2022

REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study

Explainable artificial intelligence is proposed to provide explanations ...
research
08/22/2022

Shapelet-Based Counterfactual Explanations for Multivariate Time Series

As machine learning and deep learning models have become highly prevalen...
research
07/21/2021

GLIME: A new graphical methodology for interpretable model-agnostic explanations

Explainable artificial intelligence (XAI) is an emerging new domain in w...
research
08/11/2021

Logic Explained Networks

The large and still increasing popularity of deep learning clashes with ...
research
04/11/2022

Generalizing Adversarial Explanations with Grad-CAM

Gradient-weighted Class Activation Mapping (Grad-CAM) is an example-ba...
