Deep Prototypical-Parts Ease Morphological Kidney Stone Identification and are Competitively Robust to Photometric Perturbations

04/08/2023
by   Daniel Flores-Araiza, et al.

Identifying the type of kidney stones allows urologists to determine the cause of their formation, improving the prescription of appropriate treatments to diminish future relapses. Currently, the associated ex-vivo diagnosis (known as Morpho-constitutional Analysis, MCA) is time-consuming, expensive and demands a great deal of experience, as its visual analysis component is highly operator dependent. Recently, machine learning methods have been developed for in-vivo endoscopic stone recognition. Deep Learning (DL) based methods outperform non-DL methods in terms of accuracy but lack explainability. Despite this trade-off, when it comes to making high-stakes decisions, it is important to prioritize an understandable Computer-Aided Diagnosis (CADx) that suggests a course of action based on reasonable evidence, rather than a model that prescribes one. In this proposal, we learn Prototypical Parts (PPs) per kidney stone subtype, which are used by the DL model to generate an output classification. Using PPs in the classification task enables case-based reasoning explanations for that output, thus making the model interpretable. In addition, we modify global visual (photometric) characteristics of the input images to assess their relevance to the PPs and the sensitivity of our model's performance. With this, we provide explanations with additional information at the sample, class and model levels, in contrast to previous works. Although our implementation's average accuracy is lower than that of state-of-the-art (SOTA) non-interpretable DL models by 1.5 standard deviations, it remains competitively robust to these photometric perturbations without adversarial training. Thus, learning PPs has the potential to create more robust DL models.
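
The paper's classifier is not reproduced here, but a minimal PyTorch sketch of a ProtoPNet-style prototypical-parts head conveys the idea the abstract describes: each class owns a set of learned latent prototypes, an input image is scored by how closely its feature-map patches match each prototype, and a linear layer turns those similarity scores into class logits. All names, layer sizes, the ResNet-18 backbone, and the ColorJitter transform standing in for the photometric perturbations are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class PrototypicalPartsHead(nn.Module):
    """ProtoPNet-style head (sketch): class logits from prototype similarities."""

    def __init__(self, num_classes=6, protos_per_class=10, proto_dim=128):
        super().__init__()
        self.num_protos = num_classes * protos_per_class
        # Each prototype is a 1x1 patch in the backbone's latent space.
        self.prototypes = nn.Parameter(torch.rand(self.num_protos, proto_dim, 1, 1))
        # Fixed class identity per prototype, enabling per-class case-based explanations.
        self.proto_class = torch.arange(num_classes).repeat_interleave(protos_per_class)
        self.classifier = nn.Linear(self.num_protos, num_classes, bias=False)

    def forward(self, feats):
        # feats: (B, D, H, W) feature map produced by a CNN backbone.
        x2 = (feats ** 2).sum(dim=1, keepdim=True)                        # (B, 1, H, W)
        p2 = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # (1, M, 1, 1)
        xp = F.conv2d(feats, self.prototypes)                             # (B, M, H, W)
        dists = F.relu(x2 - 2.0 * xp + p2)     # squared L2 distance, patch vs. prototype
        # Keep only the closest patch per prototype, then map distance to similarity.
        min_dists = -F.max_pool2d(-dists, kernel_size=dists.shape[2:]).flatten(1)
        sims = torch.log((min_dists + 1.0) / (min_dists + 1e-4))
        return self.classifier(sims), sims, dists


# Assumed wiring: ResNet-18 features reduced to the prototype dimension, plus a
# ColorJitter transform standing in for the photometric perturbations whose
# effect on the PPs and on accuracy is analyzed.
backbone = nn.Sequential(
    *list(torchvision.models.resnet18(weights=None).children())[:-2],
    nn.Conv2d(512, 128, kernel_size=1),
    nn.ReLU(),
)
head = PrototypicalPartsHead()
perturb = torchvision.transforms.ColorJitter(brightness=0.3, contrast=0.3,
                                             saturation=0.3, hue=0.05)

images = torch.rand(4, 3, 224, 224)  # stand-in for endoscopic kidney stone patches
logits_clean, sims_clean, _ = head(backbone(images))
logits_pert, sims_pert, _ = head(backbone(torch.stack([perturb(im) for im in images])))
# Comparing sims_clean with sims_pert indicates which prototypes a perturbation disturbs.
```

The log-ratio similarity and the 1x1 latent prototypes follow the standard ProtoPNet formulation; the paper's actual backbone, prototype count per subtype, and training schedule may differ.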


