Interpretable Deep Neural Networks for Patient Mortality Prediction: A Consensus-based Approach

05/14/2019
by Shaeke Salman, et al.

Deep neural networks have achieved remarkable success on challenging tasks. However, the black-box way in which such networks are trained and tested is not acceptable in critical applications. In particular, the existence of adversarial examples and their overgeneralization to irrelevant inputs make it difficult, if not impossible, to explain the decisions of commonly used neural networks. In this paper, we analyze the underlying generalization mechanism of deep neural networks and propose an (n, k) consensus algorithm that is insensitive to adversarial examples and, at the same time, able to reject irrelevant samples. The consensus algorithm also improves classification accuracy by combining multiple trained deep neural networks. To handle the complexity of deep neural networks, we cluster linear approximations and use the cluster means to capture feature importance. Due to weight symmetry, a small number of clusters suffices to produce a robust interpretation. Experimental results on a health dataset show the effectiveness of our algorithm in enhancing both the prediction accuracy and the interpretability of deep neural network models on one-year patient mortality prediction.
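The abstract names the (n, k) consensus rule but does not spell out its decision logic. The sketch below is one illustrative reading, assuming n independently trained networks vote with their top softmax class and a sample is accepted only when at least k of them agree; the function names and rejection behaviour here are assumptions, not the authors' implementation.

```python
import numpy as np

def consensus_predict(prob_matrix: np.ndarray, k: int):
    """(n, k) consensus over the softmax outputs of n trained networks.

    prob_matrix: shape (n_models, n_classes), one row per model for a
    single input sample. The sample is assigned a class only if at least
    k models vote for that class; otherwise it is rejected, which is how
    adversarial or irrelevant inputs would be filtered out.
    """
    votes = np.bincount(prob_matrix.argmax(axis=1),
                        minlength=prob_matrix.shape[1])
    winner = int(votes.argmax())
    return winner if votes[winner] >= k else None  # None = reject

# Example: 3 models, 2 classes; two of three models agree on class 0.
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.4, 0.6]])
print(consensus_predict(probs, k=2))  # -> 0
print(consensus_predict(probs, k=3))  # -> None (no consensus, rejected)
```

The interpretability step ("cluster linear approximations and use cluster means to capture feature importance") could likewise be sketched, under the assumption that the per-sample linear approximations are input gradients and that an off-the-shelf k-means is used; the cluster count and library choice are placeholders.

```python
from sklearn.cluster import KMeans

def feature_importance_profiles(grad_matrix, n_clusters=5):
    """Cluster per-sample linear approximations (e.g. input gradients of
    shape (n_samples, n_features)) and return the cluster means, each of
    which serves as one feature-importance profile."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(grad_matrix)
    return km.cluster_centers_
```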

Related research:

- DeepSearch: Simple and Effective Blackbox Fuzzing of Deep Neural Networks (10/14/2019)
- Overfitting Mechanism and Avoidance in Deep Neural Networks (01/19/2019)
- Using Attribution to Decode Dataset Bias in Neural Network Models for Chemistry (11/27/2018)
- Logic-inspired Deep Neural Networks (11/20/2019)
- Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients (11/26/2017)
- Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks (02/25/2021)
- Elastic Neural Networks: A Scalable Framework for Embedded Computer Vision (07/02/2018)
