Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond

by Xuhong Li et al.

Deep neural networks are well known for their superb performance on a variety of machine learning and artificial intelligence tasks. However, due to their over-parameterized, black-box nature, it is often difficult to understand the predictions of deep models. In recent years, many interpretation tools have been proposed to explain or reveal how deep models make decisions. In this paper, we review this line of research and attempt a comprehensive survey. Specifically, we introduce and clarify two basic concepts, interpretations and interpretability, which are often conflated. First, to cover the research on interpretations, we elaborate the designs of several recent interpretation algorithms from different perspectives by proposing a new taxonomy. Then, to understand interpretation results, we survey the performance metrics for evaluating interpretation algorithms. Further, we summarize existing work on evaluating models' interpretability using "trustworthy" interpretation algorithms. Finally, we review and discuss the connections between deep models' interpretations and other factors, such as adversarial robustness and data augmentation, and we introduce several open-source libraries for interpretation algorithms and evaluation approaches.
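To make the two surveyed themes concrete, here is a minimal, NumPy-only sketch of (1) an input-gradient attribution (a basic interpretation algorithm) and (2) a deletion-style faithfulness check (a basic evaluation metric). This is an illustrative toy on a linear classifier, not any specific method from the paper; all names (`saliency`, `logits`, the toy weights) are hypothetical.

```python
import numpy as np

# Toy setup: a linear classifier with 4 input features and 3 classes.
# Hypothetical illustration only -- not a method defined in the survey.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = np.zeros(3)

def logits(x):
    return x @ W + b

def saliency(x):
    """Input-gradient attribution for the predicted class.

    For z = x @ W + b, the gradient dz[c]/dx is simply W[:, c],
    so the attribution of the predicted class c is that weight column.
    """
    c = int(np.argmax(logits(x)))
    return W[:, c], c

x = rng.normal(size=4)
attr, c = saliency(x)

# Deletion-style faithfulness check: zero out the feature contributing
# most to class c and measure how much the predicted-class logit drops.
# A faithful attribution should typically produce a large drop here.
top = int(np.argmax(attr * x))       # feature with the largest contribution
x_del = x.copy()
x_del[top] = 0.0
drop = logits(x)[c] - logits(x_del)[c]
print(f"class={c}, deleted feature={top}, logit drop={drop:.3f}")
```

For the linear model the drop equals exactly `x[top] * W[top, c]`, i.e. the deleted feature's contribution; for deep networks the same deletion protocol is applied empirically, which is what the faithfulness metrics surveyed in the paper formalize.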


