
Deep Learning Reproducibility and Explainable AI (XAI)

by A.-M. Leventi-Peetz, et al.

The nondeterminism of Deep Learning (DL) training algorithms and its influence on the explainability of neural network (NN) models are investigated in this work with the help of image classification examples. To discuss the issue, two convolutional neural networks (CNN) have been trained and their results compared. The comparison serves to explore the feasibility of creating deterministic, robust DL models and deterministic explainable artificial intelligence (XAI) in practice. The successes and limitations of all efforts carried out here are described in detail. The source code of the attained deterministic models is listed in this work. Reproducibility is indexed as a development-phase component of the Model Governance Framework proposed by the EU within its excellence-in-AI approach. Furthermore, reproducibility is a requirement for establishing causality in the interpretation of model results and for building trust towards the overwhelming expansion of AI system applications. Problems that have to be solved on the way to reproducibility, and ways to deal with some of them, are examined in this work.
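The paper's own deterministic-model source code is not reproduced here, but the core idea of controlling training nondeterminism can be illustrated with a minimal, hypothetical sketch: a stochastic "training step" that, once all its randomness is derived from a fixed seed, produces bit-identical results across runs. Real DL frameworks require additional settings beyond seeding (e.g., disabling nondeterministic GPU kernels), which this standard-library example does not cover.

```python
import random

def train_step(seed):
    """Stand-in for a stochastic training routine (hypothetical).

    All randomness is drawn from a local generator initialized with
    `seed`, so the simulated "weight updates" are reproducible:
    the same seed always yields the same values.
    """
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(5)]

run_a = train_step(seed=42)
run_b = train_step(seed=42)
run_c = train_step(seed=7)

# Identical seeds reproduce identical "updates"; a different seed does not.
assert run_a == run_b
assert run_a != run_c
```

In practice the same principle extends to DL frameworks: every source of randomness (weight initialization, data shuffling, dropout, parallel kernel scheduling) must be either seeded or replaced by a deterministic alternative for full run-to-run reproducibility.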



