Don't Get Me Wrong: How to apply Deep Visual Interpretations to Time Series

03/14/2022
by Christoffer Loeffler, et al.

The correct interpretation and understanding of deep learning models is essential in many applications. Explanatory visual interpretation approaches for image and natural language processing allow domain experts to validate and understand almost any deep learning model. However, they fall short when generalizing to arbitrary time series data, which is less intuitive and more diverse. Whether a visualization explains the true reasoning or captures the real features is difficult to judge. Hence, instead of blind trust, we need an objective evaluation that yields reliable quality metrics. We propose a framework of six orthogonal metrics for gradient- or perturbation-based post-hoc visual interpretation methods designed for time series classification and segmentation tasks. An experimental study covers popular neural network architectures for time series and nine visual interpretation methods. We evaluate the visual interpretation methods on diverse datasets from the UCR repository and on a complex real-world dataset, and study the influence of common regularization techniques during training. We show that no single method consistently outperforms the others across all metrics, although some lead on individual metrics. Our insights and recommendations allow experts to make informed choices of suitable visualization techniques for the model and task at hand.
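To make the two families of interpretation methods named in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's code or framework): a gradient saliency map for a toy PyTorch 1D-CNN time-series classifier, plus a crude perturbation-based faithfulness check that zeroes out the most salient timesteps and measures the drop in the target class score. The model, data shapes, function names, and the zero-baseline perturbation are assumptions made for illustration only.

```python
# Minimal illustrative sketch (not from the paper): gradient saliency and a
# crude perturbation-based faithfulness check for a toy 1D-CNN classifier.
# All names, shapes, and the zero baseline are assumptions.
import torch
import torch.nn as nn


class TinyTSClassifier(nn.Module):
    """Toy 1D-CNN for time series of shape (batch, channels, length)."""

    def __init__(self, in_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))


def gradient_saliency(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Absolute gradient of the target class score w.r.t. every input timestep."""
    model.eval()
    x = x.detach().clone().requires_grad_(True)
    model(x)[:, target].sum().backward()
    return x.grad.abs()


def deletion_check(model: nn.Module, x: torch.Tensor, saliency: torch.Tensor,
                   target: int, k: int = 10) -> float:
    """Zero out the k most salient timesteps and report the drop in the target score."""
    with torch.no_grad():
        base = model(x)[:, target].item()
        order = saliency.sum(dim=1).argsort(descending=True)[0, :k]  # rank timesteps
        x_pert = x.clone()
        x_pert[:, :, order] = 0.0
        return base - model(x_pert)[:, target].item()


if __name__ == "__main__":
    model = TinyTSClassifier(in_channels=3, n_classes=4)
    series = torch.randn(1, 3, 128)                   # one multivariate series
    sal = gradient_saliency(model, series, target=2)  # shape (1, 3, 128)
    print(deletion_check(model, series, sal, target=2))
```

A larger score drop when the most salient timesteps are occluded is one common proxy for faithfulness; the framework proposed in the paper goes beyond any single such check by combining six orthogonal metrics.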

Related research

05/23/2023 · Interpretation of Time-Series Deep Models: A Survey
Deep learning models developed for time-series associated tasks have bec...

02/08/2018 · TSViz: Demystification of Deep Learning Models for Time-Series Analysis
This paper presents a novel framework for demystification of convolution...

11/17/2020 · Impact of Accuracy on Model Interpretations
Model interpretations are often used in practice to extract real world i...

05/23/2023 · Towards credible visual model interpretation with path attribution
Originally inspired by game-theory, path attribution framework stands ou...

02/11/2022 · InterpretTime: a new approach for the systematic evaluation of neural-network interpretability in time series classification
We present a novel approach to evaluate the performance of interpretabil...

11/11/2022 · Does Deep Learning REALLY Outperform Non-deep Machine Learning for Clinical Prediction on Physiological Time Series?
Machine learning has been widely used in healthcare applications to appr...

11/12/2021 · Soft Sensing Model Visualization: Fine-tuning Neural Network from What Model Learned
The growing availability of the data collected from smart manufacturing ...
