Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency

06/03/2023
by Owen Queen, et al.

Interpreting time series models is uniquely challenging because it requires identifying both the location of time series signals that drive model predictions and their matching to an interpretable temporal pattern. While explainers from other modalities can be applied to time series, their inductive biases do not transfer well to the inherently uninterpretable nature of time series. We present TimeX, a time series consistency model for training explainers. TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model. It addresses the issue of model faithfulness by introducing model behavior consistency, a novel formulation that matches relations in the latent space induced by the pretrained model with relations in the latent space induced by TimeX. TimeX provides discrete attribution maps and, unlike existing interpretability methods, learns a latent space of explanations that can be used in various ways, such as providing landmarks to visually aggregate similar explanations and easily recognize temporal patterns. We evaluate TimeX on 8 synthetic and real-world datasets, compare its performance against state-of-the-art interpretability methods, and conduct case studies using physiological time series. Quantitative evaluations demonstrate that TimeX achieves the highest or second-highest performance on every metric compared to baselines across all datasets. Through case studies, we demonstrate that the novel components of TimeX have potential for training faithful, interpretable models that capture the behavior of pretrained time series models.
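To make the idea of model behavior consistency concrete, the sketch below illustrates one plausible way such an objective could be written: it aligns the pairwise cosine-similarity structure of the frozen pretrained model's latent space with that of the surrogate explainer's latent space. This is a minimal, hypothetical implementation for illustration only; the function name `behavior_consistency_loss`, the cosine-similarity relations, and the MSE penalty are assumptions and may differ from the exact TimeX formulation.

```python
# Hypothetical sketch of a "model behavior consistency"-style loss.
# Assumption: consistency is encouraged by aligning pairwise cosine
# similarities between the frozen pretrained model's embeddings and the
# interpretable surrogate's embeddings; the actual TimeX objective may differ.
import torch
import torch.nn.functional as F


def behavior_consistency_loss(z_pretrained: torch.Tensor,
                              z_surrogate: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the relational structure of two latent spaces.

    z_pretrained: (batch, d1) embeddings from the frozen reference model.
    z_surrogate:  (batch, d2) embeddings from the interpretable surrogate.
    """
    # Pairwise cosine-similarity matrices capture "relations" within each space.
    zp = F.normalize(z_pretrained, dim=-1)
    zs = F.normalize(z_surrogate, dim=-1)
    sim_p = zp @ zp.t()  # (batch, batch) relations in the pretrained latent space
    sim_s = zs @ zs.t()  # (batch, batch) relations in the surrogate latent space
    # Consistency: the surrogate should preserve the reference model's relations.
    return F.mse_loss(sim_s, sim_p)


if __name__ == "__main__":
    # Placeholder embeddings standing in for the two models' outputs on a batch
    # of time series; dimensions are arbitrary.
    z_ref = torch.randn(32, 128)
    z_exp = torch.randn(32, 64)
    print(behavior_consistency_loss(z_ref, z_exp).item())
```

In a full training loop, a loss of this kind would be combined with the terms that produce the discrete attribution maps, so that the surrogate remains faithful to the pretrained model while exposing which time steps drive its predictions.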


