In-Distribution Interpretability for Challenging Modalities

07/01/2020
by Cosmas Heiß, et al.

It is widely recognized that the predictions of deep neural networks are difficult to interpret relative to those of simpler approaches. However, methods for investigating how such models operate have advanced rapidly over the past few years. Recent work introduced an intuitive framework that uses generative models to make such explanations more meaningful. In this work, we demonstrate the flexibility of this method by interpreting diverse and challenging modalities: music and physical simulations of urban environments.
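
The paper's exact formulation is not reproduced on this page, but the following minimal PyTorch-style sketch illustrates the general idea behind generative-model-based, in-distribution explanations: a sparse relevance mask keeps part of the input fixed while a pretrained generator completes the rest, and the mask is optimized so that the classifier's prediction is preserved. The function name, objective, and hyperparameters (explain, classifier, generator, latent_dim, sparsity) are illustrative assumptions, not the authors' implementation.

import torch

def explain(classifier, generator, x, latent_dim=64, steps=500, sparsity=1e-3):
    # Hypothetical sketch: optimize a relevance mask s and a latent code z so that
    # the masked-in parts of x, completed by the generator elsewhere, keep the
    # classifier's prediction stable while the mask stays sparse.
    for p in list(classifier.parameters()) + list(generator.parameters()):
        p.requires_grad_(False)                                     # freeze both networks
    target = classifier(x).detach()
    s = torch.zeros_like(x, requires_grad=True)                     # mask logits
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)     # latent code for the generator
    opt = torch.optim.Adam([s, z], lr=1e-2)
    for _ in range(steps):
        mask = torch.sigmoid(s)
        # In-distribution perturbation: components outside the mask are filled
        # in by the generative model rather than by unstructured noise.
        x_pert = mask * x + (1 - mask) * generator(z)
        distortion = ((classifier(x_pert) - target) ** 2).mean()
        loss = distortion + sparsity * mask.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(s).detach()                                 # relevance scores in [0, 1]

A mask value near 1 marks components the classifier relies on; because the unmasked components are completed by the generator, the perturbed inputs stay close to the data manifold, which is what makes such explanations "in-distribution".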

Related research

Weight of Evidence as a Basis for Human-Oriented Explanations (10/29/2019)
Interpretability is an elusive but highly sought-after characteristic of...

On the Quantitative Analysis of Decoder-Based Generative Models (11/14/2016)
The past several years have seen remarkable progress in generative model...

Joint Multimodal Learning with Deep Generative Models (11/07/2016)
We investigate deep generative models that can exchange multiple modalit...

Towards Robust Interpretability with Self-Explaining Neural Networks (06/20/2018)
Most recent work on interpretability of complex machine learning models ...

Learning Explanations from Language Data (08/13/2018)
PatternAttribution is a recent method, introduced in the vision domain, ...

Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA (12/30/2020)
While research on explaining predictions of open-domain QA systems (ODQA...