Peeking inside the Black Box: Interpreting Deep Learning Models for Exoplanet Atmospheric Retrievals

11/23/2020
by Kai Hou Yip, et al.

Deep learning algorithms are growing in popularity in the field of exoplanetary science due to their ability to model highly non-linear relations and solve interesting problems in a data-driven manner. Several works have attempted to perform fast retrievals of atmospheric parameters with the use of machine learning algorithms like deep neural networks (DNNs). Yet, despite their high predictive power, DNNs are also infamous for being 'black boxes'. It is their apparent lack of explainability that makes the astrophysics community reluctant to adopt them. What are their predictions based on? How confident should we be in them? When are they wrong, and how wrong can they be? In this work, we present a number of general evaluation methodologies that can be applied to any trained model and answer questions like these. In particular, we train three different popular DNN architectures to retrieve atmospheric parameters from exoplanet spectra and show that all three achieve good predictive performance. We then present an extensive analysis of the predictions of DNNs, which can inform us, among other things, of the credibility limits for atmospheric parameters for a given instrument and model. Finally, we perform a perturbation-based sensitivity analysis to identify the features of the spectrum to which the outcome of the retrieval is most sensitive. We conclude that, for different molecules, the wavelength ranges to which the DNN's predictions are most sensitive do indeed coincide with their characteristic absorption regions. The methodologies presented in this work help to improve the evaluation of DNNs and to grant interpretability to their predictions.
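The core idea of a perturbation-based sensitivity analysis is simple: nudge one input feature (here, one wavelength bin of the spectrum) at a time and measure how much the model's output changes. The sketch below illustrates this with a toy stand-in for a trained retrieval network; the function names, the Gaussian perturbation scale, and the averaging scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def perturbation_sensitivity(model, spectrum, sigma=0.01, n_trials=100, seed=0):
    """Estimate per-bin sensitivity of `model` to the input spectrum.

    For each wavelength bin, add Gaussian noise (std `sigma`) to that bin
    only and record the mean absolute change in the model's output over
    `n_trials` perturbations. Returns one sensitivity score per bin.
    """
    rng = np.random.default_rng(seed)
    baseline = model(spectrum)
    sensitivity = np.zeros(spectrum.shape[0])
    for i in range(spectrum.shape[0]):
        deltas = []
        for _ in range(n_trials):
            perturbed = spectrum.copy()
            perturbed[i] += rng.normal(0.0, sigma)  # perturb a single bin
            deltas.append(np.abs(model(perturbed) - baseline).mean())
        sensitivity[i] = np.mean(deltas)
    return sensitivity

# Toy "retrieval model" whose output depends only on bins 2 and 5,
# standing in for a trained DNN mapping spectra to atmospheric parameters.
toy_model = lambda x: np.array([2.0 * x[2] + x[5]])
spectrum = np.ones(8)
s = perturbation_sensitivity(toy_model, spectrum)
# Bins 2 and 5 dominate the sensitivity profile; the rest score zero.
```

In practice the model would be the trained DNN, and plotting the per-bin scores against wavelength reveals which spectral regions (e.g. molecular absorption bands) drive the retrieval.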


