TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues

by Dylan Slack, et al.

Machine Learning (ML) models are increasingly used to make critical decisions in real-world applications, yet they have also become more complex, making them harder to understand. To this end, several techniques to explain model predictions have been proposed. However, practitioners struggle to leverage explanations because they often do not know which to use, how to interpret the results, and may have insufficient data science experience to obtain explanations. In addition, most current works focus on generating one-shot explanations and do not allow users to follow up and ask fine-grained questions about the explanations, which can be frustrating. In this work, we address these challenges by introducing TalkToModel: an open-ended dialogue system for understanding machine learning models. Specifically, TalkToModel comprises three key components: 1) a natural language interface for engaging in dialogues, making understanding ML models highly accessible, 2) a dialogue engine that adapts to any tabular model and dataset, interprets natural language, maps it to appropriate operations (e.g., feature importance explanations, counterfactual explanations, showing model errors), and generates text responses, and 3) an execution component that runs the operations and ensures explanations are accurate. We carried out quantitative and human subject evaluations of TalkToModel. We found the system understands user questions on novel datasets and models with high accuracy, demonstrating the system's capacity to generalize to new situations. In human evaluations, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems, and 84.6% agreed TalkToModel was easier to use.
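The abstract describes a parse-then-execute pipeline: the dialogue engine maps a user utterance to an explanation operation, and the execution component runs it. The sketch below is a deliberately simplified, hypothetical illustration of that loop; TalkToModel itself parses language with a fine-tuned language model, not the keyword matching used here, and all function names are invented for illustration.

```python
# Hypothetical sketch of the parse -> execute loop described in the abstract.
# TalkToModel uses a learned parser; this keyword matcher is a stand-in
# meant only to show the two-stage structure (dialogue engine + execution).

def parse_utterance(utterance):
    """Dialogue engine (toy version): map a question to an operation name."""
    text = utterance.lower()
    if "important" in text:
        return "feature_importance"
    if "change" in text or "instead" in text:
        return "counterfactual"
    if "wrong" in text or "error" in text:
        return "show_errors"
    return "unknown"

def run_operation(operation):
    """Execution component (stubbed): dispatch the parsed operation."""
    responses = {
        "feature_importance": "Computing feature importance explanations...",
        "counterfactual": "Generating counterfactual explanations...",
        "show_errors": "Listing instances the model misclassifies...",
        "unknown": "Sorry, could you rephrase that?",
    }
    return responses[operation]

# Example: a follow-up question routed through both stages.
print(run_operation(parse_utterance("Which features are most important?")))
```

In the real system the execution stage would call explanation libraries against the user's tabular model and dataset, then render the results as natural-language text.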



