Explanation-Based Tuning of Opaque Machine Learners with Application to Paper Recommendation

Research in human-centered AI has shown the benefits of machine-learning systems that can explain their predictions. Methods that allow users to tune a model in response to those explanations are similarly useful. While both capabilities are well-developed for transparent learning models (e.g., linear models and GA2Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, no method currently exists for tuning an opaque model in response to its explanations. This paper introduces LIMEADE, a general framework for tuning an arbitrary machine learning model based on an explanation of the model's prediction. We apply our framework to Semantic Sanity, a neural recommender system for scientific papers, and report on a detailed user study showing that our framework leads to significantly higher perceived user control, trust, and satisfaction.
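The first half of the pipeline the abstract describes, producing a LIME-style explanation of an opaque model by fitting a weighted linear surrogate around one input, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy model, the sampling scale, and all function names are assumptions for the sake of the example.

```python
import numpy as np

# Hypothetical "opaque" model: we can only query its predictions.
def opaque_predict(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

def lime_style_explain(x0, predict, n_samples=500, scale=0.1, seed=0):
    """Fit a locally weighted linear surrogate around x0 (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in a neighborhood of the instance x0.
    X = x0 + scale * rng.standard_normal((n_samples, x0.size))
    y = predict(X)
    # Proximity kernel: perturbations closer to x0 get higher weight.
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[1:]  # per-feature local importance weights

x0 = np.array([1.0, 1.0])
weights = lime_style_explain(x0, opaque_predict)
# Near x0 = (1, 1) the model behaves roughly like 2.5*x0 - 0.5*x1,
# so the surrogate should assign a positive weight to feature 0
# and a negative weight to feature 1.
```

A LIMEADE-style tuning step would then take user feedback on these per-feature weights (e.g., "this feature should matter less") and translate it into an update of the underlying model; the abstract does not specify that procedure, so it is not sketched here.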

Related research:

- Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System (05/26/2023)
- A general framework for scientifically inspired explanations in AI (03/02/2020)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations (12/17/2021)
- Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning (01/30/2022)
- Explaining Documents' Relevance to Search Queries (11/02/2021)
- An Evaluation of the Human-Interpretability of Explanation (01/31/2019)
- X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust (09/15/2019)