Explaining Predictions from Machine Learning Models: Algorithms, Users, and Pedagogy

09/12/2022
by Ana Lucic, et al.

Model explainability has become an important problem in machine learning (ML) due to the growing effect that algorithmic predictions have on people. Explanations can help users understand not only why ML models make certain predictions, but also how those predictions can be changed. In this thesis, we examine the explainability of ML models from three vantage points (algorithms, users, and pedagogy) and contribute several novel solutions to the explainability problem.

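To make the second kind of explanation concrete, here is a minimal, hypothetical sketch (not a method from the thesis): it trains a scikit-learn classifier and scales one influential feature of a single instance until the prediction flips, a naive counterfactual-style probe. The dataset, model, and single-feature search heuristic are illustrative assumptions only.

```python
# Naive counterfactual-style probe (illustrative only): perturb one feature
# of a single instance until a classifier's prediction changes.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]                                       # instance to explain
original = model.predict(x.reshape(1, -1))[0]  # original prediction

# Pick the feature with the largest impurity-based importance and scale it.
feature = int(np.argmax(model.feature_importances_))
for factor in np.linspace(0.5, 2.0, 16):
    x_cf = x.copy()
    x_cf[feature] = x[feature] * factor
    flipped = model.predict(x_cf.reshape(1, -1))[0]
    if flipped != original:
        print(f"Scaling feature {feature} by {factor:.2f} flips the "
              f"prediction from {original} to {flipped}")
        break
else:
    print("Scaling this single feature never flips the prediction.")
```

Real counterfactual explanation methods search over all features under distance and plausibility constraints; this single-feature scan only illustrates the underlying idea of showing how a prediction could be changed.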
