Longitudinal Distance: Towards Accountable Instance Attribution

08/23/2021
by Rosina O. Weber, et al.

Previous research in interpretable machine learning (IML) and explainable artificial intelligence (XAI) can be broadly categorized as either seeking interpretability in the agent's model (i.e., IML) or considering the context of the user in addition to the model (i.e., XAI). The former can be further divided into feature attribution and instance attribution methods. Example- or sample-based methods, such as those using or inspired by case-based reasoning (CBR), rely on various approaches to select instances, but the selected instances are not necessarily those responsible for an agent's decision. Furthermore, existing approaches have focused on interpretability and explainability but fall short when it comes to accountability. Inspired by case-based reasoning principles, this paper introduces a pseudo-metric we call Longitudinal distance and its use to attribute instances to a neural network agent's decision, which can potentially be used to build accountable CBR agents.
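The abstract does not define how Longitudinal distance is computed, so the sketch below only illustrates the general pattern of distance-based instance attribution that the paper builds on: rank training instances by a pseudo-metric in the agent's representation space and return the nearest ones as candidate attributions for a decision. The names `pseudo_metric` and `attribute_instances`, and the Euclidean placeholder, are assumptions for illustration, not the paper's actual metric.

```python
# Minimal sketch of distance-based instance attribution, assuming a trained
# neural network agent and an embedding of its inputs. The paper's
# Longitudinal distance is NOT reproduced here; `pseudo_metric` is a
# hypothetical placeholder (Euclidean distance in embedding space).
import numpy as np

def pseudo_metric(a: np.ndarray, b: np.ndarray) -> float:
    """Hypothetical stand-in for a pseudo-metric such as Longitudinal distance."""
    return float(np.linalg.norm(a - b))

def attribute_instances(query_embedding: np.ndarray,
                        train_embeddings: np.ndarray,
                        k: int = 5) -> list[int]:
    """Return indices of the k training instances closest to the query
    under the pseudo-metric, as candidate attributions for the decision."""
    distances = [pseudo_metric(query_embedding, e) for e in train_embeddings]
    return [int(i) for i in np.argsort(distances)[:k]]

# Usage (embeddings would come from the agent's network):
# top_k = attribute_instances(embed(x_query), embed(X_train), k=3)
```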

