The Conflict Between Explainable and Accountable Decision-Making Algorithms

by Gabriel Lima, et al.

Decision-making algorithms are increasingly used in consequential decisions, such as determining who should be enrolled in health care programs or who should be hired. Even though these systems are deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether, and to what extent, explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems providing post-hoc explanations could come to be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those subjected to algorithmic decisions (e.g., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.
