Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure

09/19/2018
by Besmira Nushi, et al.

As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high-priority problem. Accountability requires deep understanding of system behavior and its failures. Current evaluation methods such as single-score error metrics and confusion matrices provide aggregate views of system performance that hide important shortcomings. Understanding details about failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and specifying appropriate human oversight and engagement. Characterization of failures and shortcomings is particularly complex for systems composed of multiple machine-learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationship among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.
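
To make the idea concrete, below is a minimal sketch, not the paper's actual implementation, of how human-labeled content features and system-generated signals could be combined to summarize failure conditions with an interpretable model. The feature names, the toy data, and the choice of a shallow decision tree are illustrative assumptions.

```python
# Illustrative sketch only: summarizing failure conditions of an image-captioning
# pipeline with an interpretable model. Feature names and data are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Per-image observations: human-labeled content features plus internal signals
# from system components (e.g., visual-detector confidence), with a human
# judgment of whether the final caption was a failure.
examples = pd.DataFrame({
    "contains_people":     [1, 0, 1, 0, 1, 1, 0, 0],
    "is_indoor_scene":     [0, 1, 1, 1, 0, 0, 1, 0],
    "detector_confidence": [0.9, 0.4, 0.3, 0.8, 0.2, 0.95, 0.5, 0.7],
    "caption_is_failure":  [0, 1, 1, 0, 1, 0, 1, 0],
})

features = ["contains_people", "is_indoor_scene", "detector_confidence"]
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(examples[features], examples["caption_is_failure"])

# Print the learned rules as a human-readable summary of when the pipeline fails.
print(export_text(tree, feature_names=features))
```

The printed rules act as a compact, human-readable description of when the pipeline tends to fail (for example, low detector confidence on a particular class of inputs), which is the kind of detailed performance view the abstract argues for.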

