Characterizing Human Explanation Strategies to Inform the Design of Explainable AI for Building Damage Assessment

11/04/2021
by Donghoon Shin, et al.

Explainable AI (XAI) is a promising means of supporting human-AI collaboration in high-stakes visual detection tasks, such as detecting building damage from satellite imagery, where fully automated approaches are unlikely to be perfectly safe and reliable. However, most existing XAI techniques are not informed by an understanding of humans' task-specific needs for explanations. As a first step toward understanding what forms of XAI humans require in damage detection tasks, we conducted an online crowdsourced study of how people explain their own assessments when evaluating the severity of building damage from satellite imagery. Through this study with 60 crowdworkers, we surfaced six major strategies that humans use to explain their visual damage assessments. We present implications of our findings for the design of XAI methods in such visual detection contexts and discuss opportunities for future research.
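For context on what "existing XAI techniques" typically offer in this setting, the sketch below applies Grad-CAM, a widely used pixel-attribution method, to a convolutional image classifier. This is a minimal illustration under stated assumptions, not the paper's method: the pre-trained ResNet-50, the class index, and the image path are hypothetical stand-ins for an actual building-damage classifier and its inputs.

```python
# Minimal Grad-CAM sketch (assumptions: ResNet-50 stands in for a damage
# classifier; the target class index and image path are hypothetical).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block to capture feature maps and their gradients.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def grad_cam(image_path: str, target_class: int) -> torch.Tensor:
    """Return a [224, 224] heatmap of regions that drive the target prediction."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    scores = model(x)
    model.zero_grad()
    scores[0, target_class].backward()

    # Weight each feature map by its average gradient, combine, and rescale.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # [1, C, 1, 1]
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # [1, h, w]
    cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                        mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Hypothetical usage: visualize why a tile is scored as "severely damaged".
# heatmap = grad_cam("post_disaster_tile.png", target_class=2)
```

Saliency maps like this answer "where did the model look", which, as the abstract notes, may not match the task-specific explanation strategies humans actually use when assessing damage.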


Related research

08/06/2022 · Multi-view deep learning for reliable post-disaster damage classification
This study aims to enable more reliable automated post-disaster building...

04/12/2020 · Building Disaster Damage Assessment in Satellite Imagery with Multi-Temporal Fusion
Automatic change detection and disaster damage assessment are currently ...

06/21/2023 · Rapid building damage assessment workflow: An implementation for the 2023 Rolling Fork, Mississippi tornado event
Rapid and accurate building damage assessments from high-resolution sate...

08/03/2022 · DAHiTrA: Damage Assessment Using a Novel Hierarchical Transformer Architecture
This paper presents DAHiTrA, a novel deep-learning model with hierarchic...

04/10/2023 · Explanation Strategies for Image Classification in Humans vs. Current Explainable AI
Explainable AI (XAI) methods provide explanations of AI models, but our ...

11/20/2020 · Assessing out-of-domain generalization for robust building damage detection
An important step for limiting the negative impact of natural disasters ...

07/28/2022 · Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables
In many real world contexts, successful human-AI collaboration requires ...
