Echoes of Biases: How Stigmatizing Language Affects AI Performance

05/17/2023
by Yizhi Liu, et al.

Electronic health records (EHRs) serve as an essential data source for the envisioned artificial intelligence (AI)-driven transformation in healthcare. However, clinician biases reflected in EHR notes can lead to AI models inheriting and amplifying these biases, perpetuating health disparities. This study investigates the impact of stigmatizing language (SL) in EHR notes on mortality prediction using a Transformer-based deep learning model and explainable AI (XAI) techniques. Our findings demonstrate that SL written by clinicians adversely affects AI performance, particularly for Black patients, highlighting SL as a source of racial disparity in AI model development. To explore an operationally efficient way to mitigate SL's impact, we investigate patterns in the generation of SL through a clinician collaboration network, identifying central clinicians as having a stronger impact on racial disparity in the AI model. We find that removing SL written by central clinicians is a more efficient bias-reduction strategy than eliminating all SL from the entire data corpus. This study provides actionable insights for responsible AI development and contributes to understanding clinician behavior and EHR note writing in healthcare.
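The targeted mitigation described above can be sketched in a few lines: rank clinicians by their centrality in a collaboration network, then drop SL-flagged notes only when they were written by the most central clinicians. Everything below is a hedged illustration, not the paper's implementation: the edge list, the degree-centrality choice, the tiny SL lexicon, and the note records are all invented for the example.

```python
from collections import defaultdict

# Toy collaboration network: an edge links two clinicians who wrote notes
# for the same patient (hypothetical pairs, not from the study's data).
edges = [("dr_a", "dr_b"), ("dr_a", "dr_c"), ("dr_a", "dr_d"),
         ("dr_b", "dr_c"), ("dr_d", "dr_e")]

# Degree centrality computed by hand: count edges touching each clinician.
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Treat the top-k clinicians by degree as "central" (k=1 here for brevity;
# the paper does not specify a cutoff in this abstract).
top_k = 1
central = {c for c, _ in sorted(degree.items(),
                                key=lambda kv: -kv[1])[:top_k]}

# Toy EHR notes with a naive SL flag from an illustrative two-term lexicon.
SL_TERMS = {"noncompliant", "drug-seeking"}
notes = [
    {"author": "dr_a", "text": "Patient is noncompliant with medication."},
    {"author": "dr_e", "text": "Patient reports improved sleep."},
]

def has_sl(text):
    """Flag a note if it contains any term from the SL lexicon."""
    return any(term in text.lower() for term in SL_TERMS)

# Targeted mitigation: drop SL notes only when authored by a central
# clinician, rather than scrubbing SL from the entire corpus.
filtered = [n for n in notes
            if not (n["author"] in central and has_sl(n["text"]))]
```

Here `dr_a` has the highest degree (3), so their SL-flagged note is removed while the peripheral clinician's note survives; in practice the SL detection, the network construction, and the centrality measure would each need far more care than this sketch suggests.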


