Robots Enact Malignant Stereotypes

07/23/2022
by   Andrew Hundt, et al.
1

Stereotypes, bias, and discrimination have been extensively documented in Machine Learning (ML) methods such as Computer Vision (CV) [18, 80], Natural Language Processing (NLP) [6], or both, in the case of large image and caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that physically and autonomously act within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on the surface which vary across race and gender, alongside task descriptions that contain terms associated with common stereotypes. Our experiments definitively show robots acting out toxic stereotypes with respect to gender, race, and scientifically-discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize Women and People of Color. Our interdisciplinary sociotechnical analysis synthesizes across fields and applications such as Science Technology and Society (STS), Critical Studies, History, Safety, Robotics, and AI. We find that robots powered by large datasets and Dissolution Models (sometimes called "foundation models", e.g. CLIP) that contain humans risk physically amplifying malignant stereotypes in general; and that merely correcting disparities will be insufficient for the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes and the potential of new interdisciplinary research on topics like Identity Safety Assessment Frameworks and Design Justice to better understand and address these harms.

READ FULL TEXT

page 2

page 11

page 12

page 21

page 23

page 30

research
06/21/2019

Mitigating Gender Bias in Natural Language Processing: Literature Review

As Natural Language Processing (NLP) and Machine Learning (ML) tools ris...
research
10/01/2021

Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens

Much of the world's population experiences some form of disability durin...
research
01/21/2022

Gender Bias in Text: Labeled Datasets and Lexicons

Language has a profound impact on our thoughts, perceptions, and concept...
research
04/17/2022

Using HCI to Tackle Race and Gender Bias in ADHD Diagnosis

Attention Deficit Hyperactivity Disorder (ADHD) is a behavioral disorder...
research
04/17/2022

QTBIPOC PD: Exploring the Intersections of Race, Gender, and Sexual Orientation in Participatory Design

As Human-Computer Interaction (HCI) research aims to be inclusive and re...
research
01/14/2020

Robot Rights? Let's Talk about Human Welfare Instead

The 'robot rights' debate, and its related question of 'robot responsibi...
research
06/01/2023

Low Voltage Electrohydraulic Actuators for Untethered Robotics

Rigid robots can be precise in repetitive tasks but struggle in unstruct...

Please sign up or login with your details

Forgot password? Click here to reset