Risk-graded Safety for Handling Medical Queries in Conversational AI

10/02/2022
by Gavin Abercrombie, et al.

Conversational AI systems can behave unsafely when handling users' medical queries, with consequences that can be severe and could even lead to death. Systems therefore need to be capable both of recognising the seriousness of medical inputs and of producing responses with appropriate levels of risk. We create a corpus of human-written, English-language medical queries and the responses of different types of systems, and label it with both crowdsourced and expert annotations. While individual crowdworkers may be unreliable at grading the seriousness of the prompts, their aggregated labels agree with professional opinion to a greater extent on identifying medical queries and recognising the risk types posed by the responses. Results of classification experiments suggest that, while these tasks can be automated, caution should be exercised, as errors can be very serious.
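The abstract's observation that aggregated crowd labels track expert opinion better than individual ones can be illustrated with simple majority voting (a minimal sketch with invented labels and risk grades; the paper's actual annotation scheme and aggregation method may differ):

```python
from collections import Counter

def majority_vote(labels):
    # Return the most common label among one item's crowd annotations.
    return Counter(labels).most_common(1)[0][0]

# Hypothetical seriousness grades for three medical queries,
# each labelled by five crowdworkers (invented data for illustration).
crowd = [
    ["serious", "serious", "not_serious", "serious", "serious"],
    ["not_serious", "not_serious", "serious", "not_serious", "not_serious"],
    ["serious", "not_serious", "serious", "serious", "not_serious"],
]
expert = ["serious", "not_serious", "serious"]

aggregated = [majority_vote(item) for item in crowd]
agreement = sum(a == e for a, e in zip(aggregated, expert)) / len(expert)
```

Here every aggregated label matches the expert one even though several individual workers disagreed, which is the intuition behind comparing aggregated crowd judgements against professional opinion.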


