Ethically Collecting Multi-Modal Spontaneous Conversations with People that have Cognitive Impairments

09/30/2020
by Angus Addlesee, et al.

In order to make spoken dialogue systems (such as Amazon Alexa or Google Assistant) more accessible and naturally interactive for people with cognitive impairments, appropriate data must be obtainable. Recordings of multi-modal spontaneous conversations with vulnerable user groups are scarce, however, and this valuable data is challenging to collect. Researchers who call for this data are commonly inexperienced in the ethical and legal issues around working with vulnerable participants. Additionally, standard recording equipment is insecure and should not be used to capture sensitive data. We spent a year consulting experts on how to ethically capture and share recordings of multi-modal spontaneous conversations with vulnerable user groups. In this paper, we provide guidance, collated from these experts, on how to ethically collect such data, and we present a new system - "CUSCO" - to capture, transport and exchange sensitive data securely. This framework is intended to be easy to follow and implement, in order to encourage further publications of similar corpora. Using this guide and secure recording system, researchers can review and refine their ethical measures.
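The abstract does not describe CUSCO's internals, but its central requirement - that sensitive recordings are encrypted at the point of capture and only decryptable off-device - can be illustrated with a short, generic sketch. Everything below is an assumption for illustration only: the hybrid Fernet/RSA-OAEP scheme, the file layout, and the encrypt_recording helper are not CUSCO's published design.

    """
    Minimal sketch of encrypt-at-capture, assuming a hybrid scheme:
    each recording gets a fresh symmetric key, which is wrapped with
    the research team's RSA public key. Illustrative only; not the
    CUSCO implementation.
    """
    from pathlib import Path

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding


    def encrypt_recording(recording: Path, team_public_key_pem: Path) -> None:
        """Encrypt a captured recording and wrap its session key."""
        public_key = serialization.load_pem_public_key(
            team_public_key_pem.read_bytes()
        )

        # Fresh symmetric key per recording: losing one key never
        # exposes any other participant's data.
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(recording.read_bytes())

        # Wrap the session key so only the holder of the team's
        # private key (kept off the capture device) can recover it.
        wrapped_key = public_key.encrypt(
            session_key,
            padding.OAEP(
                mgf=padding.MGF1(algorithm=hashes.SHA256()),
                algorithm=hashes.SHA256(),
                label=None,
            ),
        )

        recording.with_suffix(".enc").write_bytes(ciphertext)
        recording.with_suffix(".key").write_bytes(wrapped_key)
        recording.unlink()  # remove the plaintext from the device

Under this (assumed) design, decryption requires the team's private key, which never resides on the capture device, so a lost or stolen recorder exposes no participant data during transport or exchange.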

Related research

PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts (05/24/2023)
Perceiving multi-modal information and fulfilling dialogues with humans ...

A Longitudinal Multi-modal Dataset for Dementia Monitoring and Diagnosis (09/03/2021)
Dementia is a family of neurodegenerative conditions affecting memory and ...

Multi-modal space structure: a new kind of latent correlation for multi-modal entity resolution (04/21/2018)
Multi-modal data is becoming more common than before because of big data...

VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social Communication (12/27/2021)
Emoticons are indispensable in online communications. With users' growin...

EgoCom: A Multi-person Multi-modal Egocentric Communications Dataset (03/13/2021)
Multi-modal datasets in artificial intelligence (AI) often capture a thi...

EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios (04/06/2022)
We present the Eyetracked Multi-Modal Translation (EMMT) corpus, a datas...
