Warning Signs in Communicating the Machine Learning Detection Results of Misinformation with Individuals

by Limeng Cui et al.

With the prevalence of misinformation online, researchers have focused on developing machine learning algorithms to detect fake news. However, users' perception of machine learning outcomes and their related behaviors have been widely ignored. This paper aims to bridge that gap by studying how to communicate the detection results of machine learning to users and aid their decisions in handling misinformation. We conducted an online experiment to evaluate the effect of a proposed machine learning warning sign against a control condition, examining participants' detection and sharing of news. The data showed that the warning sign had no significant effect on participants' trust in fake news. However, participants' uncertainty about the authenticity of the news dropped in the presence of the machine learning warning sign. We also found that social media experience affected users' trust in fake news, and that age and social media experience affected users' sharing decisions. The results therefore indicate that many factors affecting people's trust in news are worth studying. Moreover, a warning sign communicating machine learning detection results differs from ordinary warnings and needs more detailed research and design. These findings hold important implications for the design of machine learning warnings.

