The CARE Dataset for Affective Response Detection

01/28/2022
by Jane A. Yu, et al.

Social media plays an increasing role in how we communicate with friends and family and how we consume information and entertainment. To design effective ranking functions for social media posts, it would therefore be useful to predict the affective response a post will elicit (e.g., whether the reader is likely to be humored, inspired, angered, or informed). As with work on emotion recognition (which focuses on the affect of the post's author), the traditional approach to recognizing affective response requires an expensive investment in human-annotated training data. We introduce CARE_db, a dataset of 230k social media posts annotated with 7 affective responses using the Common Affective Response Expression (CARE) method. The CARE method leverages the signal present in comments posted in response to a post, providing high-precision evidence about readers' affective responses without human annotation. Unlike human annotation, the annotation process we describe can be iterated upon to expand the method's coverage, particularly for new affective responses. We present experiments demonstrating that CARE annotations compare favorably with crowd-sourced annotations. Finally, we use CARE_db to train competitive BERT-based models for predicting affective response as well as for emotion detection, demonstrating the utility of the dataset for related tasks.
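The core idea of mining comments for affective signal can be illustrated with a minimal sketch. This is not the authors' implementation: the lexicon below is hypothetical and tiny, and the real CARE method uses a far richer set of expression patterns; the sketch only shows the general shape of labeling a post from its comments.

```python
import re
from collections import Counter

# Hypothetical mini-lexicon mapping affective responses to comment patterns.
CARE_PATTERNS = {
    "amused":   [r"\blol\b", r"\bhaha+\b", r"made me laugh"],
    "inspired": [r"so inspiring", r"\bmotivat(?:ing|ed)\b"],
    "angered":  [r"this is outrageous", r"makes me so angry"],
}

def label_post(comments, min_votes=2):
    """Return affective-response labels supported by at least `min_votes` comments."""
    votes = Counter()
    for comment in comments:
        text = comment.lower()
        for label, patterns in CARE_PATTERNS.items():
            # Each comment contributes at most one vote per label.
            if any(re.search(p, text) for p in patterns):
                votes[label] += 1
    return sorted(label for label, n in votes.items() if n >= min_votes)

comments = ["Haha this made me laugh so hard", "lol", "So inspiring!"]
print(label_post(comments))  # ['amused']
```

Requiring multiple independent comment "votes" per label is one simple way to trade recall for the high precision the abstract describes.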
