Towards Detection of Subjective Bias using Contextualized Word Embeddings

02/16/2020
by Tanvi Dadu, et al.

Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced into natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of 360k labeled instances drawn from Wikipedia edits that remove various instances of bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like BERT_large by a margin of 5.6 F1 points.
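The following is a minimal sketch (not the authors' released code) of the general setup the abstract describes: treating subjective bias detection as sentence-level binary classification with BERT-based models, then combining several models into a simple probability-averaging ensemble. The ensemble member names, label mapping, and the use of probability averaging are illustrative assumptions; the actual models would first be fine-tuned on the WNC sentences.

```python
# Sketch of BERT-based subjective-bias classification with a simple ensemble.
# Assumptions: models are fine-tuned on WNC beforehand; label 1 = "biased".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed ensemble members; the paper ensembles several BERT-based checkpoints.
MODEL_NAMES = ["bert-large-cased", "roberta-base"]


def predict_proba(model_name, sentences):
    """Return class probabilities (neutral vs. biased) from one model."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2  # binary task: neutral (0) vs. biased (1)
    )
    model.eval()
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)


def ensemble_predict(sentences):
    """Average per-model probabilities and take the argmax label."""
    probs = torch.stack([predict_proba(name, sentences) for name in MODEL_NAMES])
    return probs.mean(dim=0).argmax(dim=-1)


if __name__ == "__main__":
    print(ensemble_predict(["The senator's outrageous plan was rightly rejected."]))
```

Averaging softmax probabilities is one common way to ensemble fine-tuned transformers; other schemes (e.g., majority voting over predicted labels) are equally plausible readings of "BERT-based ensembles" here.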
