Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection

12/31/2020
by Bertie Vidgen, et al.

We present a first-of-its-kind large synthetic training dataset for online hate classification, created from scratch with trained annotators over multiple rounds of dynamic data collection. We provide a 40,623-example dataset with annotations for fine-grained labels, including a large number of challenging contrastive perturbation examples. Unusually for an abusive content dataset, 54% of its entries are hateful. We show that model performance and robustness can be greatly improved using the dynamic data collection paradigm. The model error rate decreased across rounds, from 72.1% in the first round to 35.8% in the final round, showing that the model became increasingly harder to trick – even though content became progressively more adversarial as annotators became more experienced. Hate speech detection is an important and subtle problem that is still very challenging for existing AI methods. We hope that the models, dataset and dynamic system that we present here will help improve current approaches, having a positive social impact.
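The dynamic data collection paradigm described above is a human-and-model-in-the-loop cycle: train a model, have trained annotators write content intended to trick it, measure how often they succeed (the per-round error rate quoted above), fold the verified examples back into the training pool, and retrain for the next round. The Python sketch below illustrates that loop under stated assumptions; the stand-in classifier (TF-IDF plus logistic regression) and the function names (train_model, get_annotator_submissions, run_round) are illustrative, not the authors' actual models or pipeline.

    # Minimal sketch of one round of dynamic (human-and-model-in-the-loop)
    # data collection. All names here are illustrative stand-ins.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline


    def train_model(texts, labels):
        # Simple stand-in hate classifier; the paper uses stronger models.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(texts, labels)
        return model


    def get_annotator_submissions(round_id):
        # Placeholder: in the real setup, trained annotators write new,
        # increasingly adversarial (text, label) pairs aimed at tricking
        # the current round's model.
        raise NotImplementedError("supply annotator-written (text, label) pairs")


    def run_round(round_id, train_texts, train_labels):
        model = train_model(train_texts, train_labels)
        submissions = get_annotator_submissions(round_id)  # list of (text, label)

        fooled = 0
        for text, label in submissions:
            if model.predict([text])[0] != label:
                fooled += 1  # annotator successfully tricked the model
            # Verified examples (fooling or not) join the training pool
            # used for the next round.
            train_texts.append(text)
            train_labels.append(label)

        # Per-round model error rate on annotator submissions
        # (e.g. 72.1% in the first round, 35.8% in the final round).
        error_rate = fooled / max(len(submissions), 1)
        return error_rate, train_texts, train_labels

Repeating run_round over several rounds, each seeded with the previous round's expanded training pool, is what drives the falling error rate reported above.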
