Interpretable Unified Language Checking

04/07/2023
by Tianhua Zhang, et al.

Despite recent concerns about undesirable behaviors generated by large language models (LLMs), including non-factual, biased, and hateful language, we find that LLMs are inherently multi-task language checkers, drawing on their latent representations of natural and social knowledge. We present an interpretable, unified language-checking (UniLC) method for both human- and machine-generated language that aims to check whether language input is factual and fair. While fact-checking and fairness tasks have previously been handled by separate, dedicated models, we find that LLMs can achieve high performance on a combination of fact-checking, stereotype detection, and hate-speech detection tasks with a simple, few-shot, unified set of prompts. With the “1/2-shot” multi-task language-checking method proposed in this work, the GPT3.5-turbo model outperforms fully supervised baselines on several language tasks. The simplicity of the approach and the strength of the results suggest that, based on strong latent knowledge representations, an LLM can serve as an adaptive and explainable tool for detecting misinformation, stereotypes, and hate speech.
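The unified prompting idea can be illustrated with a short sketch: a single few-shot template that asks the model to judge any input for factuality and fairness at once, rather than routing to task-specific models. The template wording, demonstrations, and labels below are illustrative placeholders, not the paper's actual prompts.

```python
# Hypothetical few-shot demonstrations covering the unified task.
# These examples are invented for illustration only.
FEW_SHOT_EXAMPLES = [
    ("The Great Wall of China is visible from the Moon with the naked eye.",
     "not ok", "This is a common misconception; the wall is not visible unaided."),
    ("Water boils at 100 degrees Celsius at sea level.",
     "ok", "This is a well-established physical fact."),
]

def build_unified_prompt(claim: str) -> str:
    """Assemble one prompt that asks the model whether the input is
    factual and fair, with a short explanation for interpretability."""
    lines = [
        "Check whether the following statement is factual and fair "
        "(no misinformation, stereotypes, or hate speech).",
        "Answer 'ok' or 'not ok' and give a short explanation.",
        "",
    ]
    for text, label, explanation in FEW_SHOT_EXAMPLES:
        lines.append(f"Statement: {text}")
        lines.append(f"Judgment: {label}")
        lines.append(f"Explanation: {explanation}")
        lines.append("")
    # The model completes from here with a judgment and explanation.
    lines.append(f"Statement: {claim}")
    lines.append("Judgment:")
    return "\n".join(lines)

prompt = build_unified_prompt("All programmers are antisocial.")
```

The same template serves fact-checking, stereotype detection, and hate-speech detection inputs, which is the sense in which the checking is "unified"; the resulting string would be sent to a model such as GPT3.5-turbo.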

Related research

Towards Few-Shot Fact-Checking via Perplexity (03/17/2021)
Few-shot learning has drawn researchers' attention to overcome the probl...

Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models (05/24/2023)
Fact-checking is an essential task in NLP that is commonly utilized for ...

Generating Fact Checking Explanations (04/13/2020)
Most existing work on automated fact checking is concerned with predicti...

Multi-task Recurrent Model for Speech and Speaker Recognition (03/31/2016)
Although highly correlated, speech and speaker recognition have been reg...

FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking (09/01/2023)
Automatic fact-checking plays a crucial role in combating the spread of ...

Active PETs: Active Data Annotation Prioritisation for Few-Shot Claim Verification with Pattern Exploiting Training (08/18/2022)
To mitigate the impact of data scarcity on fact-checking systems, we foc...
