ExClaim: Explainable Neural Claim Verification Using Rationalization

by Sai Gurrapu, et al.

With the advent of deep learning, text generation language models have improved dramatically and can now produce text nearly indistinguishable from human-written text. This enables rampant misinformation, since content can be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often rely on mainstream news as evidence sources, which can be strongly biased toward a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes: their decision-making process and the steps they took to arrive at a final prediction are obfuscated from the user. We introduce a novel claim verification approach, ExClaim, that provides an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to produce a verdict for the claim and justifies that verdict through a natural language explanation (rationale) describing the model's decision-making process. ExClaim treats verdict classification as a question-answering problem and achieves an F1 score of 0.93. It also provides subtask explanations to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring that claim verification systems are assured, rational, and explainable is an essential step toward improving human-AI trust and the accessibility of black-box systems.
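To make the rationalization idea concrete, here is a minimal, self-contained sketch of the general pattern the abstract describes: extract a rationale (a piece of evidence supporting the decision) and base the verdict on it. This is a hypothetical toy using lexical overlap and a negation heuristic, not the ExClaim model, which uses neural question-answering; all function names here are illustrative.

```python
def tokenize(text):
    """Lowercased word set with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def extract_rationale(claim, evidence_sentences):
    """Rationale extraction stand-in: pick the evidence sentence with the
    highest Jaccard word overlap with the claim. A neural rationalizer
    would learn this selection instead."""
    claim_words = tokenize(claim)
    def overlap(sent):
        words = tokenize(sent)
        return len(claim_words & words) / max(len(claim_words | words), 1)
    return max(evidence_sentences, key=overlap)

def verify(claim, evidence_sentences):
    """Return (verdict, rationale). Toy verdict rule: refute when the
    rationale and the claim disagree on a negation word."""
    negations = {"not", "no", "never", "false"}
    rationale = extract_rationale(claim, evidence_sentences)
    disagrees = bool(negations & (tokenize(rationale) ^ tokenize(claim)))
    verdict = "REFUTED" if disagrees else "SUPPORTED"
    return verdict, rationale
```

The key property mirrored here is that the verdict is justified by an explicit rationale returned alongside it, rather than emerging from an opaque end-to-end prediction.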


DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification

Recently, many methods discover effective evidence from reliable sources...

Towards Explainable Fact Checking

The past decade has seen a substantial rise in the amount of mis- and di...

Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence

Typical fact verification models use retrieved written evidence to verif...

ProoFVer: Natural Logic Theorem Proving for Fact Verification

We propose ProoFVer, a proof system for fact verification using natural ...

Explaining the Deep Natural Language Processing by Mining Textual Interpretable Features

Despite the high accuracy offered by state-of-the-art deep natural-langu...

SIDU: Similarity Difference and Uniqueness Method for Explainable AI

A new brand of technical artificial intelligence (Explainable AI) rese...

Natural Language Deduction with Incomplete Information

A growing body of work studies how to answer a question or verify a clai...
