Counterfactual Adversarial Learning with Representation Interpolation

09/10/2021
by   Wei Wang, et al.
University of Illinois at Urbana-Champaign

Deep learning models tend to favor statistical fitting over logical reasoning. When the training data contains statistical bias, spurious correlations may be memorized, which severely limits model performance, especially in small-data scenarios. In this work, we introduce the Counterfactual Adversarial Training (CAT) framework to tackle this problem from a causality perspective. For a given sample, CAT first generates a counterfactual representation through latent-space interpolation in an adversarial manner, and then performs Counterfactual Risk Minimization (CRM) on each original-counterfactual pair to adjust the sample-wise loss weight dynamically, which encourages the model to explore the true causal effect. Extensive experiments demonstrate that CAT achieves substantial performance improvements over the state of the art across different downstream tasks, including sentence classification, natural language inference, and question answering.
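The two-step procedure in the abstract (adversarially chosen latent-space interpolation, followed by a CRM-style reweighted loss over each original-counterfactual pair) can be sketched roughly as follows. This is a minimal illustration only: the function names, the logistic-loss setup, the grid search standing in for a gradient-based adversarial step, and the particular weighting rule are all assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, h, y):
    """Binary cross-entropy of a linear classifier on representation h."""
    p = sigmoid(h @ w)
    return -(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))

def make_counterfactual(w, h_i, h_j, y_i, lams=np.linspace(0.5, 1.0, 11)):
    """Adversarial interpolation sketch: among convex combinations of h_i
    and a second representation h_j, pick the one that maximizes the loss
    w.r.t. h_i's label. (The paper searches adversarially in latent space;
    a coarse grid over lambda stands in for that search here.)"""
    candidates = [lam * h_i + (1.0 - lam) * h_j for lam in lams]
    losses = [bce_loss(w, c, y_i) for c in candidates]
    return candidates[int(np.argmax(losses))]

def crm_pair_loss(w, h, h_cf, y):
    """CRM-style pair loss sketch: combine the original and counterfactual
    losses with a dynamic, sample-wise weight. The normalized-loss weight
    below is an illustrative choice, not the paper's exact rule."""
    l_orig = bce_loss(w, h, y)
    l_cf = bce_loss(w, h_cf, y)
    weight = l_cf / (l_orig + l_cf + 1e-9)  # in [0, 1], shifts per sample
    return weight * l_orig + (1.0 - weight) * l_cf
```

Because the interpolation grid includes the endpoint lambda = 1 (the original representation itself), the adversarial pick is guaranteed to be at least as hard as the original sample under the current classifier.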

Related Research

- 05/27/2020 - CausaLM: Causal Model Explanation Through Counterfactual Language Models
- 06/06/2021 - Empowering Language Understanding with Counterfactual Reasoning
- 06/10/2022 - Adversarial Counterfactual Environment Model Learning
- 10/13/2022 - Counterfactual Multihop QA: A Cause-Effect Approach for Reducing Disconnected Reasoning
- 10/16/2022 - Investigating the Robustness of Natural Language Generation from Logical Forms via Counterfactual Samples
- 04/20/2020 - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision
- 07/22/2022 - CARBON: A Counterfactual Reasoning based Framework for Neural Code Comprehension Debiasing

Code Repositories

CAT

This repository accompanies the paper "Counterfactual Adversarial Learning with Representation Interpolation", published in Findings of EMNLP 2021.


