Adversarial Training with Contrastive Learning in NLP

by Daniela N. Rim, et al.

For years, adversarial training has been extensively studied in natural language processing (NLP) settings. The main goal is to make models robust so that similar inputs yield semantically similar outcomes, which is not a trivial problem since there is no objective measure of semantic similarity in language. Previous works use an external pre-trained NLP model to tackle this challenge, introducing an extra training stage with large memory consumption during training. However, the recently popular approach of contrastive learning in language processing hints at a convenient way of obtaining such similarity restrictions. The main advantage of the contrastive learning approach is that it aims for similar data points to be mapped close to each other, and farther from dissimilar ones, in the representation space. In this work, we propose adversarial training with contrastive learning (ATCL) to adversarially train a language processing task using the benefits of contrastive learning. The core idea is to make linear perturbations in the embedding space of the input via the fast gradient method (FGM) and train the model to keep the original and perturbed representations close via contrastive learning. In our NLP experiments, we applied ATCL to language modeling and neural machine translation tasks. The results show not only an improvement in the quantitative (perplexity and BLEU) scores when compared to the baselines, but also that ATCL achieves good qualitative results at the semantic level for both tasks without using a pre-trained model.
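The two ingredients described above can be sketched in a few lines of PyTorch. This is a minimal illustration under our own assumptions, not the authors' implementation: `fgm_perturb` steps along the normalized gradient of a task loss with respect to the input embeddings, and `contrastive_loss` is a standard InfoNCE-style objective that pulls each original representation toward its perturbed counterpart while pushing it away from the other examples in the batch. All function names and hyperparameter values here are illustrative.

```python
import torch
import torch.nn.functional as F

def fgm_perturb(embeddings, loss, epsilon=1.0):
    """Fast gradient method (FGM): add a linear perturbation along the
    L2-normalized gradient of the task loss w.r.t. the embeddings."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    norm = grad.norm(p=2, dim=-1, keepdim=True).clamp_min(1e-12)
    return embeddings + epsilon * grad / norm

def contrastive_loss(z_orig, z_pert, temperature=0.1):
    """InfoNCE-style loss: the i-th original representation should be most
    similar to the i-th perturbed one among all examples in the batch."""
    z1 = F.normalize(z_orig, dim=-1)
    z2 = F.normalize(z_pert, dim=-1)
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarities
    targets = torch.arange(z1.size(0))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a linear "encoder" over a batch of embeddings with a dummy
# stand-in for the task loss (language modeling or translation in the paper).
torch.manual_seed(0)
emb = torch.randn(4, 8, requires_grad=True)      # batch of 4 input embeddings
encoder = torch.nn.Linear(8, 8)
task_loss = encoder(emb).pow(2).mean()           # placeholder task objective
emb_adv = fgm_perturb(emb, task_loss, epsilon=0.5)
loss = contrastive_loss(encoder(emb), encoder(emb_adv))
```

In training, this contrastive term would be added to the ordinary task loss, so the model both solves the task and keeps original and FGM-perturbed representations close.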
