Natural Language Inference Prompts for Zero-shot Emotion Classification in Text across Corpora

Within textual emotion classification, the set of relevant labels depends on the domain and application scenario and might not be known at the time of model development. This conflicts with the classical paradigm of supervised learning, in which the labels need to be predefined. A solution for obtaining a model with a flexible label set is to frame the task as zero-shot learning via natural language inference, which has the additional advantage of not requiring any labeled training data. This raises the question of how to prompt a natural language inference model for zero-shot emotion classification. Options for prompt formulations include the emotion name "anger" alone or the statement "This text expresses anger". With this paper, we analyze how sensitive a natural language inference-based zero-shot classifier is to such changes in the prompt, under consideration of the corpus: how carefully does the prompt need to be selected? We perform experiments on an established set of emotion datasets that represent different language registers from different sources (tweets, events, blogs), using three natural language inference models, and show that the choice of a particular prompt formulation does indeed need to fit the corpus. We further show that this challenge can be tackled with combinations of multiple prompts. Such an ensemble is more robust across corpora than individual prompts and achieves nearly the same performance as the individually best prompt for a particular corpus.
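The prompt-ensemble idea described above can be sketched as follows: each emotion label is turned into a hypothesis via several prompt templates, an NLI model scores the entailment of each hypothesis against the input text, and the scores are averaged per emotion. This is a minimal illustration, not the paper's implementation; `score_entailment` is a hypothetical stand-in for a real NLI model's entailment probability, and the template strings and label set are assumptions.

```python
from typing import Callable, Dict, List

EMOTIONS = ["anger", "joy", "sadness", "fear"]

# Prompt templates; "{}" is filled with the emotion name.
# These mirror the formulations discussed in the abstract.
PROMPTS = [
    "{}",                               # the emotion name alone
    "This text expresses {}.",          # statement-style prompt
    "The emotion of this text is {}.",  # assumed variant for illustration
]

def ensemble_classify(
    text: str,
    score_entailment: Callable[[str, str], float],
    emotions: List[str] = EMOTIONS,
    prompts: List[str] = PROMPTS,
) -> str:
    """Average entailment scores over prompt formulations per emotion
    and return the highest-scoring label."""
    avg: Dict[str, float] = {}
    for emotion in emotions:
        scores = [score_entailment(text, p.format(emotion)) for p in prompts]
        avg[emotion] = sum(scores) / len(scores)
    return max(avg, key=avg.get)

# Toy scorer standing in for an NLI model, for illustration only:
def toy_scorer(text: str, hypothesis: str) -> float:
    return 1.0 if "furious" in text and "anger" in hypothesis else 0.1

print(ensemble_classify("I am furious about the delay.", toy_scorer))  # anger
```

In practice, `score_entailment` would wrap an NLI model that takes the text as premise and the filled template as hypothesis; averaging over templates is what gives the ensemble its robustness across corpora.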
