Instruction Tuning with Lexicons for Zero-Shot Style Classification

05/24/2023
by Ruohao Guo, et al.

Style is used to convey authors' intentions and attitudes. Despite the success of large pre-trained language models on style classification, prior work relies on fine-tuning with labeled examples. Prompting large language models to classify style without fine-tuning is challenging because language styles can be difficult to define. In this study, we investigate the effectiveness of style lexicons as a means of instructing language models to identify new styles that are unseen during training. Our experiments show that lexicon-based instructions significantly improve zero-shot transfer performance. We will release our code and data.
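To make the idea concrete, here is a minimal sketch of how a lexicon-based instruction for zero-shot style classification might be assembled. The `STYLE_LEXICONS` entries and the `build_instruction` helper are illustrative assumptions for this sketch, not the paper's released lexicons or code.

```python
# Sketch: compose an instruction that defines each style via lexicon
# words, so a language model can classify styles unseen in training.
# The lexicons below are toy examples, not the paper's actual data.

STYLE_LEXICONS = {
    "polite": ["please", "kindly", "would you", "thank you"],
    "impolite": ["shut up", "nonsense", "whatever"],
}

def build_instruction(text: str) -> str:
    """Build a prompt that describes each style with its lexicon,
    then asks the model to label the input text."""
    lines = []
    for style, words in STYLE_LEXICONS.items():
        lines.append(
            f'The style "{style}" is often signaled by words such as: '
            + ", ".join(words) + "."
        )
    lines.append(f'Classify the style of the following text: "{text}"')
    lines.append("Answer with exactly one style name.")
    return "\n".join(lines)

# The resulting string would be sent to a language model as the prompt.
print(build_instruction("Could you please pass the salt?"))
```

In this framing, the lexicon stands in for a definition of the style: rather than fine-tuning on labeled examples, the prompt tells the model which surface cues characterize each style, which is what allows classification of styles the model was never trained to label.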

