Context-Aware Abbreviation Expansion Using Large Language Models

05/08/2022
by Shanqing Cai, et al.

Motivated by the need to accelerate text entry in augmentative and alternative communication (AAC) for people with severe motor impairments, we propose a paradigm in which phrases are abbreviated aggressively as primarily word-initial letters. Our approach is to expand the abbreviations into full-phrase options by leveraging conversation context with the power of pretrained large language models (LLMs). Through zero-shot, few-shot, and fine-tuning experiments on four public conversation datasets, we show that for replies to the initial turn of a dialog, an LLM with 64B parameters is able to exactly expand over 70% of phrases, leading to an effective keystroke saving rate of up to about 77% on these exact expansions. Including a small amount of context in the form of a single conversation turn more than doubles abbreviation expansion accuracies compared to having no context, an effect that is more pronounced for longer phrases. Additionally, the robustness of models against typo noise can be enhanced through fine-tuning on noisy data.
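The abbreviation scheme described above can be illustrated with a minimal sketch. The function name and the example phrase below are illustrative assumptions, not taken from the paper; the paper's actual scheme is "primarily" word-initial letters, whereas this sketch uses strictly the first letter of each word:

```python
def abbreviate(phrase: str) -> str:
    """Abbreviate a phrase as its word-initial letters.

    For example, "good morning how are you" becomes "gmhay".
    The LLM's task is the inverse: given "gmhay" plus the prior
    conversation turn as context, propose full-phrase expansions.
    """
    return "".join(word[0].lower() for word in phrase.split())


print(abbreviate("good morning how are you"))  # gmhay
```

Under this scheme the user types one keystroke per word, which is where the reported keystroke savings come from: the saving rate grows with average word length in the expanded phrase.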

