Prompt Learning for Few-Shot Dialogue State Tracking

01/15/2022
by Yuting Yang, et al.

Collecting dialogue state labels (slots and values) for training dialogue state tracking (DST) models can be costly, especially as dialogue systems spread to newly emerging domains. In this paper, we focus on how to learn a DST model efficiently with limited labeled data. We design a prompt learning framework for few-shot DST with two main components: a value-based prompt and an inverse prompt mechanism. The framework exploits the language understanding and generation abilities of pre-trained language models (PLMs). First, we design value-based prompt functions to probe DST-related knowledge from the PLM; these functions do not rely on a known slot ontology. Second, an inverse prompt mechanism self-checks the "prompted" knowledge and helps the PLM grasp the essence of the DST task. Experiments show that our model can generate unseen slots and outperforms existing state-of-the-art few-shot methods, indicating that, with the help of prompt learning, DST-related knowledge can be probed from PLMs and used to address low-resource DST efficiently.
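To make the two components concrete, below is a minimal, hypothetical sketch of value-based and inverse prompting. It is not the paper's actual prompt functions: it assumes a generative PLM (here t5-small via the HuggingFace transformers library), and the prompt templates and helper names (generate, value_prompt, inverse_prompt) are illustrative assumptions.

```python
# Hypothetical sketch of value-based and inverse prompting for DST.
# Templates and names are illustrative assumptions, not the paper's
# actual prompt functions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate(prompt: str) -> str:
    """Greedy-decode a short completion from the PLM."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def value_prompt(dialogue: str, value: str) -> str:
    # Value-based prompt: ask the PLM which slot the observed value fills,
    # without enumerating a fixed slot ontology.
    return f'dialogue: {dialogue} Here, "{value}" is the value of the slot:'

def inverse_prompt(dialogue: str, slot: str) -> str:
    # Inverse prompt: regenerate the value from the predicted slot, as a
    # self-check on the forward prediction.
    return f'dialogue: {dialogue} Here, the value of the slot "{slot}" is:'

dialogue = "User: I need a cheap hotel in the north of town."
value = "cheap"
# After few-shot fine-tuning, the forward pass might yield a slot name
# such as "hotel-pricerange" (illustrative, not a guaranteed output).
slot = generate(value_prompt(dialogue, value))
# Self-check: the recovered value should match the original "cheap".
recovered = generate(inverse_prompt(dialogue, slot))
print(slot, recovered)
```

In this sketch the inverse prompt acts as a round-trip consistency check: if regenerating the value from the predicted slot does not recover the original value, the forward prediction is suspect. An off-the-shelf t5-small with no few-shot fine-tuning would likely produce noisy outputs; the paper's results rely on training the prompt framework on limited labeled data.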

Related research

03/16/2022 · In-Context Learning for Few-Shot Dialogue State Tracking
Collecting and annotating task-oriented dialogues is time-consuming and ...

05/19/2019 · Learning to Memorize in Neural Task-Oriented Dialogue Systems
In this thesis, we leverage the neural copy mechanism and memory-augment...

10/11/2022 · CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking
Few-shot dialogue state tracking (DST) is a realistic problem that train...

05/20/2023 · Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer
In real-world scenarios, labeled samples for dialogue summarization are ...

09/25/2020 · MinTL: Minimalist Transfer Learning for Task-Oriented Dialogue Systems
In this paper, we propose Minimalist Transfer Learning (MinTL) to simpli...

11/17/2022 · Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking
In dialogue state tracking (DST), labeling the dataset involves consider...

08/14/2023 · Dialogue for Prompting: a Policy-Gradient-Based Discrete Prompt Optimization for Few-shot Learning
Prompt-based pre-trained language models (PLMs) paradigm have succeeded ...
