Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue

08/28/2018
by Dieuwke Hupkes, et al.

We investigate how encoder-decoder models trained on a synthetic dataset of task-oriented dialogues process disfluencies, such as hesitations and self-corrections. We find that, contrary to earlier results, disfluencies have very little impact on the task success of seq-to-seq models with attention. Using visualisation and diagnostic classifiers, we analyse the representations that are incrementally built by the model, and discover that models develop little to no awareness of the structure of disfluencies. However, adding disfluencies to the data appears to help the model create clearer representations overall, as evidenced by the attention patterns the different models exhibit.
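As a rough illustration of the diagnostic-classifier idea the abstract mentions, the sketch below trains a simple linear probe on per-token hidden states to predict whether each token belongs to a disfluency. Everything here is a hypothetical stand-in (random vectors and labels, a scikit-learn logistic regression), not the authors' actual model, data, or probing setup; it only shows the general technique of reading structural information out of learned representations.

```python
# Minimal diagnostic-classifier sketch, assuming per-timestep encoder
# hidden states and token-level disfluency labels are already available.
# All data below are random stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: 1000 tokens with 256-dim hidden states, and
# binary labels (1 = token is part of a disfluency such as a
# hesitation or self-correction, 0 = fluent token).
hidden_states = rng.normal(size=(1000, 256))
labels = rng.integers(0, 2, size=1000)

# Train the linear probe on one half of the tokens, test on the other.
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:500], labels[:500])

# Held-out accuracy well above chance would suggest the encoder encodes
# disfluency structure; chance-level accuracy (as with this random
# stand-in data) would suggest it does not.
print("probe accuracy:", probe.score(hidden_states[500:], labels[500:]))
```

The logic mirrors the paper's finding: if a probe trained on the frozen hidden states cannot recover disfluency structure, the model has likely developed little explicit awareness of it, even when task success is unaffected.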

Related research

06/26/2017
Generative Encoder-Decoder Models for Task-Oriented Spoken Dialog Systems with Chatting Capability
Generative encoder-decoder models offer great promise in developing doma...

09/11/2019
Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems
Self-attentional models are a new paradigm for sequence modelling tasks ...

09/22/2017
Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena
Natural, spontaneous dialogue proceeds incrementally on a word-by-word b...

09/30/2019
Retrieval-based Goal-Oriented Dialogue Generation
Most research on dialogue has focused either on dialogue generation for ...

04/14/2022
Can Visual Dialogue Models Do Scorekeeping? Exploring How Dialogue Representations Incrementally Encode Shared Knowledge
Cognitively plausible visual dialogue models should keep a mental scoreb...

03/20/2021
The Interplay of Task Success and Dialogue Quality: An in-depth Evaluation in Task-Oriented Visual Dialogues
When training a model on referential dialogue guessing games, the best m...

06/07/2019
Assessing incrementality in sequence-to-sequence models
Since their inception, encoder-decoder models have successfully been app...
