Interactive Evaluation of Dialog Track at DSTC9

07/28/2022
by Shikib Mehri, et al.

The ultimate goal of dialog research is to develop systems that can be effectively used in interactive settings by real users. To this end, we introduced the Interactive Evaluation of Dialog Track at the 9th Dialog System Technology Challenge. This track consisted of two sub-tasks. The first sub-task involved building knowledge-grounded response generation models. The second sub-task aimed to extend dialog models beyond static datasets by assessing them in an interactive setting with real users. Our track challenges participants to develop strong response generation models and explore strategies that extend them to back-and-forth interactions with real users. The progression from static corpora to interactive evaluation introduces unique challenges and facilitates a more thorough assessment of open-domain dialog systems. This paper provides an overview of the track, including the methodology and results. Furthermore, it provides insights into how to best evaluate open-domain dialog models.
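To make the first sub-task concrete, the following is a minimal sketch of knowledge-grounded response generation: a retrieved knowledge snippet is concatenated with the dialog history and fed to a pretrained seq2seq generator. The checkpoint name, separator tokens, and decoding settings below are illustrative assumptions, not the official track baseline.

# Illustrative sketch only: model, separators, and decoding are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-base"  # placeholder seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_response(dialog_history, knowledge_snippet, max_new_tokens=64):
    """Condition the generator on a grounding document plus the dialog context."""
    # Concatenate the knowledge snippet and the conversation so far into one input.
    source = knowledge_snippet + " </s> " + " </s> ".join(dialog_history)
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=4,             # beam search; sampling is another common choice
        no_repeat_ngram_size=3,  # reduce repetitive responses
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

history = ["Can you recommend a hotel near the city center?",
           "Sure, the Riverside Inn is popular. Anything else?"]
knowledge = "The Riverside Inn offers free breakfast and late checkout on weekends."
print(generate_response(history, knowledge))

In the second sub-task, a model like this would then be deployed behind a chat interface and rated by real users over multi-turn conversations rather than scored against a static test set.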

Related research

11/12/2020  Overview of the Ninth Dialog System Technology Challenge: DSTC9
This paper introduces the Ninth Dialog System Technology Challenge (DSTC...

11/14/2019  The Eighth Dialog System Technology Challenge
This paper introduces the Eighth Dialog System Technology Challenge. In ...

01/11/2019  Dialog System Technology Challenge 7
This paper introduces the Seventh Dialog System Technology Challenges (D...

10/20/2022  Doc2Bot: Accessing Heterogeneous Documents via Conversational Bots
This paper introduces Doc2Bot, a novel dataset for building machines tha...

02/02/2022  The slurk Interaction Server Framework: Better Data for Better Dialog Models
This paper presents the slurk software, a lightweight interaction server...

12/04/2021  Controllable Response Generation for Assistive Use-cases
Conversational agents have become an integral part of the general popula...

09/01/2020  Towards Evaluating Exploratory Model Building Process with AutoML Systems
The use of Automated Machine Learning (AutoML) systems are highly open-e...
