Zero-shot Query Reformulation for Conversational Search

by Dayu Yang et al.
University of Delaware

As voice assistants continue to surge in popularity, conversational search has gained increasing attention in Information Retrieval. However, data sparsity significantly hinders the progress of supervised conversational search methods, so researchers are focusing more on zero-shot approaches. Existing zero-shot methods face three primary limitations: they are not universally applicable to all retrievers, their effectiveness lacks sufficient explainability, and they struggle to resolve the common conversational ambiguities caused by omission. To address these limitations, we introduce a novel Zero-shot Query Reformulation (ZeQR) framework that reformulates queries based on previous dialogue contexts without requiring supervision from conversational search data. Specifically, our framework uses language models designed for machine reading comprehension to explicitly resolve two common ambiguities in raw queries: coreference and omission. Compared with existing zero-shot methods, our approach is universally applicable to any retriever without additional adaptation or indexing. It also provides greater explainability and effectively enhances query intent understanding, because ambiguities are explicitly and proactively resolved. Through extensive experiments on four TREC conversational datasets, we demonstrate the effectiveness of our method, which consistently outperforms state-of-the-art baselines.
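The reformulation loop the abstract describes, asking an MRC model targeted questions about the dialogue context to resolve coreference and omission in the raw query, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `mrc_answer` is a toy stand-in for a real extractive MRC model (e.g. a SQuAD-trained reader), and the `PRONOUNS` set, the heuristic inside the stub, and all function names are illustrative assumptions.

```python
import re

# Pronouns treated as coreference ambiguities (illustrative subset).
PRONOUNS = {"it", "its", "they", "them", "their", "he", "she",
            "his", "her", "this", "that"}

def mrc_answer(context: str, question: str) -> str:
    """Toy stand-in for an extractive MRC model.

    A real MRC reader would extract the answer span for `question`
    from `context`; here we just return the longest run of
    capitalized, non-sentence-initial words as a crude entity guess.
    """
    best = ""
    for sentence in filter(None, (s.strip() for s in re.split(r"[.?!]", context))):
        run = []
        for word in sentence.split()[1:]:  # skip sentence-initial word
            if word[0].isupper():
                run.append(word)
            else:
                best = max(best, " ".join(run), key=len)
                run = []
        best = max(best, " ".join(run), key=len)
    return best

def resolve_coreference(context: str, query: str) -> str:
    """Replace each ambiguous pronoun in the raw query with the entity
    the MRC model extracts from the dialogue context."""
    out = []
    for tok in query.split():
        if tok.strip(".,?!").lower() in PRONOUNS:
            answer = mrc_answer(context, f"What does '{tok}' refer to?")
            tok = answer or tok
        out.append(tok)
    return " ".join(out)

def resolve_omission(context: str, query: str) -> str:
    """If the conversation topic is omitted from the query, append it."""
    topic = mrc_answer(context, "What is the main topic of the conversation?")
    if topic and topic.lower() not in query.lower():
        return f"{query} {topic}"
    return query
```

With a context of `"Tell me about the Bronze Age. When did the Bronze Age start?"`, `resolve_coreference(context, "How did it end?")` yields `"How did Bronze Age end?"`: the pronoun is replaced before the query ever reaches the retriever, which is what makes the approach retriever-agnostic and the reformulation inspectable.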
