
Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks

by Shicheng Xu, et al.
National University of Singapore
Institute of Computing Technology, Chinese Academy of Sciences

With the wide application of Large Language Models (LLMs) such as ChatGPT, making the content generated by LLMs accurate and credible has become very important, especially in complex knowledge-intensive tasks. In this paper, we propose a novel framework called Search-in-the-Chain (SearChain) to improve the accuracy, credibility, and traceability of LLM-generated content for multi-hop question answering, a typical complex knowledge-intensive task. SearChain is a framework that deeply integrates the LLM with information retrieval (IR). In SearChain, the LLM constructs a chain-of-query, a decomposition of the multi-hop question in which each node is a query-answer pair consisting of an IR-oriented query and the answer the LLM generates for it. IR verifies, completes, and traces the information at each node of the chain, guiding the LLM to construct a correct chain-of-query and finally answer the multi-hop question. SearChain changes the LLM's behavior from trying to give an answer directly to trying to construct the chain-of-query when faced with a multi-hop question, which stimulates its knowledge-reasoning ability and provides an interface for IR to be deeply involved in the LLM's reasoning process. IR interacts with each node of the chain-of-query: it verifies the information at that node and supplies knowledge the LLM lacks, ensuring the accuracy of the whole chain while the LLM generates the answer. Moreover, the content returned to the user includes not only the final answer but also the reasoning process for the question, that is, the chain-of-query and the supporting documents retrieved by IR for each node, which improves the credibility and traceability of the generated content. Experimental results show that SearChain outperforms related baselines on four multi-hop question-answering datasets.
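The verify-and-correct loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the decomposer, answerer, retriever, and verifier below (`toy_decompose`, `toy_llm_answer`, `toy_retrieve`, `verify`) are hypothetical stand-ins for a real LLM and IR system, and the tiny in-memory corpus is invented for illustration. The sketch only shows the control flow: each node of the chain-of-query pairs an IR-oriented query with the LLM's proposed answer, IR checks that answer against a retrieved document and corrects it if unsupported, and the user receives the final answer together with the full chain and supporting documents for traceability.

```python
from dataclasses import dataclass

@dataclass
class Node:
    query: str       # IR-oriented sub-query in the chain-of-query
    llm_answer: str  # answer the LLM proposed for this sub-query
    verified: str    # answer after IR verification/correction
    support: str     # supporting document retrieved by IR

# Toy in-memory corpus standing in for the retrieval index (illustrative only).
CORPUS = {
    "Who wrote Hamlet?": "Hamlet was written by William Shakespeare.",
    "When was William Shakespeare born?": "William Shakespeare was born in 1564.",
}

def toy_decompose(question: str) -> list[str]:
    # Stand-in for the LLM decomposing a multi-hop question into sub-queries.
    return ["Who wrote Hamlet?", "When was William Shakespeare born?"]

def toy_llm_answer(query: str) -> str:
    # Stand-in for the LLM's proposed answer; the first one is deliberately
    # wrong to show IR-based correction.
    return {"Who wrote Hamlet?": "Christopher Marlowe",
            "When was William Shakespeare born?": "1564"}[query]

def toy_retrieve(query: str) -> str:
    # Stand-in for IR returning the top supporting document.
    return CORPUS[query]

def verify(llm_answer: str, document: str) -> str:
    # Keep the LLM's answer if the document supports it; otherwise correct it
    # from the document (here, crudely, the last token before the period).
    if llm_answer in document:
        return llm_answer
    return document.rstrip(".").split()[-1]

def search_chain(question: str) -> tuple[str, list[Node]]:
    chain = []
    for query in toy_decompose(question):
        ans = toy_llm_answer(query)
        doc = toy_retrieve(query)
        chain.append(Node(query, ans, verify(ans, doc), doc))
    # The last node's verified answer resolves the multi-hop question; the
    # whole chain plus supporting documents is returned for traceability.
    return chain[-1].verified, chain

answer, chain = search_chain("When was the author of Hamlet born?")
```

In this toy run the LLM's wrong first-hop answer ("Christopher Marlowe") is overridden by the retrieved document, and the caller can display every node's query, answer, and supporting document alongside the final answer.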




Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering

Multi-hop question answering (QA) requires an information retrieval (IR)...

Triggering Multi-Hop Reasoning for Question Answering in Language Models using Soft Prompts and Random Walks

Despite readily memorizing world knowledge about entities, pre-trained l...

Exploring the Integration Strategies of Retriever and Large Language Models

The integration of retrieved passages and large language models (LLMs), ...

Neural Retriever and Go Beyond: A Thesis Proposal

Information Retriever (IR) aims to find the relevant documents (e.g. sni...

Generative Context Pair Selection for Multi-hop Question Answering

Compositional reasoning tasks like multi-hop question answering, require...

Think-on-Graph: Deep and Responsible Reasoning of Large Language Model with Knowledge Graph

Large language models (LLMs) have made significant strides in various ta...

Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework

As large language models (LLMs) have become the norm in NLP, demonstrati...