Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks

04/28/2023
by   Shicheng Xu, et al.

With the wide application of Large Language Models (LLMs) such as ChatGPT, making the content generated by LLMs accurate and credible has become very important, especially in complex knowledge-intensive tasks. In this paper, we propose a novel framework called Search-in-the-Chain (SearChain) to improve the accuracy, credibility, and traceability of LLM-generated content for multi-hop question answering, a typical complex knowledge-intensive task. SearChain is a framework that deeply integrates the LLM with information retrieval (IR). In SearChain, the LLM constructs a chain-of-query, which is a decomposition of the multi-hop question. Each node of the chain is a query-answer pair consisting of an IR-oriented query and the answer generated by the LLM for that query. IR verifies, completes, and traces the information of each node, guiding the LLM to construct a correct chain-of-query and finally answer the multi-hop question. SearChain changes the LLM from trying to give an answer directly to trying to construct a chain-of-query when faced with a multi-hop question, which stimulates its knowledge-reasoning ability and provides an interface for IR to be deeply involved in the LLM's reasoning process. IR interacts with each node of the chain-of-query: it verifies the information of the node and supplies knowledge the LLM lacks, ensuring the accuracy of the whole chain while the LLM generates the answer. Besides, the content returned by the LLM to the user includes not only the final answer but also the reasoning process for the question, i.e., the chain-of-query and the supporting documents retrieved by IR for each node, which improves the credibility and traceability of the generated content. Experimental results show that SearChain outperforms related baselines on four multi-hop question-answering datasets.
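The interaction loop described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the decomposition function, the verifier, and the toy corpus below are all hypothetical stand-ins for a real LLM and IR system.

```python
# Hypothetical sketch of the SearChain loop: the LLM decomposes a
# multi-hop question into a chain-of-query, and IR verifies each node.

def llm_decompose(question):
    # Stand-in for the LLM: decompose the multi-hop question into
    # (IR-oriented query, LLM-generated answer) nodes.
    return [
        ("Who directed Inception?", "Christopher Nolan"),
        ("When was Christopher Nolan born?", "1970"),
    ]

def ir_verify(query, llm_answer):
    # Stand-in for IR: retrieve a supporting document and return a
    # (verified answer, evidence) pair, correcting the LLM if needed.
    toy_corpus = {
        "Who directed Inception?": (
            "Christopher Nolan",
            "Inception (2010) was directed by Christopher Nolan.",
        ),
        "When was Christopher Nolan born?": (
            "31 July 1970",
            "Christopher Nolan was born on 31 July 1970.",
        ),
    }
    return toy_corpus.get(query, (llm_answer, None))

def searchain(question):
    chain = []
    for query, llm_answer in llm_decompose(question):
        answer, evidence = ir_verify(query, llm_answer)
        # Each node keeps its query, verified answer, and supporting
        # document, so the final output is traceable end to end.
        chain.append({"query": query, "answer": answer, "evidence": evidence})
    # The user receives the final answer plus the full reasoning chain.
    return chain[-1]["answer"], chain

answer, trace = searchain("When was the director of Inception born?")
```

The key design point the sketch illustrates is that IR operates per node rather than once per question, so a wrong intermediate answer is corrected before it can propagate down the chain.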

