ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

08/14/2023
by Chi-Min Chan, et al.

Text evaluation has historically posed significant challenges, often demanding substantial labor and time costs. With the emergence of large language models (LLMs), researchers have explored their potential as an alternative to human evaluation. While these single-agent approaches show promise, experimental results suggest that further advances are needed to close the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practice in human evaluation often involves multiple annotators collaborating on the assessment, we turn to a multi-agent debate framework, moving beyond single-agent prompting strategies. The multi-agent approach lets a group of LLMs synergize with an array of intelligent counterparts, harnessing their distinct capabilities and expertise to handle intricate tasks efficiently and effectively. In this paper, we construct a multi-agent referee team called ChatEval that autonomously discusses and evaluates the quality of responses generated by different models on open-ended questions and traditional natural language generation (NLG) tasks. Our analysis shows that ChatEval goes beyond mere textual scoring, offering a human-mimicking evaluation process that yields reliable assessments. Our code is available at https://github.com/chanchimin/ChatEval.
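To make the debate framework concrete, the sketch below shows one way such a referee team could be wired up. This is a minimal illustration, not ChatEval's actual implementation: the names (Agent, call_llm, debate_evaluate), the round-robin speaking order, and the final-verdict prompt are all assumptions for exposition; the real prompt templates and communication strategies are in the repository above.

# A minimal sketch of a multi-agent debate evaluator, for illustration only.
# All names and the communication strategy here are assumptions, not
# ChatEval's API; see the linked repository for the actual implementation.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str     # e.g. "Critic" or "General Public"
    persona: str  # system prompt defining this referee's role

def call_llm(system_prompt: str, messages: list[str]) -> str:
    # Placeholder: plug in a chat-completion client of your choice here.
    raise NotImplementedError

def debate_evaluate(question: str, answer_a: str, answer_b: str,
                    agents: list[Agent], rounds: int = 2) -> list[str]:
    """Referees take turns critiquing two candidate answers; each sees
    the shared transcript so far, then all cast a final verdict."""
    task = (f"Question: {question}\n"
            f"Answer A: {answer_a}\n"
            f"Answer B: {answer_b}\n"
            "Debate which answer is better and why.")
    transcript: list[str] = []
    for _ in range(rounds):
        for agent in agents:
            reply = call_llm(agent.persona, [task, *transcript])
            transcript.append(f"{agent.name}: {reply}")
    # Each referee gives a final verdict after the debate; the caller
    # can aggregate these, e.g. by majority vote.
    prompt = "Given the debate above, answer with exactly one of: A, B, tie."
    return [call_llm(agent.persona, [task, *transcript, prompt])
            for agent in agents]

Aggregating the returned verdicts, for instance by majority vote, then yields the final judgment; giving each referee a distinct persona is what supplies the "diverse expertise" the debate relies on.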


