Chatbots put to the test in math and logic problems: A preliminary comparison and assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard

05/30/2023
by Vagelis Plevris, et al.

A comparison of three chatbots based on large language models, namely ChatGPT-3.5, ChatGPT-4, and Google Bard, is presented, focusing on their ability to give correct answers to mathematics and logic problems. In particular, we check their ability to (a) understand the problem at hand, (b) apply appropriate algorithms or methods for its solution, and (c) generate a coherent response and a correct answer. We use 30 questions that are clear and unambiguous, fully described in plain text only, and have a unique, well-defined correct answer. The questions are divided into two sets of 15 each. Set A consists of 15 "Original" problems that cannot be found online, while Set B contains 15 "Published" problems that can be found online, usually together with their solutions. Each question is posed three times to each chatbot. The answers are recorded and discussed, highlighting the chatbots' strengths and weaknesses. We find that for straightforward arithmetic, algebraic expressions, or basic logic puzzles, the chatbots may provide accurate solutions, although not in every attempt. For more complex mathematical problems or advanced logic tasks, however, their answers, although usually written in a "convincing" way, may not be reliable. Consistency is also an issue, as a chatbot will often provide conflicting answers when given the same question more than once. A comparative quantitative evaluation of the three chatbots is made by scoring their final answers for correctness. We find that ChatGPT-4 outperforms ChatGPT-3.5 in both sets of questions. Bard comes third in the original questions of Set A, behind the other two chatbots, while it has the best performance (first place) in the published questions of Set B. This is probably because Bard has direct access to the internet, in contrast to the ChatGPT chatbots, which do not have any communication with the outside world.
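The abstract only sketches the evaluation protocol (30 questions, two sets of 15, each posed three times, final answers scored for correctness). The following is a minimal illustrative tally in Python, assuming one point per correct final answer per attempt; the function name, record fields, and weighting are assumptions for illustration, not the paper's actual scoring scheme.

from collections import defaultdict

# Minimal sketch (assumed scheme): one point per correct final answer.
# Each record describes one attempt: which chatbot, which question set
# ('A' for original, 'B' for published), and whether the answer was correct.
def score_responses(records):
    totals = defaultdict(int)  # (chatbot, question_set) -> correct attempts
    for r in records:
        if r["correct"]:
            totals[(r["chatbot"], r["set"])] += 1
    return dict(totals)

# Usage: with 15 questions posed 3 times each, the maximum per set is 45.
records = [
    {"chatbot": "ChatGPT-4", "set": "A", "question": 1, "attempt": 1, "correct": True},
    {"chatbot": "Bard", "set": "B", "question": 1, "attempt": 1, "correct": False},
]
print(score_responses(records))  # {('ChatGPT-4', 'A'): 1}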
