Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT

02/19/2023
by   Qihuang Zhong, et al.

Recently, ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries. Several prior studies have shown that ChatGPT attains remarkable generation ability compared with existing models. However, the quantitative analysis of ChatGPT's understanding ability has received little attention. In this report, we explore the understanding ability of ChatGPT by evaluating it on the most popular GLUE benchmark and comparing it with 4 representative fine-tuned BERT-style models. We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question-answering tasks. Additionally, by combining some advanced prompting strategies, we show that the understanding ability of ChatGPT can be further improved.
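
To make the evaluation setup concrete, the following is a minimal sketch of zero-shot GLUE-style evaluation with ChatGPT, assuming the (legacy) OpenAI ChatCompletion API and the HuggingFace `datasets` GLUE loader; the prompt wording, label parsing, and the MRPC paraphrase task shown here are illustrative choices, not necessarily the paper's exact protocol.

```python
# Sketch: zero-shot paraphrase classification (MRPC-style) with ChatGPT.
# Assumptions: openai (legacy ChatCompletion API) and datasets are installed,
# and an API key is available; prompt/parsing details are illustrative.
import openai
from datasets import load_dataset

openai.api_key = "YOUR_API_KEY"  # assumption: supplied via config or environment


def classify_paraphrase(sentence1: str, sentence2: str) -> int:
    """Ask ChatGPT whether two sentences are paraphrases; return 1 for yes, 0 for no."""
    prompt = (
        "Determine whether the following two sentences are paraphrases of each other.\n"
        f"Sentence 1: {sentence1}\n"
        f"Sentence 2: {sentence2}\n"
        "Answer with 'yes' or 'no' only."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep answers as deterministic as possible for evaluation
    )
    answer = response["choices"][0]["message"]["content"].strip().lower()
    return 1 if answer.startswith("yes") else 0


# Score accuracy on a small slice of the MRPC validation set.
mrpc = load_dataset("glue", "mrpc", split="validation[:100]")
correct = sum(
    classify_paraphrase(ex["sentence1"], ex["sentence2"]) == ex["label"]
    for ex in mrpc
)
print(f"Accuracy: {correct / len(mrpc):.3f}")
```

The same loop generalizes to the other GLUE tasks by swapping the dataset configuration and prompt template; the "advanced prompting strategies" mentioned in the abstract (e.g., adding task demonstrations or reasoning instructions to the prompt) would modify only the prompt construction step.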

