Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective

06/18/2023
by Yan Zhuang, et al.

Large language models (LLMs), such as ChatGPT, have shown some human-like cognitive abilities. To compare these abilities across models, benchmarks (i.e., sets of standard test questions) from different fields (e.g., Literature, Biology, and Psychology) are often adopted, and results are reported under traditional metrics such as accuracy, recall, and F1. From a cognitive science perspective, however, this way of evaluating LLMs can be inefficient and inaccurate. Inspired by Computerized Adaptive Testing (CAT) used in psychometrics, we propose an adaptive testing framework for LLM evaluation. Rather than using a standard test set and simply reporting accuracy, this approach dynamically adjusts the characteristics of the test questions, such as difficulty, based on the model's performance. This yields a more accurate estimate of the model's abilities with fewer questions. More importantly, it allows LLMs to be compared with humans directly, which is essential for NLP models that aim for human-level ability. Our diagnostic reports find that ChatGPT often behaves like a “careless student”: it is prone to slipping on questions it should answer correctly and occasionally guesses. We conduct a fine-grained diagnosis and rank six recent instruction-tuned LLMs along three aspects: Subject Knowledge, Mathematical Reasoning, and Programming, where GPT4 significantly outperforms the other models and reaches the cognitive ability of middle-level students. Different tests for different models using efficient adaptive testing – we believe this has the potential to become a new norm in evaluating large language models.
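To make the adaptive loop concrete, below is a minimal sketch of how question selection and ability estimation might proceed under a standard two-parameter logistic (2PL) item response theory model with Fisher-information-based selection, a common choice in CAT. The item bank, parameter names, and the `ask_model` hook are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal CAT sketch (assumed 2PL IRT model, Fisher-information item selection).
# Item parameters, question texts, and ask_model() are hypothetical placeholders.
import math

def p_correct(theta, a, b):
    """2PL probability that an examinee of ability theta answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of an item at ability theta (higher = more informative)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, lo=-4.0, hi=4.0, steps=200):
    """Crude maximum-likelihood ability estimate via grid search over theta."""
    best_theta, best_ll = 0.0, -float("inf")
    for i in range(steps + 1):
        theta = lo + (hi - lo) * i / steps
        ll = 0.0
        for (a, b), correct in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p if correct else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def adaptive_test(item_bank, ask_model, max_items=20):
    """Repeatedly pick the most informative unseen item for the current ability
    estimate, query the model, and update the estimate."""
    theta, responses, remaining = 0.0, [], list(item_bank)
    for _ in range(min(max_items, len(remaining))):
        item = max(remaining, key=lambda it: fisher_information(theta, it["a"], it["b"]))
        remaining.remove(item)
        correct = ask_model(item["question"])  # hypothetical hook returning True/False
        responses.append(((item["a"], item["b"]), correct))
        theta = estimate_theta(responses)
    return theta
```

In practice, the discrimination (a) and difficulty (b) parameters would be calibrated on human response data, which is what makes the resulting ability estimate directly comparable to human examinees rather than a model-only accuracy score.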

