Can large language models reason about medical questions?

by Valentin Liévin, et al.

Although large language models (LLMs) often produce impressive outputs, they can also fail to reason soundly and to stay factual. We set out to investigate how these limitations affect LLMs' ability to answer and reason about difficult real-world questions. We applied the human-aligned GPT-3 (InstructGPT) to multiple-choice medical exam questions (USMLE and MedMCQA) and medical research questions (PubMedQA). We investigated chain-of-thought prompts ("think step by step"), grounding (augmenting the prompt with search results), and few-shot prompting (prepending the question with question-answer exemplars). For a subset of the USMLE questions, a medical domain expert reviewed and annotated the model's reasoning. Overall, GPT-3 achieved a substantial improvement over the previous machine-learning state of the art. We observed that GPT-3 is often knowledgeable and can reason about medical questions. When confronted with a question it cannot answer, GPT-3 will still attempt an answer, often producing a biased predictive distribution. LLMs are not yet on par with human performance, but our results suggest the emergence of reasoning patterns compatible with medical problem-solving. We speculate that scaling model and data, enhancing prompt alignment, and allowing for better contextualization of the completions will be sufficient for LLMs to reach human-level performance on this type of task.
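The three prompting strategies named in the abstract can be sketched as a single prompt-assembly function. This is an illustrative sketch only, not the paper's exact templates: the function name, argument names, and prompt wording are assumptions.

```python
def build_prompt(question, options, exemplars=(), context="",
                 cot_cue="Let's think step by step."):
    """Assemble a multiple-choice medical QA prompt (illustrative sketch).

    exemplars : few-shot (question, answer) pairs prepended to the prompt
    context   : grounding text, e.g. retrieved search results
    cot_cue   : chain-of-thought cue appended after "Answer:"
    """
    parts = []
    if context:
        # Grounding: augment the prompt with retrieved passages.
        parts.append(f"Context: {context}")
    for ex_q, ex_a in exemplars:
        # Few-shot: prepend question-answer exemplars.
        parts.append(f"Question: {ex_q}\nAnswer: {ex_a}")
    letters = "ABCDE"
    opts = "\n".join(f"{letters[i]}) {o}" for i, o in enumerate(options))
    # Chain-of-thought: end with the "think step by step" cue so the
    # model generates a reasoning chain before committing to an option.
    parts.append(f"Question: {question}\n{opts}\nAnswer: {cot_cue}")
    return "\n\n".join(parts)
```

The returned string would then be sent to the LLM as the completion prompt; scoring the answer (e.g. mapping the completion back to an option letter) is a separate step not shown here.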




