Improving accuracy of GPT-3/4 results on biomedical data using a retrieval-augmented language model

05/26/2023
by   David Soong, et al.

Large language models (LLMs) have driven significant advances in natural language processing (NLP). Broad training corpora capture diverse language patterns but can introduce irrelevant or misleading information, while focused, domain-specific corpora improve reliability. Training LLMs from scratch on focused corpora, however, is computationally expensive. An alternative is a retrieval-augmentation (RetA) method, in which relevant passages from a curated corpus are retrieved and supplied to the model at query time, tested here in a specific domain. To evaluate LLM performance, OpenAI's GPT-3, GPT-4, Bing's Prometheus, and a custom RetA model were compared on 19 questions about diffuse large B-cell lymphoma (DLBCL). Eight independent reviewers rated each response for accuracy, relevance, and readability on a 1-3 scale. The RetA model performed best in accuracy (12/19 responses rated 3; total score 47) and relevance (13/19; 50), followed by GPT-4 (8/19; 43 and 11/19; 49, respectively). GPT-4 received the highest readability scores (17/19; 55), followed by GPT-3 (15/19; 53) and the RetA model (11/19; 47). Prometheus underperformed on accuracy (34), relevance (32), and readability (38). Both GPT-3.5 and GPT-4 produced hallucinations in all 19 responses, more than the RetA model and Prometheus; the hallucinations mostly involved non-existent references or fabricated efficacy data. These findings suggest that RetA models backed by domain-specific corpora may outperform general-purpose LLMs in accuracy and relevance within specialized domains. However, this evaluation was limited to a specific set of questions and metrics, and may not capture challenges in semantic search and other NLP tasks. Further research will explore different LLM architectures, RetA methodologies, and evaluation methods to assess strengths and limitations more comprehensively.
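The retrieval-augmentation pattern the abstract describes can be sketched minimally as: embed a question, find the most similar passages in a curated domain corpus, and prepend them to the prompt so the model answers from the retrieved context rather than its parametric memory. The sketch below is illustrative only, not the authors' implementation: a toy bag-of-words cosine similarity stands in for the semantic search a real RetA system would use, and the corpus snippets and function names are assumptions for the example.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real RetA system would use a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=1):
    """Prepend retrieved passages so the LLM is grounded in the curated corpus."""
    context = "\n".join(retrieve(query, corpus, k))
    return (
        "Answer using only the context below; say 'unknown' if it is not covered.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical three-passage domain corpus for illustration.
corpus = [
    "DLBCL is the most common subtype of non-Hodgkin lymphoma.",
    "R-CHOP is a standard first-line regimen for DLBCL.",
    "Follicular lymphoma is typically an indolent disease.",
]
prompt = build_prompt("What is a first-line treatment for DLBCL?", corpus)
```

Grounding the prompt this way is what reduces the hallucinated references and fabricated efficacy data the reviewers observed in the unaugmented models: the instruction to answer only from retrieved context constrains the model to the curated corpus.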


