UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering

07/06/2023 · by Triet M. Thai, et al.

In recent years, artificial intelligence has played an important role in medicine and disease diagnosis, with Medical Visual Question Answering (MedVQA) among its notable applications. By combining computer vision and natural language processing, MedVQA systems can assist experts in extracting relevant information from medical images based on a given question and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023 challenge carried out a visual question answering task in the gastrointestinal domain, covering gastroscopy and colonoscopy images. Our team approached Task 1 of the challenge by proposing a multimodal learning method with image enhancement to improve VQA performance on gastrointestinal images. The multimodal architecture pairs a BERT encoder with different pre-trained vision models, based on convolutional neural network (CNN) and Transformer architectures, for feature extraction from the question and the endoscopy image. The results of this study highlight the dominance of Transformer-based vision models over CNNs and demonstrate the effectiveness of the image enhancement process, with six out of the eight vision models achieving a better F1-Score. Our best method, which takes advantage of BERT+BEiT fusion and image enhancement, achieves up to 87.25% on the development test set, while also producing a good result on the private test set with an accuracy of 82.01%.
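The abstract does not spell out how the BERT and BEiT features are combined, so the following is a minimal sketch assuming a simple late-fusion setup: the pooled BERT question embedding and the pooled BEiT image embedding are concatenated and passed through a small classification head over the candidate answers. The checkpoint names (bert-base-uncased, microsoft/beit-base-patch16-224), the hidden size, and the answer-set size are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer, BeitModel, BeitImageProcessor

class BertBeitFusionVQA(nn.Module):
    """Hypothetical late-fusion VQA classifier: BERT encodes the question,
    BEiT encodes the (enhanced) endoscopy image, and the pooled features
    are concatenated before a classification head over candidate answers."""

    def __init__(self, num_answers: int, hidden_dim: int = 512):
        super().__init__()
        # Checkpoints are assumptions; the paper may use different weights.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = BeitModel.from_pretrained(
            "microsoft/beit-base-patch16-224")
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.image_encoder.config.hidden_size)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Pooled sentence embedding for the question.
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask).pooler_output
        # Pooled patch embedding for the image; pixel_values would come
        # from images that have already gone through the enhancement step.
        image_feat = self.image_encoder(pixel_values=pixel_values).pooler_output
        # Concatenation fusion followed by answer classification.
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)

# Example usage with a PIL image `img` and a hypothetical 10-answer label set:
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BertBeitFusionVQA(num_answers=10)
# text = tokenizer("Where in the image is the polyp?", return_tensors="pt")
# pixels = processor(images=img, return_tensors="pt").pixel_values
# logits = model(text.input_ids, text.attention_mask, pixels)
```

Concatenation is the simplest fusion choice and keeps the two encoders independent; attention-based fusion is a common alternative, but nothing in the abstract indicates which variant the authors used.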


