Transformer-based Language Models for Factoid Question Answering at BioASQ9b

09/15/2021
by Urvashi Khanna et al.

In this work, we describe our experiments and participating systems in the BioASQ Task 9b Phase B challenge of biomedical question answering. We focused on finding the ideal answers and investigated multi-task fine-tuning and gradual unfreezing techniques on transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters. However, we observed that gradual unfreezing had no significant impact on the model's accuracy compared to standard fine-tuning.
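The gradual unfreezing technique mentioned above trains a model by first updating only the task head and then progressively unfreezing lower layers, one group per epoch, until the whole network is trainable. The following is a minimal, framework-agnostic sketch of that schedule; the layer-group names (`embeddings`, `encoder_0`, `qa_head`, etc.) and the one-group-per-epoch policy are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of gradual unfreezing: layer groups are unfrozen
# one at a time from the top of the network, one group per epoch.

class LayerGroup:
    """Stand-in for a block of transformer parameters
    (in a real setup this would wrap tensors with a requires_grad flag)."""
    def __init__(self, name):
        self.name = name
        self.trainable = False  # everything starts frozen except what we unfreeze

def unfreeze_schedule(groups, epoch):
    """Unfreeze the top `epoch + 1` groups: the head first, embeddings last.

    `groups` is ordered bottom-up (embeddings -> ... -> task head).
    Returns the names of the groups that train at this epoch.
    """
    for i, g in enumerate(reversed(groups)):
        g.trainable = i <= epoch
    return [g.name for g in groups if g.trainable]

# Illustrative model layout (assumed names, not from the paper).
model = [LayerGroup(n) for n in ["embeddings", "encoder_0", "encoder_1", "qa_head"]]
print(unfreeze_schedule(model, 0))  # epoch 0: only the QA head trains
print(unfreeze_schedule(model, 2))  # epoch 2: head plus both encoder blocks
```

In a PyTorch setup, `g.trainable` would correspond to setting `requires_grad` on each parameter in the group before building the optimizer's parameter list for that epoch.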

