Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA

10/04/2021
by Manuel R. Ciosici, et al.

Our goal is to deliver a new task and leaderboard to stimulate research on question answering and pre-trained language models (PTLMs) that understand a significant instructional document, e.g., an introductory college textbook or a manual. PTLMs have shown great success in many question-answering tasks, given significant supervised training, but much less so in zero-shot settings. We propose a new task that includes two college-level introductory texts in the social sciences (American Government 2e) and humanities (U.S. History), hundreds of true/false statements based on review questions written by the textbook authors, validation/development tests based on the first eight chapters of the textbooks, blind tests based on the remaining textbook chapters, and baseline results given state-of-the-art PTLMs. Since the questions are balanced, random performance should be about 50%. T5, fine-tuned on BoolQ, achieves the same performance, suggesting that the textbooks' content is not pre-represented in the PTLM. Taking the exam closed book, but having read the textbook (i.e., adding the textbook to T5's pre-training), yields at best a minor improvement (56%), suggesting that the PTLM may not have understood the textbook (or perhaps misunderstood the questions). Performance is better (about 60%) when the exam is taken open book (i.e., allowing the machine to automatically retrieve a paragraph and use it to answer the question).
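To make the closed-book vs. open-book distinction concrete, here is a minimal sketch of how such baselines are commonly run. It is not the authors' code: the checkpoint (allenai/unifiedqa-t5-base), the BM25 retriever (rank_bm25), the prompt wording, and the toy passages are all illustrative assumptions standing in for the paper's setup.

```python
# Sketch of closed-book vs. open-book true/false QA with a T5-style PTLM.
# Assumptions (not from the paper): model checkpoint, prompt format, rank_bm25
# as the paragraph retriever, and the two toy "textbook" paragraphs below.
from rank_bm25 import BM25Okapi
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "allenai/unifiedqa-t5-base"  # a public T5 QA checkpoint; swap in your own
tokenizer = T5Tokenizer.from_pretrained(MODEL)
model = T5ForConditionalGeneration.from_pretrained(MODEL)

def answer(statement: str, context: str = "") -> str:
    """Ask the model whether a statement is true.
    Closed book: context is empty. Open book: context is a retrieved paragraph."""
    prompt = f"is the following statement true? {statement} \n {context}"
    ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids
    out = model.generate(ids, max_new_tokens=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Toy paragraphs standing in for American Government 2e / U.S. History chapters.
paragraphs = [
    "The United States Constitution was ratified in 1788.",
    "The Bill of Rights comprises the first ten amendments to the Constitution.",
]
bm25 = BM25Okapi([p.lower().split() for p in paragraphs])

statement = "The Bill of Rights contains the first ten amendments."
# Closed book: the model must answer from its parameters alone.
print("closed book:", answer(statement))
# Open book: retrieve the best-matching paragraph, then answer with it as context.
best = bm25.get_top_n(statement.lower().split(), paragraphs, n=1)[0]
print("open book:", answer(statement, context=best))
```

In this framing, the paper's "having read the textbook" condition corresponds to continuing the model's pre-training on the textbook text before asking closed-book questions, while the open-book condition adds the retrieval step shown above.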

