Attention-Guided Answer Distillation for Machine Reading Comprehension

08/23/2018
by Minghao Hu, et al.

Although current reading comprehension systems have achieved significant advances, their strong performance often comes at the cost of ensembling numerous models. Moreover, existing approaches remain vulnerable to adversarial attacks. This paper tackles these problems by leveraging knowledge distillation, which transfers knowledge from an ensemble model to a single model. We first demonstrate that vanilla knowledge distillation applied to answer span prediction is effective for reading comprehension systems. We then propose two novel approaches that not only penalize predictions on confusing answers but also guide training with alignment information distilled from the ensemble. Experiments show that our best student model trails the ensemble teacher by only 0.4 points while running 12x faster at inference, and it even outperforms the teacher on the adversarial SQuAD datasets and the NarrativeQA benchmark.
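
To make the "vanilla knowledge distillation applied to answer span prediction" baseline concrete, below is a minimal PyTorch sketch of a temperature-softened distillation loss over span start/end distributions. The function name, temperature value, and toy tensors are illustrative assumptions, not the paper's exact formulation, and this covers only the vanilla baseline, not the attention-guided or confusing-answer extensions.

```python
import torch
import torch.nn.functional as F

def span_distillation_loss(student_start, student_end,
                           teacher_start, teacher_end,
                           temperature: float = 2.0) -> torch.Tensor:
    """Temperature-softened KL loss matching the student's answer-span
    start/end distributions to the (ensemble) teacher's.

    Hypothetical sketch: names and defaults are assumptions, not the
    paper's published loss."""
    loss = torch.zeros((), dtype=student_start.dtype,
                       device=student_start.device)
    for s_logits, t_logits in ((student_start, teacher_start),
                               (student_end, teacher_end)):
        # Soften both distributions; a higher temperature exposes the
        # teacher's relative confidence over non-argmax spans.
        log_p_student = F.log_softmax(s_logits / temperature, dim=-1)
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable
        # across temperature settings.
        loss = loss + F.kl_div(log_p_student, p_teacher,
                               reduction="batchmean") * temperature ** 2
    return loss

# Toy usage: a batch of 4 questions over 100-token passages.
student_start, student_end = torch.randn(4, 100), torch.randn(4, 100)
teacher_start, teacher_end = torch.randn(4, 100), torch.randn(4, 100)
print(span_distillation_loss(student_start, student_end,
                             teacher_start, teacher_end))
```

In practice this distillation term would typically be combined with the standard cross-entropy loss on gold answer spans, with the teacher logits coming from the averaged predictions of the ensemble.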

