Self-Consistent Narrative Prompts on Abductive Natural Language Inference
Abduction has long been regarded as crucial for narrative comprehension and reasoning about everyday situations. The abductive natural language inference (αNLI) task addresses this: given two narrative observations, a model must infer the most plausible hypothesis from a set of candidates. However, previous work on this task has not fully exploited inter-sentential coherence or model self-consistency. In this work, we propose α-PACE, a prompt-tuning model that takes both self-consistency and inter-sentential coherence into account. In addition, we propose a general self-consistent framework that incorporates various narrative sequences (e.g., linear narrative and reverse chronology) to guide the pre-trained language model in understanding the narrative context of the input. We conduct extensive experiments and thorough ablation studies that demonstrate the necessity and effectiveness of α-PACE, which significantly outperforms a wide range of competitive baselines.
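To make the task setup concrete, the following is a minimal illustrative sketch of an αNLI instance and hypothesis selection. This is not the paper's method: `overlap_score` is a hypothetical toy scorer standing in for a pre-trained language model, and all names and example texts are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ANLIInstance:
    obs1: str                      # first observation (how the story begins)
    obs2: str                      # second observation (how the story ends)
    hypotheses: list = field(default_factory=list)  # candidate explanations

def pick_hypothesis(instance, score_fn):
    """Return the candidate hypothesis with the highest plausibility score.

    `score_fn(obs1, hyp, obs2)` is a placeholder; in practice it would be a
    pre-trained language model scoring the narrative obs1 -> hyp -> obs2.
    """
    return max(instance.hypotheses,
               key=lambda h: score_fn(instance.obs1, h, instance.obs2))

def overlap_score(o1, h, o2):
    """Toy scorer: word overlap between the hypothesis and both observations."""
    hw = set(h.lower().split())
    return len(hw & set(o1.lower().split())) + len(hw & set(o2.lower().split()))

inst = ANLIInstance(
    obs1="Jenny left her laptop on the train.",
    obs2="She was relieved to get her laptop back.",
    hypotheses=[
        "A stranger kept the laptop.",
        "The train staff found the laptop and returned it to Jenny.",
    ],
)
print(pick_hypothesis(inst, overlap_score))
```

The α-PACE framework described in the abstract would replace the toy scorer with prompt-tuned language-model scoring and, additionally, check that scores remain consistent across different narrative orderings of the same observations.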