Can Pre-trained Language Models Interpret Similes as Smart as Human?

03/16/2022
by Qianyu He, et al.

Simile interpretation is a crucial task in natural language processing. Nowadays, pre-trained language models (PLMs) have achieved state-of-the-art performance on many tasks. However, it remains under-explored whether PLMs can interpret similes or not. In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., to let the PLMs infer the shared properties of similes. We construct our simile property probing datasets from both general textual corpora and human-designed questions, containing 1,633 examples covering seven main categories. Our empirical study based on the constructed datasets shows that PLMs can infer similes' shared properties while still underperforming humans. To bridge the gap with human performance, we additionally design a knowledge-enhanced training objective by incorporating the simile knowledge into PLMs via knowledge embedding methods. Our method results in a gain of 8.58% on the probing task and also benefits the downstream task of sentiment classification. The datasets and code are publicly available at https://github.com/Abbey4799/PLMs-Interpret-Simile.

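To make the probing idea concrete, the sketch below scores a small set of candidate properties at a masked position in a simile using a generic masked language model. It is only a minimal illustration of the general setup, not the authors' protocol: the model name (bert-base-uncased), the example sentence, and the candidate list are assumptions, and the paper's datasets and exact multiple-choice format may differ.

```python
# Minimal sketch: scoring candidate shared properties of a simile with a masked LM.
# Assumptions (not from the paper): model choice, example sentence, candidate set,
# and the requirement that each candidate be a single token in the model's vocabulary.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# A simile with its shared property masked out.
sentence = f"The lawyer was as {tokenizer.mask_token} as a fox."
candidates = ["cunning", "green", "heavy", "wet"]  # one plausible property plus distractors

inputs = tokenizer(sentence, return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_index]  # vocabulary scores at the [MASK] position

# Rank candidate properties by the model's probability at the masked position.
probs = logits.softmax(dim=-1).squeeze(0)
scores = {c: probs[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}
print(max(scores, key=scores.get), scores)
```

Picking the highest-scoring candidate and comparing it with the annotated property is one straightforward way to turn such scores into the kind of accuracy measurement a probing study reports.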