Uniform Complexity for Text Generation

04/11/2022
by Joseph Marvin Imperial, et al.

Powerful language models such as GPT-2 have shown promising results in tasks such as narrative generation, which can be useful in an educational setting. These models, however, should remain consistent with the linguistic properties of the prompts that trigger them. For example, if the reading level of an input text prompt is appropriate for low-level learners (e.g., A2 in the CEFR), then the generated continuation should also assume this particular level. Thus, we propose the task of uniform complexity for text generation, a call to make existing language generators uniformly complex with respect to the prompts used. Our study surveyed over 160 linguistic properties for evaluating text complexity and found that both humans and GPT-2 models struggle to preserve the complexity of prompts in a narrative generation setting.
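A minimal sketch of the evaluation idea, not the authors' code: generate a continuation with the Hugging Face `transformers` GPT-2 pipeline and compare the prompt's reading level against the continuation's using a single surface readability measure from the `textstat` package. The paper surveys 160+ linguistic properties; the one metric here (Flesch-Kincaid grade) only illustrates how complexity drift between prompt and continuation could be quantified.

```python
# Illustrative sketch: measure complexity drift between a prompt and a
# GPT-2 continuation using one readability metric (Flesch-Kincaid grade).
from transformers import pipeline, set_seed
import textstat

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

# A simple, low-complexity prompt (hypothetical example text).
prompt = "The little dog ran to the park. He liked to play with his red ball."

# The pipeline returns the prompt plus the generated text; keep only the new part.
full_text = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
continuation = full_text[len(prompt):]

prompt_grade = textstat.flesch_kincaid_grade(prompt)
cont_grade = textstat.flesch_kincaid_grade(continuation)

print(f"Prompt grade level:       {prompt_grade:.1f}")
print(f"Continuation grade level: {cont_grade:.1f}")
print(f"Complexity drift:         {abs(cont_grade - prompt_grade):.1f}")
```

A large drift would indicate the generator failed to keep the continuation at the prompt's level; a fuller evaluation would aggregate many lexical, syntactic, and discourse features rather than a single readability score.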
