Long Text Generation Challenge

06/04/2023
by Nikolay Mikhaylovskiy, et al.

We propose a shared task of human-like long text generation, the LTG Challenge, that asks models to output a consistent, human-like long text (a general-audience Harry Potter fanfic in English), given a prompt of about 1000 tokens. We suggest a novel statistical metric of text structuredness, the GloVe Autocorrelations Power/Exponential Law Mean Absolute Percentage Error Ratio (GAPELMAPER), and a human evaluation protocol. We hope that the LTG Challenge can open new avenues for researchers to investigate sampling approaches, prompting strategies, and autoregressive and non-autoregressive text generation architectures, and break the barrier to generating consistent long (40K+ token) texts.
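The paper itself defines GAPELMAPER precisely; as a rough illustration of the idea suggested by the name, the following is a minimal Python sketch that compares power-law and exponential fits to the decay of embedding autocorrelations with token distance via their mean absolute percentage errors. The autocorrelation estimator (mean cosine similarity at each lag), the fitting setup, and the names `token_autocorrelation` and `gapelmaper` are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit


def token_autocorrelation(embeddings: np.ndarray, max_lag: int) -> np.ndarray:
    """Mean cosine similarity between token embeddings separated by each lag
    (an assumed estimator; the paper's exact definition may differ)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    acf = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        acf[lag - 1] = np.mean(np.sum(normed[:-lag] * normed[lag:], axis=1))
    return acf


def _mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error of a fit."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))


def gapelmaper(embeddings: np.ndarray, max_lag: int = 200) -> float:
    """Illustrative ratio: power-law fit MAPE / exponential fit MAPE."""
    lags = np.arange(1, max_lag + 1, dtype=float)
    acf = token_autocorrelation(embeddings, max_lag)

    def power_law(d, c, alpha):
        return c * d ** (-alpha)

    def exponential(d, c, beta):
        return c * np.exp(-beta * d)

    (c_p, alpha), _ = curve_fit(power_law, lags, acf, p0=(acf[0], 0.1), maxfev=10000)
    (c_e, beta), _ = curve_fit(exponential, lags, acf, p0=(acf[0], 0.01), maxfev=10000)

    return _mape(acf, power_law(lags, c_p, alpha)) / _mape(acf, exponential(lags, c_e, beta))


if __name__ == "__main__":
    # Stand-in for GloVe vectors of a long text: an AR(1) sequence of 5,000
    # 100-dimensional vectors, so the autocorrelation decays smoothly with lag.
    # With real data, embeddings[i] would be the GloVe vector of the i-th token.
    rng = np.random.default_rng(0)
    noise = rng.normal(size=(5000, 100))
    embeddings = np.empty_like(noise)
    embeddings[0] = noise[0]
    for t in range(1, len(noise)):
        embeddings[t] = 0.98 * embeddings[t - 1] + noise[t]

    print(f"GAPELMAPER (sketch): {gapelmaper(embeddings):.3f}")
```

Under this sketch's convention (power-law MAPE in the numerator), values below 1 indicate the autocorrelation decay is better described by a power law, and values above 1 favor exponential decay; how the paper maps such values to "human-like" structuredness is defined there, not here.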
