Top-down Discourse Parsing via Sequence Labelling

02/03/2021
by Fajri Koto, et al.

We introduce a top-down approach to discourse parsing that is conceptually simpler than its predecessors (Kobayashi et al., 2020; Zhang et al., 2020). By framing the task as a sequence labelling problem where the goal is to iteratively segment a document into individual discourse units, we are able to eliminate the decoder and reduce the search space for splitting points. We explore both traditional recurrent models and modern pre-trained transformer models for the task, and additionally introduce a novel dynamic oracle for top-down parsing. Based on the Full metric, our proposed LSTM model sets a new state-of-the-art for RST parsing.
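The core idea of the abstract can be illustrated as follows: instead of a decoder, a split-point scorer labels candidate boundaries within a span, the span is cut at the highest-scoring boundary, and both halves are parsed recursively. This is a minimal sketch, not the paper's model; the `score_split` interface and the `toy_scorer` heuristic are hypothetical stand-ins for the learned LSTM/transformer labeller.

```python
# Minimal sketch of top-down parsing by iterative splitting.
# A (hypothetical) scorer assigns each candidate boundary a "split here"
# score; the span is cut at the argmax boundary and both halves are
# parsed recursively, yielding a binary discourse tree without a decoder.

def parse(edus, score_split, lo=0, hi=None):
    """Return a binary tree over edus[lo:hi] as nested (left, right) pairs."""
    if hi is None:
        hi = len(edus)
    if hi - lo == 1:          # a single EDU is a leaf
        return edus[lo]
    # candidate boundaries lie between positions lo+1 .. hi-1
    best = max(range(lo + 1, hi), key=lambda k: score_split(edus, lo, k, hi))
    return (parse(edus, score_split, lo, best),
            parse(edus, score_split, best, hi))

# Toy scorer for illustration only: prefer splitting after
# sentence-final punctuation (the paper instead learns these scores).
def toy_scorer(edus, lo, k, hi):
    return 1.0 if edus[k - 1].endswith(".") else 0.0

tree = parse(["EDU-1.", "EDU-2", "EDU-3."], toy_scorer)
# → ("EDU-1.", ("EDU-2", "EDU-3."))
```

Because each recursive call only scores boundaries inside the current span, the search space shrinks at every step, which is the efficiency argument made above.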

