Compositional Sequence Labeling Models for Error Detection in Learner Writing

07/20/2016
by Marek Rei, et al.

In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show that the model outperforms the other participating systems at detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to human annotators.
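To make the proposed architecture concrete, below is a minimal sketch of a bidirectional LSTM sequence labeller for token-level error detection, written in PyTorch. This is not the authors' implementation: the class name, embedding and hidden sizes, and the binary correct/incorrect label scheme are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a bidirectional LSTM
# that assigns each token a label, e.g. 0 = correct, 1 = erroneous.
import torch
import torch.nn as nn


class BiLSTMErrorDetector(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=200, num_labels=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)   # (batch, seq_len, emb_dim)
        hidden, _ = self.bilstm(embedded)      # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(hidden)         # per-token label scores


# Usage: score a toy batch of two 5-token sentences.
model = BiLSTMErrorDetector(vocab_size=10000)
tokens = torch.randint(1, 10000, (2, 5))
logits = model(tokens)                         # shape (2, 5, 2)
predictions = logits.argmax(dim=-1)            # 1 marks a token flagged as an error
```

The key design point mirrored here is that each token's prediction is conditioned on both left and right context via the concatenated forward and backward LSTM states.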


Related research

07/17/2017 - Auxiliary Objectives for Neural Error Detection Models
We investigate the utility of different auxiliary objectives and trainin...

06/09/2016 - Sentence Similarity Measures for Fine-Grained Estimation of Topical Relevance in Learner Essays
We investigate the task of assessing sentence-level prompt relevance in ...

01/08/2019 - Choosing the Right Word: Using Bidirectional LSTM Tagger for Writing Support Systems
Scientific writing is difficult. It is even harder for those for whom En...

07/08/2020 - The Scattering Compositional Learner: Discovering Objects, Attributes, Relationships in Analogical Reasoning
In this work, we focus on an analogical reasoning task that contains ric...

09/09/2021 - '1e0a': A Computational Approach to Rhythm Training
We present a computational assessment system that promotes the learning ...

09/05/2019 - TEASPN: Framework and Protocol for Integrated Writing Assistance Environments
Language technologies play a key role in assisting people with their wri...

06/15/2019 - Context is Key: Grammatical Error Detection with Contextual Word Representations
Grammatical error detection (GED) in non-native writing requires systems...
