Extending a Parser to Distant Domains Using a Few Dozen Partially Annotated Examples

05/16/2018
by   Vidur Joshi, et al.

We revisit domain adaptation for parsers in the neural era. First we show that recent advances in word representations greatly diminish the need for domain adaptation when the target domain is syntactically similar to the source domain. As evidence, we train a parser on the Wall Street Journal alone that achieves over 90% F1 on the Brown corpus. For more syntactically distant domains, we provide a simple way to adapt a parser using only dozens of partial annotations. For instance, we increase the percentage of error-free geometry-domain parses in a held-out set from 45% to 72% using fewer than five dozen training examples. In the process, we demonstrate a new state-of-the-art single model result on the Wall Street Journal test set of 94.3 F1, surpassing the prior best of 92.6 F1.
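The abstract's key idea, training on partial annotations, can be illustrated with a toy sketch. This is not the authors' code; the function name and the span-score representation are assumptions for illustration. The idea shown is simply that a span-based parser's loss can be computed only over the few spans an annotator actually labeled, leaving every other span unconstrained:

```python
import math

def partial_annotation_loss(span_scores, partial_labels):
    """Toy loss for training from partial annotations (illustrative only).

    span_scores: dict mapping a span (i, j) -> dict of label -> raw score.
    partial_labels: dict mapping (i, j) -> gold label for the handful of
    spans a human annotated; unannotated spans contribute no loss.
    """
    total = 0.0
    for span, gold in partial_labels.items():
        scores = span_scores[span]
        # Softmax cross-entropy over candidate labels for this span only.
        log_z = math.log(sum(math.exp(s) for s in scores.values()))
        total += log_z - scores[gold]
    return total

# Two candidate spans, but the annotator labeled only one of them.
span_scores = {
    (0, 2): {"NP": 2.0, "VP": 0.0},
    (2, 5): {"NP": 0.0, "VP": 1.0},  # unannotated: ignored by the loss
}
partial_labels = {(0, 2): "NP"}
loss = partial_annotation_loss(span_scores, partial_labels)
```

Here the unannotated span `(2, 5)` is simply skipped, so a few dozen labeled spans are enough to produce a training signal.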


