Enforcing Constraints on Outputs with Unconstrained Inference

07/26/2017
by Jay Yoon Lee, et al.

Increasingly, practitioners apply neural networks to complex problems in natural language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post-processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference procedure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 94% on syntactic parsing.
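
In effect, constrained inference is recast as continuous optimization: rather than searching the discrete feasible space, one runs gradient descent on a per-input copy of the trained weights, minimizing a differentiable measure of constraint violation until ordinary decoding produces a feasible output. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the helpers `violation` (a differentiable penalty on the output scores) and `is_feasible` (a hard check on the decoded output) are assumptions supplied by the caller.

```python
import copy
import torch

def constrained_inference(model, x, violation, is_feasible,
                          lr=0.05, max_steps=100):
    """Sketch of gradient-based constrained inference.

    `violation(scores)` is a hypothetical differentiable penalty that is
    positive when the scores decode to a constraint-violating output;
    `is_feasible(y)` is a hypothetical hard check on the discrete output.
    Neither name comes from the paper.
    """
    model = copy.deepcopy(model)        # nudge a per-input copy, never
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # the trained weights
    y = None
    for _ in range(max_steps):
        scores = model(x)               # ordinary unconstrained forward pass
        y = scores.argmax(dim=-1)       # cheap unconstrained decoding
        if is_feasible(y):              # stop as soon as the output is valid
            break
        opt.zero_grad()
        violation(scores).backward()    # gradient of the soft violation...
        opt.step()                      # ...moves the weights toward feasibility
    return y                            # may still violate after max_steps
```

Because the weight copy is discarded after each input, the trained model itself is unchanged; the expensive discrete search over the feasible space is replaced by a handful of gradient steps per example.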
