Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings

06/20/2016
by Fereshte Khani, et al.

Can we train a system that, on any new input, either says "don't know" or makes a prediction that is guaranteed to be correct? We answer the question in the affirmative, provided our model family is well-specified. Specifically, we introduce the unanimity principle: only predict when all models consistent with the training data predict the same output. We operationalize this principle for semantic parsing, the task of mapping utterances to logical forms. We develop a simple, efficient method that reasons over the infinite set of all consistent models by only checking two of the models. We prove that our method obtains 100% precision even on adversarial input distributions. Empirically, we demonstrate the effectiveness of our approach on the standard GeoQuery dataset.
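The unanimity principle can be illustrated on a toy problem. Below is a minimal sketch, assuming a hypothetical finite model family of integer threshold classifiers (not the paper's semantic-parsing setup, and it enumerates all consistent models rather than using the paper's efficient two-model check): the system predicts only when every model consistent with the training data agrees, and otherwise abstains.

```python
# Illustrative sketch of the unanimity principle on a toy finite model
# family: threshold classifiers h_t(x) = 1 iff x >= t. This is NOT the
# paper's method for semantic parsing; it only demonstrates the
# "predict iff all consistent models agree" idea.

def consistent_models(train, thresholds):
    """Return every threshold t whose classifier fits all training examples."""
    return [t for t in thresholds
            if all((x >= t) == y for x, y in train)]

def unanimous_predict(x, train, thresholds=range(0, 11)):
    models = consistent_models(train, thresholds)
    preds = {x >= t for t in models}
    if len(preds) == 1:        # every consistent model gives the same label
        return preds.pop()     # guaranteed correct if the family is well-specified
    return None                # abstain: "don't know"

train = [(2, False), (7, True)]     # consistent thresholds: t in {3, ..., 7}
print(unanimous_predict(8, train))  # True: all consistent models say 1
print(unanimous_predict(1, train))  # False: all consistent models say 0
print(unanimous_predict(5, train))  # None: consistent models disagree
```

If the true model lies in the family, any unanimous prediction must match it, so precision is 100% and uncertainty surfaces only as abstention.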
