Towards Computationally Verifiable Semantic Grounding for Language Models

11/16/2022
by   Chris Alberti, et al.

We present an approach to semantic grounding of language models (LMs) that conceptualizes the LM as a conditional model generating text given a desired semantic message formalized as a set of entity-relationship triples. We embed the LM in an auto-encoder by feeding its output to a semantic parser whose output is in the same representation domain as the input message. Compared to a baseline that generates text using greedy search, we demonstrate two techniques that improve the fluency and semantic accuracy of the generated text: the first samples multiple candidate text sequences, from which the semantic parser chooses the best; the second trains the language model while keeping the semantic parser frozen, improving the semantic accuracy of the auto-encoder. We carry out experiments on the English WebNLG 3.0 dataset, using BLEU to measure the fluency of generated text and standard parsing metrics to measure semantic accuracy. We show that our proposed approaches significantly improve over the greedy-search baseline, and human evaluation corroborates the results of the automatic evaluation.
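The first technique above — sampling several candidates and letting the semantic parser pick the one whose parse best matches the input triples — can be sketched as a simple rerank loop. This is an illustrative reconstruction, not the paper's implementation: `parse_triples` stands in for a real semantic parser, candidates for real LM samples, and triple-level F1 is one plausible choice of matching score.

```python
# Sketch of parser-based reranking over sampled LM outputs.
# A "message" is a set of (subject, relation, object) triples, as in WebNLG.

def triple_f1(predicted, target):
    """F1 between two sets of entity-relationship triples."""
    predicted, target = set(predicted), set(target)
    if not predicted or not target:
        return 0.0
    tp = len(predicted & target)          # triples recovered exactly
    precision = tp / len(predicted)
    recall = tp / len(target)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def rerank_candidates(candidates, target_triples, parse_triples):
    """Return the candidate text whose parse best matches the input message.

    candidates     : list of text strings sampled from the LM (assumed given)
    target_triples : the input semantic message
    parse_triples  : callable mapping text -> set of triples (the parser)
    """
    return max(candidates,
               key=lambda text: triple_f1(parse_triples(text), target_triples))
```

With greedy search the model emits one sequence and any semantic error is final; sampling plus parser reranking trades extra decoding cost for a chance to recover a candidate that round-trips to the intended triples.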


