Hidden Schema Networks
Most modern language models infer representations that, albeit powerful, lack both compositionality and semantic interpretability. Starting from the assumption that a large proportion of semantic content is necessarily relational, we introduce a neural language model that discovers networks of symbols (schemata) from text datasets. Using a variational autoencoder (VAE) framework, our model encodes sentences into sequences of symbols (a composed representation), which correspond to the nodes visited by biased random walkers on a global latent graph. Sentences are then generated back, conditioned on the selected symbol sequences. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to train our model on language modeling tasks. Qualitatively, our results show that the model is able to infer schema networks encoding different aspects of natural language. Quantitatively, the model achieves state-of-the-art scores on VAE language modeling benchmarks. Source code to reproduce our experiments is available at https://github.com/ramsesjsf/HiddenSchemaNetworks
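To make the latent step concrete, the sketch below illustrates (in plain PyTorch) how a sentence-dependent bias could tilt a random walk over a learned latent graph to produce a sequence of symbol indices. This is a minimal illustration under assumed names and shapes (n_symbols, walk_len, the bias tensor), not the authors' implementation; in particular, the hard sampling shown here would need a differentiable relaxation for end-to-end VAE training.

```python
# Minimal sketch (not the authors' code) of the core latent step:
# sampling a symbol sequence as a biased random walk on a learned latent graph.
# All names and shapes (n_symbols, walk_len, bias) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BiasedRandomWalkSampler(nn.Module):
    """Samples sequences of symbol (node) indices from a global latent graph."""

    def __init__(self, n_symbols: int = 50, walk_len: int = 8):
        super().__init__()
        # Learnable logits defining the (soft) adjacency of the latent graph.
        self.adj_logits = nn.Parameter(torch.randn(n_symbols, n_symbols))
        self.start_logits = nn.Parameter(torch.zeros(n_symbols))
        self.walk_len = walk_len

    def forward(self, bias: torch.Tensor) -> torch.Tensor:
        """
        bias: (batch, n_symbols) sentence-dependent bias produced by an encoder,
              used to tilt the transition probabilities of the walk.
        Returns: (batch, walk_len) tensor of visited symbol indices.
        """
        # Sentence-conditioned start distribution over symbols.
        start_probs = F.softmax(self.start_logits + bias, dim=-1)
        node = torch.multinomial(start_probs, 1).squeeze(-1)      # (batch,)
        walk = [node]
        for _ in range(self.walk_len - 1):
            # Transitions from the current node, tilted by the sentence bias.
            logits = self.adj_logits[node] + bias                  # (batch, n_symbols)
            node = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze(-1)
            walk.append(node)
        return torch.stack(walk, dim=1)


# Usage: an encoder (e.g. BERT) would produce `bias`; the sampled symbol
# sequence would then condition a decoder (e.g. GPT-2) that reconstructs
# the sentence. Here we use a random stand-in for the encoder output.
sampler = BiasedRandomWalkSampler()
bias = torch.randn(4, 50)
symbols = sampler(bias)   # shape: (4, 8)
print(symbols.shape)
```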