Shaping representations through communication: community size effect in artificial learning systems

12/12/2019
by Olivier Tieleman, et al.
Google

Motivated by theories of language and communication that explain why communities with many speakers tend, on average, to have simpler and more regular languages, we cast the representation learning problem in terms of learning to communicate. Our starting point views the traditional autoencoder setup as a single encoder that must learn to communicate with a fixed decoder partner. Generalizing from there, we introduce community-based autoencoders, in which multiple encoders and decoders collectively learn representations by being randomly paired up on successive training iterations. We find that larger community sizes reduce idiosyncrasies in the learned codes, resulting in representations that better encode concept categories and correlate more closely with human feature norms.
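The random-pairing training loop at the core of community-based autoencoders is straightforward to sketch. Below is a minimal PyTorch sketch assuming MLP encoders and decoders, an MSE reconstruction loss, and a synthetic data batch; the community size, layer widths, optimizer, and data are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of community-based autoencoder training.
    # Assumed details (not from the paper): MLP modules, MSE loss,
    # random stand-in data, Adam optimizer.
    import random
    import torch
    import torch.nn as nn

    def mlp(in_dim, hidden, out_dim):
        return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, out_dim))

    community_size = 8                  # number of encoders and of decoders
    x_dim, z_dim, hidden = 784, 32, 256

    encoders = [mlp(x_dim, hidden, z_dim) for _ in range(community_size)]
    decoders = [mlp(z_dim, hidden, x_dim) for _ in range(community_size)]

    params = [p for m in encoders + decoders for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(1000):
        x = torch.randn(64, x_dim)      # stand-in for a real data batch
        # Re-pair the community: each iteration trains one randomly
        # chosen encoder together with one randomly chosen decoder.
        enc = random.choice(encoders)
        dec = random.choice(decoders)
        loss = loss_fn(dec(enc(x)), x)
        opt.zero_grad()
        loss.backward()
        opt.step()

Under this scheme no encoder can exploit the quirks of any single decoder; the pressure to produce codes that every partner can decode is what the paper connects to the community-size effect on language regularity.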
