Local and non-local dependency learning and emergence of rule-like representations in speech data by Deep Convolutional Generative Adversarial Networks

09/27/2020
by   Gašper Beguš, et al.

This paper argues that training GANs on local and non-local dependencies in speech data offers insights into how deep neural networks discretize continuous data and how symbolic-like rule-based morphophonological processes emerge in a deep convolutional architecture. Acquisition of speech has recently been modeled as a dependency between latent space and data generated by GANs in Beguš (arXiv:2006.03965), who models the learning of a simple local allophonic distribution. We extend this approach to test the learning of local and non-local phonological processes that include approximations of morphological processes. We further compare the model's outputs to the results of a behavioral experiment in which human subjects are trained on the same data used to train the GAN. Four main conclusions emerge: (i) the networks provide useful information for computational models of language acquisition even when trained on a comparatively small dataset from an artificial grammar learning experiment; (ii) local processes are easier to learn than non-local processes, which matches both behavioral data from human subjects and typology across the world's languages. This paper also proposes (iii) a method for actively observing the network's learning progress and exploring the effect of training steps on learned representations by keeping the latent space constant across different training steps. Finally, this paper shows that (iv) the network learns to encode the presence of a prefix with a single latent variable; by interpolating this variable, we can actively observe the operation of a non-local phonological process. The proposed technique for retrieving learned representations has general implications for our understanding of how GANs discretize continuous speech data and suggests that rule-like generalizations in the training data are represented as an interaction between variables in the network's latent space.
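The interpolation probe described in point (iv) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 100-dimensional uniform latent vector follows the WaveGAN convention the paper builds on, but the dimension index and interpolation range here are hypothetical placeholders, and the trained generator is only referenced in a comment.

```python
import numpy as np

def interpolate_single_variable(z, dim, values):
    """Return copies of latent vector z in which only z[dim] is swept over `values`."""
    vectors = []
    for v in values:
        z_new = z.copy()
        z_new[dim] = v
        vectors.append(z_new)
    return np.stack(vectors)

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=100)  # WaveGAN-style 100-dim latent vector

# Sweep one (hypothetical) dimension while holding every other dimension fixed.
sweep = interpolate_single_variable(z, dim=5, values=np.linspace(-4, 4, 9))

# Feeding each row to the trained generator (e.g., generator(sweep[i])) would
# attribute any change in the output audio to z[5] alone, since all other
# dimensions are identical across the sweep.
```

Because only one coordinate varies, any gradual appearance of the prefix (or of the non-local process it triggers) in the generated outputs can be read as the effect of that single latent variable.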


