Emergence of Separable Manifolds in Deep Language Representations

by Jonathan Mamou et al.

Deep neural networks (DNNs) have shown considerable empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations in the brain. DNNs have subsequently become a popular model class for inferring computational principles underlying complex cognitive functions, and in turn, they have also emerged as a natural testbed for applying methods originally developed to probe information in neural populations. In this work, we utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects the geometry of feature representations with the linear separability of classes, to analyze language representations from large-scale contextual embedding models. We explore representations from different model families (BERT, RoBERTa, GPT, etc.) and find evidence for the emergence of linguistic manifolds across layer depth (e.g., manifolds for part-of-speech tags), especially in ambiguous data (i.e., words with multiple part-of-speech tags, or part-of-speech classes including many words). In addition, we find that the emergence of linear separability in these manifolds is driven by a combined reduction of the manifolds' radius, dimensionality, and inter-manifold correlations.
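The abstract attributes the emergence of linear separability to three geometric quantities of class manifolds: radius, dimensionality, and inter-manifold correlation. The sketch below illustrates simple proxies for these quantities on synthetic point clouds; it is not the mean-field theoretic analysis used in the paper (whose definitions are derived from a replica calculation), just an intuition-building toy. All function names and the separability proxies here are illustrative choices, not the authors' code.

```python
import numpy as np

def manifold_geometry(points):
    """Toy geometric descriptors of a class manifold, i.e. a point cloud
    of feature vectors belonging to one class (e.g., one POS tag).
    These are simple proxies, not the mean-field theoretic definitions."""
    center = points.mean(axis=0)
    centered = points - center
    # Radius proxy: RMS distance of points from the manifold centroid.
    radius = np.sqrt((centered ** 2).sum(axis=1).mean())
    # Dimensionality proxy: participation ratio of the covariance spectrum,
    # (sum of eigenvalues)^2 / (sum of squared eigenvalues).
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(centered.T)), 0.0, None)
    dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()
    return center, radius, dim

def center_correlation(c1, c2):
    """Cosine similarity between two manifold centers: one simple proxy
    for the inter-manifold correlations the abstract refers to."""
    return float(c1 @ c2 / (np.linalg.norm(c1) * np.linalg.norm(c2)))

# Two synthetic "class manifolds" in a 50-d feature space, standing in
# for contextual embeddings of two word classes.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 50)) + rng.normal(0.0, 5.0, size=50)
b = rng.normal(0.0, 1.0, size=(200, 50)) + rng.normal(0.0, 5.0, size=50)

ca, ra, da = manifold_geometry(a)
cb, rb, db = manifold_geometry(b)
print(f"radius={ra:.2f}  dim={da:.1f}  center_corr={center_correlation(ca, cb):.2f}")
```

In this toy picture, shrinking the radius and participation-ratio dimensionality while decorrelating the class centers makes the point clouds easier to separate with a linear readout, which is the qualitative trend the paper reports across layer depth.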

