Discovering Latent Concepts Learned in BERT

by Fahim Dalvi, et al.

A large number of studies analyze deep neural network models and their ability to encode various linguistic and non-linguistic concepts, providing an interpretation of the inner mechanics of these models. However, the scope of these analyses is limited to pre-defined concepts that reinforce traditional linguistic knowledge and do not reveal how novel concepts are learned by the model. We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion, providing interpretations from the model's perspective. In this work, we study: i) what latent concepts exist in the pre-trained BERT model, ii) how the discovered latent concepts align with or diverge from the classical linguistic hierarchy, and iii) how the latent concepts evolve across layers. Our findings show that: i) a model learns novel concepts (e.g. animal categories and demographic groups) that do not strictly adhere to any pre-defined categorization (e.g. POS, semantic tags), ii) several latent concepts are based on multiple properties, which may include semantics, syntax, and morphology, iii) the lower layers in the model dominate in learning shallow lexical concepts while the higher layers learn semantic relations, and iv) the discovered latent concepts highlight potential biases learned in the model. We also release a novel BERT ConceptNet dataset (BCN) consisting of 174 concept labels and 1M annotated instances.
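The unsupervised discovery described above can be sketched as a clustering pipeline: take per-token contextual representations from one layer of the model, group them into clusters, and treat each cluster's word group as a candidate latent concept to inspect and annotate. The sketch below is a minimal, hypothetical illustration of that idea; random vectors stand in for actual BERT hidden states, and the clustering choice (agglomerative) is an assumption, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in for per-token contextual embeddings from one model layer.
# In a real pipeline these would be hidden states of a pre-trained
# BERT for every token occurrence in a corpus (hypothetical setup).
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(50, 16)),  # pseudo-group A
    rng.normal(loc=5.0, scale=0.1, size=(50, 16)),  # pseudo-group B
])
tokens = [f"tok_{i}" for i in range(100)]

# Group token representations into candidate latent concepts.
clustering = AgglomerativeClustering(n_clusters=2).fit(embeddings)

# Collect the tokens belonging to each cluster; these word groups are
# what would be inspected or annotated as "latent concepts".
concepts = {}
for token, label in zip(tokens, clustering.labels_):
    concepts.setdefault(int(label), []).append(token)

for label, members in sorted(concepts.items()):
    print(f"concept {label}: {len(members)} tokens")
```

Repeating this per layer is what enables the cross-layer comparison the paper reports (lexical concepts early, semantic relations later).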


