The Unstoppable Rise of Computational Linguistics in Deep Learning

by James Henderson et al.
Idiap Research Institute

In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding.


Related research

Natural Language Understanding with the Quora Question Pairs Dataset
This paper explores the task of Natural Language Understanding (NLU) by looking...

Multi-task learning to improve natural language understanding
Recent advancements in sequence-to-sequence neural network architectures...

Towards More Robust Natural Language Understanding
Natural Language Understanding (NLU) is a branch of Natural Language Processing...

Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
Fast expansion of natural language functionality of intelligent virtual ...

Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
Deep learning models have achieved remarkable success in natural language...

Why Robust Natural Language Understanding is a Challenge
With the proliferation of Deep Machine Learning into real-life applications...

HyperMixer: An MLP-based Green AI Alternative to Transformers
Transformer-based architectures are the model of choice for natural language...