Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

01/23/2017
by Noam Shazeer et al.

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
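
The core mechanism described in the abstract, a trainable gating network that selects a sparse weighted combination of feed-forward experts for each example, can be sketched roughly as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the layer sizes, the choice of k, and the plain softmax-over-top-k gating (the paper additionally uses tunable noise and load-balancing terms, omitted here) are simplifying assumptions.

```python
# Minimal sketch of a sparsely-gated Mixture-of-Experts layer with top-k gating.
# Assumptions for illustration only: small expert MLPs, k=2, no gating noise or
# load-balancing loss as used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int, k: int = 2):
        super().__init__()
        self.k = k
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        # Trainable gating network producing one logit per expert.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Keep only the k highest-scoring experts per example.
        logits = self.gate(x)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)  # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e  # examples routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: route a batch of 8 vectors through 16 experts, 2 active per example.
moe = SparseMoE(d_model=32, d_hidden=64, num_experts=16, k=2)
y = moe(torch.randn(8, 32))
print(y.shape)  # torch.Size([8, 32])
```

Because only k experts run for each example, the parameter count grows with the number of experts while the per-example computation stays roughly constant, which is the capacity-versus-compute trade-off the abstract describes.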


Related research

10/11/2022  Mixture of Attention Heads: Selecting Attention Heads Per Token
Mixture-of-Experts (MoE) networks have been proposed as an efficient way...

07/14/2018  Recurrent Stacking of Layers for Compact Neural Machine Translation Models
In Neural Machine Translation (NMT), the most common practice is to stac...

06/28/2014  Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning
Many state-of-the-art results obtained with deep networks are achieved w...

11/13/2018  Modular Networks: Learning to Decompose Neural Computation
Scaling model capacity has been vital in the success of deep learning. F...

06/18/2021  Recurrent Stacking of Layers in Neural Networks: An Application to Neural Machine Translation
In deep neural network modeling, the most common practice is to stack a ...

11/18/2022  Who Says Elephants Can't Run: Bringing Large Scale MoE Models into Cloud Scale Production
Mixture of Experts (MoE) models with conditional execution of sparsely a...

06/30/2020  GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding
Neural network scaling has been critical for improving the model quality...
