Deep Learning: Computational Aspects

08/26/2018
by Nicholas Polson, et al.

In this article we review computational aspects of Deep Learning (DL). Deep learning uses network architectures consisting of hierarchical layers of latent variables to construct predictors for high-dimensional input-output models. Training a deep learning architecture is computationally intensive, and efficient linear algebra libraries are key to both training and inference. Stochastic gradient descent (SGD) optimization and mini-batch sampling are used to learn from massive data sets.
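As a minimal illustration of the SGD-with-mini-batch idea the abstract mentions (this sketch is not taken from the paper; the function name sgd_least_squares, the least-squares objective, and all hyperparameters are illustrative assumptions), the following NumPy code samples a fresh mini-batch at each step and takes a gradient step on the batch loss:

import numpy as np

def sgd_least_squares(X, y, lr=0.01, batch_size=32, epochs=10, seed=0):
    """Fit w to minimize ||Xw - y||^2 via mini-batch SGD (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Shuffle once per epoch, then walk through disjoint mini-batches.
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # Gradient of the batch loss (1/|B|) * ||Xb w - yb||^2.
            grad = 2.0 / len(idx) * Xb.T @ (Xb @ w - yb)
            w -= lr * grad
    return w

# Usage: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
true_w = np.arange(1.0, 6.0)
y = X @ true_w + 0.1 * rng.normal(size=1000)
print(sgd_least_squares(X, y, lr=0.05, epochs=50))

The same loop structure carries over to deep networks, where the per-batch gradient is computed by backpropagation instead of the closed-form least-squares gradient used here.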
