A Statistical Theory of Deep Learning via Proximal Splitting

09/20/2015
by Nicholas G. Polson, et al.

In this paper we develop a statistical theory and an implementation of deep learning models. We show that an elegant variable splitting scheme for the alternating direction method of multipliers (ADMM) optimises a deep learning objective. We allow for non-smooth, non-convex regularisation penalties to induce sparsity in the parameter weights. We provide a link between traditional shallow-layer statistical models, such as principal components and sliced inverse regression, and deep-layer models. We also define the degrees of freedom of a deep learning predictor and a predictive MSE criterion to perform model selection when comparing architecture designs. We focus on deep multi-class logistic learning, although our methods apply more generally. Our results suggest an interesting and previously under-exploited relationship between deep learning and proximal splitting techniques. To illustrate our methodology, we provide a multi-class logit classification analysis of Fisher's Iris data and demonstrate the convergence of our algorithm. Finally, we conclude with directions for future research.
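The variable-splitting idea behind the paper can be illustrated on a single linear layer. The sketch below is an assumption for illustration only, not the paper's deep architecture or algorithm: it applies ADMM with the splitting w = z to an l1-penalised least-squares layer, alternating a smooth sub-problem solved in closed form, a non-smooth sub-problem handled by the soft-thresholding proximal operator, and a dual update. The paper extends this kind of splitting across the layers of a deep model and to non-convex penalties.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(X, y, lam=0.5, rho=1.0, n_iter=100):
    # ADMM with the splitting w = z for: min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1
    n, p = X.shape
    w, z, u = np.zeros(p), np.zeros(p), np.zeros(p)
    A = X.T @ X + rho * np.eye(p)   # reused by every smooth w-update
    Xty = X.T @ y
    for _ in range(n_iter):
        w = np.linalg.solve(A, Xty + rho * (z - u))  # smooth (quadratic) sub-problem
        z = soft_threshold(w + u, lam / rho)         # non-smooth sub-problem via prox
        u = u + w - z                                # scaled dual (multiplier) update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 10))
    w_true = np.zeros(10)
    w_true[:3] = [2.0, -1.0, 0.5]
    y = X @ w_true + 0.1 * rng.standard_normal(50)
    print(np.round(admm_lasso(X, y), 2))  # sparse estimate close to w_true

The splitting isolates the non-smooth penalty in its own sub-problem, so the only place the penalty enters is through its proximal operator; swapping in a different (possibly non-convex) penalty only changes the soft_threshold step.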


