Stochastic Descent Analysis of Representation Learning Algorithms

by Richard M. Golden, et al.
The University of Texas at Dallas

Although stochastic approximation learning methods have been widely used in the machine learning literature for over 50 years, formal theoretical analyses of specific machine learning algorithms are less common because stochastic approximation theorems typically rest on assumptions that are difficult to communicate and verify. This paper presents a new stochastic approximation theorem for state-dependent noise with easily verifiable assumptions, applicable to the analysis and design of important deep learning algorithms including adaptive learning, contrastive divergence learning, stochastic descent expectation maximization, and active learning.
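To make the setting concrete, the classical Robbins-Monro stochastic approximation scheme underlying stochastic descent can be sketched as below. This is a minimal illustration, not the paper's method: the decaying step-size schedule `a/(b+t)`, the toy quadratic objective, and all function names are assumptions chosen for the example.

```python
import numpy as np

def stochastic_descent(noisy_grad, w0, n_steps=5000, a=1.0, b=10.0, seed=0):
    """Robbins-Monro iteration: w_{t+1} = w_t - eta_t * g_t, where g_t is a
    noisy gradient estimate and eta_t = a/(b+t) satisfies the classical
    conditions sum eta_t = inf and sum eta_t^2 < inf."""
    rng = np.random.default_rng(seed)
    w = float(w0)
    for t in range(n_steps):
        eta = a / (b + t)          # decaying learning rate
        w = w - eta * noisy_grad(w, rng)
    return w

# Toy example: minimize E[(w - x)^2] / 2 with x ~ N(3, 1).
# The stochastic gradient is (w - x); the minimizer is w* = 3.
noisy_grad = lambda w, rng: w - rng.normal(3.0, 1.0)
w_star = stochastic_descent(noisy_grad, w0=0.0)
```

With this schedule the iterates converge to the minimizer of the expected loss despite the per-step noise, which is the behavior the theorem in the paper formalizes under more general state-dependent noise.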



Related Research

- Formalization of a Stochastic Approximation Theorem
- A scaled Bregman theorem with applications
- Nonparametric adaptive active learning under local smoothness condition
- Compensation Learning
- Computations in Stochastic Acceptors
- Synbols: Probing Learning Algorithms with Synthetic Datasets
- Data-driven Algorithm Selection and Parameter Tuning: Two Case Studies in Optimization and Signal Processing
