Optimal Learning from the Doob-Dynkin lemma

01/03/2018
by   Gunnar Taraldsen, et al.

The Doob-Dynkin Lemma gives conditions on two functions X and Y that ensure the existence of a function ϕ such that X = ϕ ∘ Y. This communication proves different versions of the Doob-Dynkin Lemma and shows how it relates to optimal statistical learning algorithms.

Keywords and phrases: Improper prior, Descriptive set theory, Conditional Monte Carlo, Fiducial, Machine learning, Complex data.
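To make the statement concrete, here is a minimal numerical sketch (not from the paper; the choice of ϕ and the tabulation approach are illustrative assumptions). When X is constructed as a function of Y — so X is σ(Y)-measurable — the lemma guarantees a map ϕ with X = ϕ ∘ Y, and for discrete Y we can recover ϕ simply by tabulating the X value observed alongside each Y value. This is the link to optimal learning: the best predictor of X given Y is itself a function of Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Y: a discrete random variable; X is sigma(Y)-measurable by construction,
# so the Doob-Dynkin lemma guarantees some phi with X = phi(Y).
y = rng.integers(0, 5, size=1000)
x = y**2 + 1  # X = phi(Y) with phi(t) = t^2 + 1 (an illustrative choice)

# Recover phi by tabulating the X value observed for each Y value.
phi = {}
for yi, xi in zip(y, x):
    phi.setdefault(int(yi), int(xi))

# Check: the tabulated map reproduces X exactly on every sample.
assert all(phi[int(yi)] == xi for yi, xi in zip(y, x))
print(phi)
```

If X were *not* σ(Y)-measurable (e.g. X carried noise independent of Y), no such exact ϕ would exist, and the tabulation above would find conflicting X values for the same Y; the statistical-learning analogue is then the conditional expectation E[X | Y], which is again a function of Y.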

