Bias-Variance Trade-off and Overlearning in Dynamic Decision Problems

11/18/2020
by A. Max Reppen, et al.

Modern Monte Carlo-type approaches to dynamic decision problems face the classical bias-variance trade-off. Deep neural networks can overlearn the training data and construct feedback actions that are non-adapted to the information flow, and hence become susceptible to generalization error. We prove asymptotic overlearning for fixed training sets, but we also provide a non-asymptotic upper bound on overperformance, based on Rademacher complexity, that demonstrates the convergence of these algorithms for sufficiently large training sets. Numerical studies of stylized examples illustrate these possibilities, the dependence on the dimension, and the effectiveness of the approach.
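To make the overlearning phenomenon concrete, here is a minimal, hypothetical sketch (not the paper's code or examples). It trains a small policy network on a fixed Monte Carlo sample of a toy one-step decision problem: observe X0, pick an action a(X0), and receive reward -(a(X0) - X1)^2, where X1 = X0 + noise. The best adapted policy is a(X0) = X0, with expected reward equal to minus the noise variance. The problem setup, network architecture, and training schedule below are all illustrative assumptions.

```python
# Hypothetical illustration of overlearning on a fixed training set.
# A flexible network can interpolate the realized noise in the sample,
# pushing its in-sample reward above the true optimum ("overperformance")
# while its out-of-sample reward is strictly worse.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n):
    x0 = torch.randn(n, 1)              # observed state
    x1 = x0 + 0.5 * torch.randn(n, 1)   # future state: x0 plus unseen noise
    return x0, x1

def avg_reward(policy, x0, x1):
    # reward = -(a(x0) - x1)^2; the best adapted policy a(x0) = x0
    # attains expected reward -0.25 (the noise variance).
    return -((policy(x0) - x1) ** 2).mean().item()

for n_train in [50, 500, 5000]:
    x0_tr, x1_tr = simulate(n_train)    # fixed training set
    x0_te, x1_te = simulate(100_000)    # fresh out-of-sample paths

    policy = nn.Sequential(
        nn.Linear(1, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(3000):               # train toward interpolation
        opt.zero_grad()
        loss = ((policy(x0_tr) - x1_tr) ** 2).mean()
        loss.backward()
        opt.step()

    print(f"n={n_train:5d}  in-sample: {avg_reward(policy, x0_tr, x1_tr):+.3f}  "
          f"out-of-sample: {avg_reward(policy, x0_te, x1_te):+.3f}  (optimum: -0.250)")
```

For small n, the in-sample reward overshoots the -0.25 optimum because the fitted action implicitly exploits the realized future noise, exactly the non-adapted behavior the abstract describes; as n grows, both rewards approach the optimum, consistent with a complexity-based bound on overperformance.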
