The Generalization Ability of Online Algorithms for Dependent Data

10/11/2011
by Alekh Agarwal, et al.

We study the generalization performance of online learning algorithms trained on samples coming from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret, an easily computable statistic of its online performance, when the underlying ergodic process is β- or ϕ-mixing. We prove high-probability error bounds when the loss function is convex, and we establish sharp convergence rates and deviation bounds for strongly convex losses and several linear prediction problems such as linear and logistic regression, least-squares SVM, and boosting on dependent data. In addition, our results have straightforward applications to stochastic optimization with dependent data, and our analysis requires only martingale convergence arguments; we need not rely on more powerful statistical tools such as empirical process theory.
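As a rough illustration of this setting (a minimal sketch, not from the paper), the code below runs online gradient descent on a synthetic AR(1) feature stream, a simple geometrically β-mixing source, and compares the algorithm's average regret to the held-out risk of its averaged iterate. The generator ar_stream, the AR coefficient 0.5, the noise level, and the step size 0.1/√t are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_star = rng.normal(size=d) / np.sqrt(d)  # fixed target weights

def ar_stream(T, rho=0.5):
    """AR(1) feature process x_t = rho * x_{t-1} + noise: a simple
    geometrically beta-mixing source standing in for dependent data."""
    x = rng.normal(size=d)
    for _ in range(T):
        x = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=d)
        yield x, x @ w_star + 0.1 * rng.normal()

T = 5000
w = np.zeros(d)
w_avg = np.zeros(d)
losses, xs, ys = [], [], []
for t, (x, y) in enumerate(ar_stream(T), start=1):
    losses.append(0.5 * (x @ w - y) ** 2)        # loss incurred before the update
    w = w - 0.1 / np.sqrt(t) * (x @ w - y) * x   # OGD step, eta_t = 0.1/sqrt(t)
    w_avg += w
    xs.append(x)
    ys.append(y)
w_avg /= T

# Average regret against the best fixed predictor in hindsight.
X, Y = np.array(xs), np.array(ys)
w_best = np.linalg.lstsq(X, Y, rcond=None)[0]
avg_regret = np.mean(losses) - 0.5 * np.mean((X @ w_best - Y) ** 2)

# Held-out risk of the averaged iterate on a fresh stretch of the same
# dependent stream: the generalization error that, per the abstract,
# concentrates around the regret for mixing sources.
risk = np.mean([0.5 * (x @ w_avg - y) ** 2 for x, y in ar_stream(2000)])
print(f"average regret: {avg_regret:.4f}  held-out risk: {risk:.4f}")
```

The sketch only mirrors the protocol: each round's loss is recorded before the update, regret is measured against the hindsight least-squares solution, and the averaged iterate plays the role of the output predictor in the online-to-batch conversion.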

Related research

05/11/2013 · On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions
In this paper, we study the generalization properties of online learning...

05/13/2013 · Boosting with the Logistic Loss is Consistent
This manuscript provides optimization guarantees, generalization bounds,...

01/22/2013 · Online Learning with Pairwise Loss Functions
Efficient online learning with pairwise loss functions is a crucial comp...

08/15/2023 · High-Probability Risk Bounds via Sequential Predictors
Online learning methods yield sequential regret bounds under minimal ass...

07/18/2012 · Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions
We develop and analyze stochastic optimization algorithms for problems i...

11/12/2020 · Towards Optimal Problem Dependent Generalization Error Bounds in Statistical Learning Theory
We study problem-dependent rates, i.e., generalization errors that scale...

02/10/2020 · Stochastic Online Optimization using Kalman Recursion
We study the Extended Kalman Filter in constant dynamics, offering a bay...
