Online PAC-Bayes Learning

05/31/2022
by   Maxime Haddouche, et al.

Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction. This somewhat departs from many contemporary learning problems where data streams are collected and the algorithms must dynamically adjust. We prove new PAC-Bayesian bounds in this online learning framework, leveraging an updated definition of regret, and we revisit classical PAC-Bayesian results with a batch-to-online conversion, extending their remit to the case of dependent data. Our results hold for bounded losses, potentially non-convex, paving the way to promising developments in online learning.
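To make the batch-versus-online contrast concrete, the sketch below evaluates the classical McAllester-style batch PAC-Bayes bound for losses in [0, 1] — the kind of bound a batch-to-online conversion would revisit — on samples of growing size. This is an illustrative example, not the paper's own bound; the Gaussian posterior/prior pair and all numerical values are assumptions chosen so the KL term has a closed form.

```python
import math

def mcallester_bound(kl, n, delta):
    """McAllester-style batch PAC-Bayes bound on the gap between
    population risk and empirical risk, for losses bounded in [0, 1]:
        gap <= sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)),
    holding with probability at least 1 - delta over an i.i.d. sample.
    """
    return math.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL divergence KL(Q||P) between 1-D Gaussians."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

# Streaming flavour (illustrative): as data arrives, the posterior mean
# drifts away from the prior and the bound is re-evaluated on the sample
# seen so far; the bound tightens as n grows.
kl = gaussian_kl(mu_q=0.3, sigma_q=1.0, mu_p=0.0, sigma_p=1.0)
for n in (100, 1000, 10000):
    print(n, round(mcallester_bound(kl, n, delta=0.05), 4))
```

Note that this batch bound assumes i.i.d. data; handling dependent, streamed data is precisely where the online PAC-Bayes results of the abstract depart from it.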


Related research

06/07/2023 · Learning via Wasserstein-Based High Probability Generalisation Bounds
Minimising upper bounds on the population risk or the generalisation gap...

10/03/2022 · PAC-Bayes with Unbounded Losses through Supermartingales
While PAC-Bayes is now an established learning framework for bounded los...

02/14/2012 · PAC-Bayesian Policy Evaluation for Reinforcement Learning
Bayesian priors offer a compact yet general means of incorporating domai...

04/05/2011 · Online and Batch Learning Algorithms for Data with Missing Features
We introduce new online and batch algorithms that are robust to data wit...

10/10/2019 · PAC-Bayesian Contrastive Unsupervised Representation Learning
Contrastive unsupervised representation learning (CURL) is the state-of-...

02/11/2022 · Controlling Confusion via Generalisation Bounds
We establish new generalisation bounds for multiclass classification by ...

08/21/2015 · Adaptive Online Learning
We propose a general framework for studying adaptive regret bounds in th...
