Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization

11/27/2021
by Thanh Nguyen-Tang, et al.

Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence of and recent interest in this problem, its theoretical and algorithmic foundations in function-approximation settings remain under-developed. In this paper, we study this problem along the axes of distributional shift, optimization, and generalization in offline contextual bandits with neural networks. In particular, we propose a provably efficient offline contextual bandit algorithm with neural network function approximation that does not require any functional assumption on the reward. We show that our method provably generalizes over unseen contexts under a milder distributional-shift condition than existing OPL works. Notably, unlike other OPL methods, our method learns from the offline data in an online manner using stochastic gradient descent, allowing us to bring the benefits of online learning into the offline setting. Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than its online counterpart. Finally, we demonstrate the empirical effectiveness of our method on a range of synthetic and real-world OPL problems.
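The abstract describes a general recipe: fit a neural reward model to the logged data with streaming SGD updates, then act pessimistically via a lower confidence bound. The sketch below is only a minimal illustration of that recipe under stated assumptions, not the paper's algorithm; the network architecture, the gradient-norm uncertainty proxy scaled by `beta`, and the names `RewardNet`, `train_offline`, and `pessimistic_action` are all hypothetical choices made here for illustration.

```python
# Illustrative sketch (not the authors' method): offline contextual bandit
# learning with a neural reward model trained by a single online-style SGD
# pass over logged data, plus a pessimistic (lower-confidence-bound) rule.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Small MLP predicting the reward of a (context, action) feature vector."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


def train_offline(model, logged_data, lr=1e-2):
    """One streaming pass of SGD over logged (features, reward) pairs,
    mimicking an online update schedule on offline data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for feats, reward in logged_data:          # each logged interaction used once
        loss = (model(feats) - reward) ** 2    # squared loss on the observed reward
        opt.zero_grad()
        loss.backward()
        opt.step()


def pessimistic_action(model, action_feats, beta=0.1):
    """Pick the action maximizing a crude lower confidence bound:
    predicted reward minus beta times the gradient norm, used here as a
    stand-in for the uncertainty width in pessimism-based methods."""
    scores = []
    for feats in action_feats:
        pred = model(feats)
        grads = torch.autograd.grad(pred, model.parameters())
        width = torch.sqrt(sum((g ** 2).sum() for g in grads))
        scores.append((pred - beta * width).item())
    return int(torch.tensor(scores).argmax())
```

With hypothetical logged data given as feature/reward tensor pairs, one would call `train_offline(model, logged_data)` once over the dataset and then, at deployment, score each candidate action's feature vector with `pessimistic_action`, so that actions poorly covered by the offline data are penalized rather than optimistically chosen.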

Related research

- Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support (07/24/2021)
- Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability (03/28/2020)
- Oracle-Efficient Pessimism: Offline Policy Optimization in Contextual Bandits (06/13/2023)
- The Deep Bootstrap: Good Online Learners are Good Offline Generalizers (10/16/2020)
- Coupling Online-Offline Learning for Multi-distributional Data Streams (02/12/2022)
- Optimal Learning for Sequential Decision Making for Expensive Cost Functions with Stochastic Binary Feedbacks (09/13/2017)
- Offline Contextual Bandits for Wireless Network Optimization (11/11/2021)
