Online Allocation and Pricing: Constant Regret via Bellman Inequalities

06/14/2019
by   Alberto Vera, et al.

We develop a framework for designing tractable heuristics for Markov Decision Processes (MDPs), and use it to obtain constant-regret policies for a variety of online allocation problems, including online packing, budget-constrained probing, dynamic pricing, and online contextual bandits with knapsacks. Our approach is based on adaptively constructing a benchmark for the value function, which we then use to select our actions. The centerpiece of our framework is the Bellman Inequalities, which allow us to create benchmarks that both have access to future information and can violate the one-step optimality equations (i.e., the Bellman equations). The flexibility in balancing these two relaxations lets us obtain policies that are both tractable and have strong performance guarantees; in particular, our constant-regret policies only require solving an LP to select each action.
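To make the LP-based action selection concrete, below is a minimal sketch of a re-solving heuristic for a simple online packing instance. It is in the spirit of the abstract but is not the paper's exact algorithm: the instance parameters (rewards, sizes, probs, budget), the fluid LP relaxation, and the 0.5 acceptance threshold are all illustrative assumptions, with the LP solved via scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical online packing instance (illustrative values, not from the paper).
rng = np.random.default_rng(0)
T = 1000                               # horizon: number of arriving requests
rewards = np.array([1.0, 2.0, 4.0])    # reward r_j for accepting a type-j request
sizes = np.array([1.0, 1.0, 1.0])      # resource consumed by a type-j request
probs = np.array([0.5, 0.3, 0.2])      # arrival probability of each type
budget = 300.0                         # total resource budget B

def solve_fluid_lp(remaining_budget, remaining_periods):
    """Fluid LP benchmark: choose expected acceptances x_j to maximize reward,
    subject to the remaining budget and 0 <= x_j <= expected remaining
    type-j arrivals."""
    ub = probs * remaining_periods
    res = linprog(
        c=-rewards,                    # linprog minimizes, so negate rewards
        A_ub=sizes.reshape(1, -1),
        b_ub=[remaining_budget],
        bounds=list(zip(np.zeros_like(ub), ub)),
    )
    return res.x

total_reward = 0.0
for t in range(T):
    j = rng.choice(len(probs), p=probs)          # observe the arriving type
    x = solve_fluid_lp(budget, T - t)            # re-solve the LP each period
    expected_arrivals = probs[j] * (T - t)
    # Threshold rule (an assumption): accept if the LP plans to accept at
    # least half of the remaining type-j arrivals and the budget allows it.
    if expected_arrivals > 0 and x[j] / expected_arrivals >= 0.5 and sizes[j] <= budget:
        budget -= sizes[j]
        total_reward += rewards[j]

print(f"collected reward: {total_reward:.1f}, leftover budget: {budget:.1f}")
```

Re-solving the LP from the current state at every step is what gives this style of policy its adaptivity; the paper's contribution is showing that policies built on such benchmarks can achieve constant (horizon-independent) regret.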

