Online Pricing with Offline Data: Phase Transition and Inverse Square Law

10/19/2019
by Jinzhi Bu, et al.

This paper investigates the impact of pre-existing offline data on online learning in the context of dynamic pricing. We study a single-product dynamic pricing problem over a selling horizon of T periods. The demand in each period is determined by the product's price according to a linear demand model with unknown parameters. We assume that an incumbent price has been tested for n periods in the offline stage before the start of the selling horizon, and that the seller has collected n demand observations under the incumbent price. The seller wants to utilize both the pre-existing offline data and the sequential online data to minimize the regret of the online learning process.

In the well-separated case, where the absolute difference δ between the incumbent price and the optimal price is bounded below by a known constant, we prove that the best achievable regret is Θ̃(√T ∧ (T/n ∨ log T)), and we show that certain variants of the greedy policy achieve this bound. In the general case, where δ is not necessarily bounded below by a known constant, we prove that the best achievable regret is Θ̃(√T ∧ (T/(nδ^2) ∨ (log T)/δ^2)), and we construct a learning algorithm based on the "optimism in the face of uncertainty" principle whose regret is optimal up to a logarithmic factor.

In both cases, our results reveal surprising transitions of the optimal regret rate with respect to the size n of the offline data, which we refer to as phase transitions. In addition, our results demonstrate that the shape of the offline data, measured by δ, has an intrinsic effect on the optimal regret, and we quantify this effect via the inverse-square law.
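
To make the setting concrete, below is a minimal Python sketch of the plain greedy (certainty-equivalent) baseline the abstract refers to, under the standard linear demand assumption d = α + βp + ε. This is illustrative only: the names greedy_pricing, demand_fn, and price_range are our own, and the paper analyzes carefully constructed variants of the greedy policy rather than this exact implementation.

```python
import numpy as np

def greedy_pricing(offline_prices, offline_demands, T, demand_fn,
                   price_range=(0.0, 10.0), seed=0):
    """Certainty-equivalent (greedy) pricing with offline data.

    Each period, fit the linear demand model d = alpha + beta * p by
    least squares on all data observed so far, then charge the price
    maximizing the estimated revenue p * (alpha + beta * p), namely
    p = -alpha / (2 * beta) when beta < 0.
    """
    rng = np.random.default_rng(seed)
    prices = list(offline_prices)
    demands = list(offline_demands)
    lo, hi = price_range
    total_revenue = 0.0
    for _ in range(T):
        X = np.column_stack([np.ones(len(prices)), prices])
        # With a single tested incumbent price the design matrix is
        # rank-deficient; lstsq then returns the minimum-norm solution.
        # The paper's greedy *variants* exist precisely to cope with
        # this lack of price dispersion in the offline data.
        (alpha_hat, beta_hat), *_ = np.linalg.lstsq(
            X, np.asarray(demands), rcond=None)
        if beta_hat < 0:
            p = float(np.clip(-alpha_hat / (2.0 * beta_hat), lo, hi))
        else:
            p = rng.uniform(lo, hi)  # estimate unusable: explore instead
        d = demand_fn(p)             # observe noisy demand at price p
        prices.append(p)
        demands.append(d)
        total_revenue += p * d
    return total_revenue

# Example: true demand d = 10 - 2p + noise, so the optimal price is 2.5.
# The incumbent price 3.0 was tested n = 20 times offline (so delta = 0.5).
rng = np.random.default_rng(1)
demand = lambda p: 10.0 - 2.0 * p + rng.normal(0.0, 0.5)
offline_prices = [3.0] * 20
offline_demands = [demand(3.0) for _ in range(20)]
print(greedy_pricing(offline_prices, offline_demands, T=1000, demand_fn=demand))
```

Note the design consequence visible even in this toy version: because all offline observations sit at one price, the first least-squares fit is degenerate, which is exactly why unmodified greedy can fail and why the paper studies variants of it.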
