Learning to Order for Inventory Systems with Lost Sales and Uncertain Supplies

by   Boxiao Chen, et al.

We consider a stochastic lost-sales inventory control system with lead time L over a planning horizon of T periods. Supply is uncertain and is a function of the order quantity (because of random yield, capacity constraints, etc.). We aim to minimize the T-period cost, a problem that is known to be computationally intractable even when the demand and supply distributions are known. In this paper, we assume that both the demand and supply distributions are unknown and develop a computationally efficient online learning algorithm. We show that our algorithm achieves a regret (i.e., the performance gap between the cost of our algorithm and that of an optimal policy over T periods) of O(L+√T) when L ≥ log(T). We do so by 1) showing that, for any L ≥ 0, the cost of our algorithm exceeds that of an optimal constant-order policy under complete information (a well-known and widely used policy) by at most O(L+√T), and 2) leveraging the latter's known performance guarantee from the existing literature. To the best of our knowledge, a finite-sample O(√T) (and polynomial in L) regret bound benchmarked against an optimal policy was not previously known in the online inventory control literature. A key challenge in this learning problem is that both demand and supply data can be censored, so only truncated values are observable. We circumvent this challenge by showing that the data generated under an order quantity q^2 allow us to simulate the performance not only of q^2 but also of every q^1 < q^2, a key observation that yields sufficient information even under data censoring. By establishing a high-probability coupling argument, we are able to evaluate and compare the performance of different order policies at their steady states within a finite time horizon. Because the problem lacks convexity, we develop an active elimination method that adaptively rules out suboptimal solutions.
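The simulability observation can be illustrated with a minimal sketch. Assume (purely for illustration; the paper's supply model is more general) a capacity-style supply model in which ordering q in period t delivers min(q, C_t) for a random capacity C_t. Then the censored deliveries observed under a larger order q^2 are enough to reconstruct exactly what would have been delivered under any smaller order q^1, since min(q^1, min(q^2, C_t)) = min(q^1, C_t) whenever q^1 < q^2. All function and variable names below are hypothetical:

```python
import random

def simulate_deliveries(q, capacities):
    # Delivery under an (assumed) capacity model: receive min(q, C_t) each period.
    return [min(q, c) for c in capacities]

random.seed(0)
capacities = [random.randint(0, 10) for _ in range(1000)]  # latent, never observed
q2, q1 = 7, 4  # q1 < q2

# Observable (censored) data generated while ordering q2 every period.
obs_q2 = simulate_deliveries(q2, capacities)

# Key observation: deliveries under the smaller order q1 are recoverable
# by re-censoring the q2 data, with no access to the latent capacities.
replayed_q1 = [min(q1, d) for d in obs_q2]

# Ground truth for comparison (uses the latent capacities directly).
true_q1 = simulate_deliveries(q1, capacities)
assert replayed_q1 == true_q1
```

The identity holds case by case: if the observed delivery d < q2, then C_t = d, so min(q1, C_t) = min(q1, d); if d = q2, then C_t ≥ q2 > q1, so both sides equal q1. This is what lets a learner ordering a large quantity evaluate all smaller constant-order policies from the same censored data stream.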


