Thompson Sampling for Combinatorial Network Optimization in Unknown Environments

07/07/2019
by Alihan Hüyük et al.

Influence maximization, item recommendation, adaptive routing, and dynamic spectrum allocation all require choosing the right action from a large set of alternatives. Thanks to advances in combinatorial optimization, these and many similar problems can be solved efficiently, provided that the stochastic behavior of the environment is perfectly known. In this paper, we take this one step further and focus on combinatorial optimization in unknown environments. All of these settings fit into the general combinatorial learning framework called combinatorial multi-armed bandit with probabilistically triggered arms. We consider a very powerful Bayesian algorithm, Combinatorial Thompson Sampling (CTS), and analyze its regret under the semi-bandit feedback model. Assuming that the learner does not know the expected base arm outcomes beforehand but has access to an exact oracle, we show that when the expected reward is Lipschitz continuous in the expected base arm outcomes, CTS achieves O(∑_{i=1}^m log T / (p_i Δ_i)) regret, where m denotes the number of base arms, p_i denotes the minimum non-zero triggering probability of base arm i, Δ_i denotes the minimum suboptimality gap of base arm i, and T denotes the time horizon. In addition, we prove that when the triggering probabilities are at least p^* > 0 for all base arms, CTS achieves O((1/p^*) log(1/p^*)) regret independent of the time horizon. We also numerically compare CTS with algorithms that use the principle of optimism in the face of uncertainty on several combinatorial networking problems, and show that CTS outperforms these algorithms by at least an order of magnitude in the majority of cases.
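The CTS loop described above is simple to state: sample base arm means from a posterior, hand the sample to the exact oracle, play the returned super-arm, and update only the posteriors of the base arms that were triggered. Below is a minimal sketch in Python, assuming Bernoulli base-arm outcomes with independent Beta posteriors; the `oracle` function (mapping sampled means to a super-arm) and the `env.play` feedback interface are hypothetical placeholders for illustration, not the paper's code.

```python
import numpy as np

def cts(oracle, env, m, T, seed=0):
    """Sketch of Combinatorial Thompson Sampling (CTS) with Beta
    posteriors over Bernoulli base-arm outcomes and semi-bandit feedback.

    oracle(theta) -> super-arm (iterable of base-arm indices); assumed
        to be an exact oracle for the offline combinatorial problem.
    env.play(action) -> (triggered, outcomes); hypothetical interface
        returning the triggered base arms and their 0/1 outcomes.
    """
    rng = np.random.default_rng(seed)
    a = np.ones(m)  # Beta posterior success counts, one per base arm
    b = np.ones(m)  # Beta posterior failure counts, one per base arm
    for t in range(T):
        theta = rng.beta(a, b)                  # sample means from posterior
        action = oracle(theta)                  # best super-arm under sample
        triggered, outcomes = env.play(action)  # semi-bandit feedback
        for i, x in zip(triggered, outcomes):
            a[i] += x                           # update observed arms only
            b[i] += 1 - x
    return a, b
```

Because feedback is semi-bandit, a base arm's posterior is updated only in rounds where that arm is actually triggered, which is why the triggering probabilities p_i appear in the O(∑_{i=1}^m log T / (p_i Δ_i)) regret bound.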
