Shrewd Selection Speeds Surfing: Use Smart EXP3!

by Anuja Meetoo Appavoo, et al.

In this paper, we explore the use of multi-armed bandit online learning techniques to solve distributed resource selection problems. As an example, we focus on the problem of network selection. Mobile devices often have several wireless networks at their disposal. While choosing the right network is vital for good performance, a decentralized solution remains a challenge. The impressive theoretical properties of multi-armed bandit algorithms, like EXP3, suggest that they should work well for this type of problem. Yet, their real-world performance lags far behind. The main reasons are the hidden cost of switching networks and the slow rate of convergence. We propose Smart EXP3, a novel bandit-style algorithm that (a) retains the good theoretical properties of EXP3, (b) bounds the number of switches, and (c) yields significantly better performance in practice. We evaluate Smart EXP3 using simulations, controlled experiments, and real-world experiments. Results show that it stabilizes at the optimal state, achieves fairness among devices, and gracefully deals with transient behaviors. In real-world experiments, it achieves 18% faster downloads over alternate strategies. We conclude that multi-armed bandit algorithms can play an important role in distributed resource selection problems when practical concerns, such as switching costs and convergence time, are addressed.
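To make the bandit framing concrete, here is a minimal sketch of the standard EXP3 algorithm that Smart EXP3 builds on, applied to a toy network-selection scenario. This is not the authors' Smart EXP3 (which additionally bounds switches); the function name `exp3`, the parameter values, and the simulated throughput model are illustrative assumptions.

```python
import math
import random

def exp3(num_arms, reward_fn, rounds, gamma=0.1):
    """Minimal EXP3 sketch: keep a weight per arm, mix in uniform
    exploration with rate gamma, and update the chosen arm using an
    importance-weighted reward estimate (rewards assumed in [0, 1])."""
    weights = [1.0] * num_arms
    choices = []
    for _ in range(rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / num_arms
                 for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(arm)
        estimate = reward / probs[arm]  # unbiased importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / num_arms)
        choices.append(arm)
    return choices

# Toy "network selection": three networks with noisy normalized
# throughputs; network 1 is best on average.
random.seed(0)
mean_rates = [0.3, 0.8, 0.5]

def throughput(arm):
    return max(0.0, min(1.0, random.gauss(mean_rates[arm], 0.1)))

picks = exp3(3, throughput, 2000)
```

Note that plain EXP3 keeps switching networks with probability at least `gamma / num_arms` per round; the paper's point is that each such switch carries a hidden cost in practice, which motivates Smart EXP3's bound on the number of switches.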

