Routing in an Uncertain World: Adaptivity, Efficiency, and Equilibrium
We consider the traffic assignment problem in nonatomic routing games where the players' cost functions may be subject to random fluctuations (e.g., weather disturbances, perturbations in the underlying network, etc.). We tackle this problem from the viewpoint of a control interface that makes routing recommendations based solely on observed costs and without any further knowledge of the system's governing dynamics, such as the network's cost functions, the distribution of any random events affecting the network, etc. In this online setting, learning methods based on the popular exponential weights algorithm converge to equilibrium at an 𝒪(1/√(T)) rate: this rate is known to be order-optimal in stochastic networks, but it is otherwise suboptimal in static networks. In the latter case, it is possible to achieve an 𝒪(1/T^2) equilibrium convergence rate via the use of finely tuned accelerated algorithms; on the other hand, these accelerated algorithms fail to converge altogether in the presence of persistent randomness, so it is not clear how to achieve the "best of both worlds" in terms of convergence speed. Our paper seeks to fill this gap by proposing an adaptive routing algorithm with the following desirable properties: (i) it seamlessly interpolates between the 𝒪(1/T^2) and 𝒪(1/√(T)) rates for static and stochastic environments respectively; (ii) its convergence speed is polylogarithmic in the number of paths in the network; (iii) the method's per-iteration complexity and memory requirements are both linear in the number of nodes and edges in the network; and (iv) it does not require any prior knowledge of the problem's parameters.
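To make the baseline concrete, the following is a minimal sketch of the exponential weights dynamic discussed above, run on a hypothetical two-link Pigou-style network (this toy instance and its cost functions are illustrative assumptions, not taken from the paper). With costs c1(x) = x and c2(x) = 0.5 + 0.5x, the Wardrop equilibrium splits flow as (2/3, 1/3), equalizing both link costs at 2/3; the controller reaches this split using only observed costs, with no knowledge of the cost functions themselves.

```python
import math

def exponential_weights_routing(T=2000, eta=0.1):
    """Exponential weights on a toy two-link network (illustrative assumption).

    Link cost functions: c1(x) = x and c2(x) = 0.5 + 0.5*x, so the Wardrop
    equilibrium flow split is (2/3, 1/3). The controller only observes the
    realized link costs and updates log-weights multiplicatively.
    """
    logits = [0.0, 0.0]     # log-weights per link
    x = [0.5, 0.5]          # current flow split
    for _ in range(T):
        costs = [x[0], 0.5 + 0.5 * x[1]]          # observed costs only
        logits = [l - eta * c for l, c in zip(logits, costs)]
        m = max(logits)                            # numerically stable softmax
        w = [math.exp(l - m) for l in logits]
        s = sum(w)
        x = [wi / s for wi in w]
    return x

flows = exponential_weights_routing()
# flows[0] approaches the equilibrium split 2/3
```

In this static toy instance the dynamic converges to the equilibrium split; the abstract's point is that such exponential-weights-based schemes are rate-optimal only under persistent randomness, whereas accelerated methods are faster in static networks but fail under noise.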