Fast reinforcement learning for decentralized MAC optimization

05/17/2018
by Eleni Nisioti, et al.

In this paper, we propose a novel decentralized framework for optimizing the transmission strategy of the Irregular Repetition Slotted ALOHA (IRSA) protocol in sensor networks. We consider a hierarchical communication framework that ensures adaptivity to changing network conditions and does not require centralized control. The proposed solution is inspired by the reinforcement learning literature and, in particular, Q-learning. To deal with the sensor nodes' limited lifetime and communication range, we allow each node to decide how many packet replicas to transmit based only on its own buffer state. We show that this information is sufficient and helps avoid packet collisions and improve the throughput significantly. We formulate the problem within the decentralized partially observable Markov decision process (Dec-POMDP) framework, where each node decides independently of the others how many packet replicas to transmit. We enhance the proposed Q-learning based method with the concept of virtual experience, and we prove both theoretically and experimentally that the convergence time is thereby significantly reduced. The experiments show that our method leads to large throughput gains, in particular when network traffic is heavy, and scales well with the size of the network. To understand how the problem's nature shapes the learning dynamics and vice versa, we investigate the waterfall effect, a severe degradation in performance above a particular traffic load that is typical of codes-on-graphs, and show that our algorithm learns to alleviate it.
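
As a rough illustration only, not the authors' implementation, the sketch below shows what a per-node Q-learning agent of this kind could look like: the observation is the node's own buffer state, the action is the number of IRSA packet replicas to transmit in a frame, and the reward would reflect the decoding outcome. The class name, the hyperparameters (max_replicas, alpha, gamma, epsilon), and the reward convention are all assumptions made for this sketch.

```python
import random
from collections import defaultdict


class IRSANodeAgent:
    """Illustrative per-node Q-learning agent (assumed structure, not the paper's code).

    Each sensor node observes only its own buffer state and chooses how many
    replicas of its packet to transmit in the next IRSA frame.
    """

    def __init__(self, max_replicas=4, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Actions: transmit 1..max_replicas replicas (max_replicas is an assumption).
        self.actions = list(range(1, max_replicas + 1))
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Q-table indexed by (buffer_state, action); unseen entries default to 0.
        self.q = defaultdict(float)

    def select_action(self, buffer_state):
        """Epsilon-greedy choice of the replica count for the current frame."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(buffer_state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update after observing the frame outcome."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

The virtual-experience enhancement mentioned in the abstract, which, roughly, reuses a single observed transition to update several related Q-table entries at once and is what reduces the convergence time, is omitted from this sketch, as is the reward signal derived from the IRSA decoding outcome.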
