Medium Access using Distributed Reinforcement Learning for IoTs with Low-Complexity Wireless Transceivers

04/29/2021
by Hrishikesh Dutta, et al.

This paper proposes a distributed Reinforcement Learning (RL) based framework that can be used for synthesizing MAC layer wireless protocols in IoT networks with low-complexity wireless transceivers. The proposed framework does not rely on complex hardware capabilities such as carrier sensing and its associated algorithmic complexities, which are often not supported in the wireless transceivers of low-cost and low-energy IoT devices. In this framework, the access protocols are first formulated as Markov Decision Processes (MDPs) and then solved using RL. A distributed and multi-agent RL framework is used as the basis for protocol synthesis. The distributed behavior enables the nodes to independently learn optimal transmission strategies without relying on full network-level information or direct knowledge of the behavior of other nodes. The nodes learn to minimize packet collisions such that optimal throughput can be attained and maintained under loading conditions higher than what known benchmark protocols (such as ALOHA) for IoT devices without complex transceivers can sustain. In addition, the nodes are observed to learn to act optimally in the presence of heterogeneous loading and network topological conditions. Finally, the proposed learning approach allows the wireless bandwidth to be distributed fairly among network nodes, independently of such heterogeneities. Via simulation experiments, the paper demonstrates the performance of the learning paradigm and its ability to make nodes adapt their optimal transmission strategies on the fly in response to various network dynamics.
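The core idea can be illustrated with a small, self-contained sketch (not taken from the paper): each node runs an independent tabular Q-learning agent that decides, in every time slot, whether to transmit or wait, using only ACK/no-ACK feedback rather than carrier sensing. The node count, reward values, and learning parameters below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch, assuming slotted access, per-slot ACK feedback, and
# independent tabular Q-learning at each node (no carrier sensing).
import random

TRANSMIT, WAIT = 0, 1

class QNode:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # State: outcome of this node's previous slot.
        self.q = {s: [0.0, 0.0] for s in ("success", "collision", "idle")}
        self.state = "idle"

    def choose_action(self):
        if random.random() < self.epsilon:            # explore
            return random.choice((TRANSMIT, WAIT))
        values = self.q[self.state]                   # exploit
        return values.index(max(values))

    def update(self, action, reward, next_state):
        best_next = max(self.q[next_state])
        self.q[self.state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[self.state][action])
        self.state = next_state

def run(num_nodes=4, slots=20000):
    nodes = [QNode() for _ in range(num_nodes)]
    successes = 0
    for _ in range(slots):
        actions = [n.choose_action() for n in nodes]
        transmitters = [i for i, a in enumerate(actions) if a == TRANSMIT]
        for node, action in zip(nodes, actions):
            if action == WAIT:
                node.update(action, 0.0, "idle")
            elif len(transmitters) == 1:              # lone transmitter: ACK received
                node.update(action, 1.0, "success")
                successes += 1
            else:                                     # collision: no ACK
                node.update(action, -1.0, "collision")
    print(f"throughput ~ {successes / slots:.2f} packets/slot")

if __name__ == "__main__":
    run()
```

With parameters like these, the independent agents typically settle into low-collision transmission patterns; the paper's framework additionally addresses heterogeneous loading and topologies, which this sketch omits.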
