A Survey on Deep-Learning based Techniques for Modeling and Estimation of Massive MIMO Channels

10/08/2019
by Makan Zamanipour, et al.

Why does the literature consider the channel state information (CSI) as a 2-D/3-D image? What are the pros and cons of this approach in terms of the accuracy-complexity trade-off? The fifth generation (5G) of wireless communications requires an extreme range of disciplines through which low latency, low traffic, high throughput, high spectral efficiency and low energy consumption are guaranteed. Among the vast number of techniques proposed for 5G, the principle of massive multi-input multi-output (MaMIMO) is emerging. This principle can be conveniently deployed in millimeter-wave (mmWave) bands in order to satisfy the criteria given above. However, practical and realistic MaMIMO transceivers suffer from a huge number of challenging design bottlenecks, the majority of which belong to the issue of channel estimation. Channel modeling and prediction in MaMIMO demand a huge amount of effort and time in terms of computational complexity, due to the large number of antennas and supported users. This complexity arises mainly from the feedback overhead, which further degrades the pilot-data trade-off in the uplink (UL)/downlink (DL) design. This paper studies the novel deep-learning (DLg) driven techniques recently proposed in the literature which tackle the challenges discussed above. The survey finally takes a look at possible future work.
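To make the "CSI as an image" idea concrete, the following is a minimal sketch, not taken from the survey itself: the complex channel matrix over antennas and subcarriers is split into real/imaginary planes and treated as a 2-channel image, which a small convolutional network refines from a noisy pilot-based estimate. All names, layer sizes, and the toy data (e.g. N_ANT, N_SUB, CsiRefineNet, the synthetic h_true/h_ls tensors) are illustrative assumptions rather than the methods surveyed in the paper.

```python
import torch
import torch.nn as nn

N_ANT, N_SUB = 64, 32  # assumed antenna x subcarrier grid size

class CsiRefineNet(nn.Module):
    """Maps a noisy 2-channel (real/imag) CSI 'image' to a refined estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),  # back to real/imag planes
        )

    def forward(self, x):
        return self.net(x)

# Toy stand-in data: a "true" channel plus noise plays the role of a
# least-squares pilot estimate; a real pipeline would use measured or
# ray-traced MaMIMO CSI instead.
h_true = torch.randn(8, 2, N_ANT, N_SUB)
h_ls = h_true + 0.1 * torch.randn_like(h_true)

model = CsiRefineNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):  # a few illustrative training steps only
    loss = nn.functional.mse_loss(model(h_ls), h_true)
    optim.zero_grad()
    loss.backward()
    optim.step()
print(f"refinement MSE after toy training: {loss.item():.4f}")
```

Treating the antenna-subcarrier grid as image pixels is what lets standard convolutional architectures exploit the spatial/frequency correlation of the channel; the accuracy-complexity trade-off then hinges on the network depth and the size of that grid.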
