Analyzing Training Using Phase Transitions in Entropy—Part II: Application to Quantization and Classification
We show that a quantized large-scale system with unknown parameters and training signals can be analyzed through an equivalent system with known parameters, in which the signal power and noise variance are modified in a prescribed manner. We present applications to training in wireless communications, signal processing, and machine learning. In wireless communications, we show that the number of training signals can be significantly smaller than the number of transmitting elements. Similar conclusions hold for the symbol error rate in signal processing applications, as long as the number of receiving elements is large enough. In machine learning with a linear classifier, we show that the misclassification rate is insensitive to the number of classes and is approximately inversely proportional to the size of the training set. We show that a linear analysis of this nonlinear training problem can be accurate when the thermal noise is high or the system is operating near its saturation rate.
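The following minimal Python sketch illustrates one standard way a quantized system can be mapped to an equivalent linear system with rescaled signal power and an additional noise term, namely the Bussgang decomposition of a one-bit quantizer driven by a Gaussian input. The signal power `P`, noise variance `sigma2`, and the one-bit quantizer are assumptions chosen for illustration; this is not necessarily the exact equivalence constructed in the paper.

```python
import numpy as np

# Illustrative sketch (assumed setup): Bussgang-style linearization of a
# one-bit quantizer, y = sign(x + n), with Gaussian signal x and noise n.
# The quantized output is rewritten as y = g*(x + n) + d, where d is
# uncorrelated with (x + n), i.e. a linear system with modified gain
# (signal power) plus extra distortion noise.

rng = np.random.default_rng(0)
P, sigma2, N = 1.0, 0.5, 1_000_000   # assumed signal power, noise variance, samples

x = rng.normal(0.0, np.sqrt(P), N)       # Gaussian "signal"
n = rng.normal(0.0, np.sqrt(sigma2), N)  # thermal noise
r = x + n                                # unquantized observation
y = np.sign(r)                           # one-bit quantized observation

# Bussgang gain: least-squares linear fit of y on r, so y = g*r + d with d ⟂ r.
g_hat = np.mean(y * r) / np.mean(r**2)
d = y - g_hat * r
var_d_hat = np.var(d)

# Closed-form values for a zero-mean Gaussian input to sign(.)
g_theory = np.sqrt(2.0 / (np.pi * (P + sigma2)))
var_d_theory = 1.0 - 2.0 / np.pi

print(f"Bussgang gain:       estimated {g_hat:.4f}, theory {g_theory:.4f}")
print(f"Distortion variance: estimated {var_d_hat:.4f}, theory {var_d_theory:.4f}")
```

Running the sketch shows the estimated gain and distortion variance matching the closed-form Gaussian values, which is the sense in which the one-bit system behaves like an unquantized linear system with modified signal power and noise variance.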