The Labeling Distribution Matrix (LDM): A Tool for Estimating Machine Learning Algorithm Capacity

12/23/2019
by Pedro Sandoval Segura, et al.

Algorithm performance in supervised learning is a combination of memorization, generalization, and luck. By estimating how much information an algorithm can memorize from a dataset, we can set a lower bound on the portion of performance attributable to other factors such as generalization and luck. With this goal in mind, we introduce the Labeling Distribution Matrix (LDM) as a tool for estimating the capacity of learning algorithms. The method characterizes the diversity of outputs an algorithm can produce across different training datasets, and uses this diversity to measure the algorithm's flexibility and responsiveness to data. We test the method on several supervised learning algorithms and find that, while the results are not conclusive, the LDM offers potentially valuable insight into the prediction behavior of these algorithms. We also introduce the Label Autoencoder as an additional tool for estimating algorithm capacity, with more promising initial results.
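To make the idea concrete, the sketch below shows one minimal way an LDM-like matrix could be assembled: train the same algorithm on several training sets, record the distribution of labels it predicts on a fixed holdout set, and quantify how much those distributions differ. This is an illustrative assumption about the construction, not the authors' exact procedure; the helper names `build_ldm` and `row_diversity`, the use of scikit-learn, and the choice of diversity measure are all hypothetical.

```python
# Illustrative sketch (not the paper's exact construction): train one
# algorithm on several training sets and record, for each, the empirical
# distribution of labels it predicts on a fixed holdout set. How much
# these rows differ is a rough proxy for the algorithm's flexibility.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_ldm(model_factory, training_sets, X_holdout, n_classes):
    """Return a matrix with one row per training set: the predicted
    label distribution over the holdout set."""
    rows = []
    for X_train, y_train in training_sets:
        model = model_factory()
        model.fit(X_train, y_train)
        preds = model.predict(X_holdout)
        # Empirical distribution of predicted labels on the holdout set.
        counts = np.bincount(preds, minlength=n_classes)
        rows.append(counts / counts.sum())
    return np.vstack(rows)

def row_diversity(ldm):
    """Mean pairwise total-variation distance between rows: one crude
    way to quantify how much the outputs vary with the training data."""
    n = len(ldm)
    dists = [0.5 * np.abs(ldm[i] - ldm[j]).sum()
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists)) if dists else 0.0

# Example usage with synthetic data and a shallow decision tree.
rng = np.random.default_rng(0)
X_holdout = rng.normal(size=(50, 5))
training_sets = [(rng.normal(size=(100, 5)), rng.integers(0, 3, size=100))
                 for _ in range(10)]
ldm = build_ldm(lambda: DecisionTreeClassifier(max_depth=2),
                training_sets, X_holdout, n_classes=3)
print(ldm.shape, row_diversity(ldm))
```

Total-variation distance between rows is just one plausible diversity measure; a low-capacity algorithm would be expected to produce similar rows regardless of the training data, while a highly flexible one would not.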
