Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning

02/13/2021
by   Kento Nozawa, et al.
Instance discriminative self-supervised representation learning has attracted attention thanks to its unsupervised nature and the informative feature representations it yields for downstream tasks. In practice, self-supervised representation learning commonly uses more negative samples than there are supervised classes. However, there is an inconsistency in the existing analysis: theoretically, a large number of negative samples degrades supervised performance, while empirically it improves performance. We theoretically explain this empirical result regarding negative samples, and we confirm our analysis with numerical experiments on the CIFAR-10/100 datasets.
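For context, instance-discriminative methods typically train with an InfoNCE-style contrastive loss, where each anchor is contrasted against one positive (an augmented view of the same instance) and K negative samples. The sketch below is a minimal NumPy illustration of that loss for a single anchor; the vectors, the number of negatives K, and the temperature are toy assumptions for illustration, not the paper's exact experimental setup.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style contrastive loss for a single anchor.

    anchor, positive: (d,) feature vectors; negatives: (K, d).
    The loss is cross-entropy over K+1 similarity logits, with
    the positive treated as the correct "class": the anchor is
    pulled toward its positive and pushed away from the negatives.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_logit = cos(anchor, positive) / temperature
    neg_logits = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos_logit], neg_logits])
    # -log softmax(logits)[0], written as logsumexp minus the positive logit
    return np.log(np.sum(np.exp(logits))) - pos_logit

rng = np.random.default_rng(0)
d, K = 8, 16  # K negatives; the paper studies how K affects downstream accuracy
anchor = rng.normal(size=d)
positive = anchor + 0.1 * rng.normal(size=d)  # toy stand-in for an augmented view
negatives = rng.normal(size=(K, d))
loss = info_nce_loss(anchor, positive, negatives)
print(float(loss))
```

Increasing K adds more terms to the log-sum-exp denominator, which is the mechanism behind the theory/practice tension the abstract describes: more negatives make the contrastive objective harder, yet empirically tend to improve downstream performance.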
