Distributionally Robust Classification on a Data Budget

08/07/2023
by Benjamin Feuer et al.

Real-world uses of deep learning require predictable model behavior under distribution shifts. Models such as CLIP show emergent natural distributional robustness comparable to humans, but may require hundreds of millions of training samples. Can we train robust learners in a domain where data is limited? To rigorously address this question, we introduce JANuS (Joint Annotations and Names Set), a collection of four new training datasets with images, labels, and corresponding captions. Using JANuS, we perform a series of carefully controlled investigations of the factors contributing to robustness in image classification, then compare those results to findings derived from a large-scale meta-analysis. Using this approach, we show that a standard ResNet-50 trained with the cross-entropy loss on 2.4 million image samples can attain comparable robustness to a CLIP ResNet-50 trained on 400 million samples. To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets. Our dataset is available at <https://huggingface.co/datasets/penfever/JANuS_dataset>, and the code used to reproduce our experiments can be found at <https://github.com/penfever/vlhub/>.
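For readers who want to experiment with the setup the abstract describes, the sketch below shows one way to load JANuS from the Hugging Face Hub and train a standard ResNet-50 with cross-entropy loss. It is a minimal illustration, not the authors' training code: the split name (`"train"`), the field names (`"image"`, `"label"`), the class count, and the optimizer settings are all assumptions; consult the dataset card and the vlhub repository for the actual schema and hyperparameters.

```python
# Minimal sketch: load JANuS and train a ResNet-50 with cross-entropy.
# Split/field names and hyperparameters are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models, transforms
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load the JANuS training split from the Hugging Face Hub.
# NOTE: "train" and the "image"/"label" fields are assumed; check the dataset card.
janus = load_dataset("penfever/JANuS_dataset", split="train")

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def collate(batch):
    # Convert PIL images to tensors and stack labels for a batch.
    images = torch.stack([preprocess(x["image"].convert("RGB")) for x in batch])
    labels = torch.tensor([x["label"] for x in batch])
    return images, labels

loader = DataLoader(janus, batch_size=256, shuffle=True, collate_fn=collate)

# A standard ResNet-50 trained from scratch with the cross-entropy loss,
# mirroring the baseline the abstract compares against CLIP.
model = models.resnet50(num_classes=1000)  # class count is an assumption
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

model.train()
for images, labels in loader:  # one pass; the paper trains far longer
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Evaluating such a model on distribution-shifted test sets (as the paper does) would then quantify how close this limited-data baseline comes to CLIP's robustness.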
