Inter-model Interpretability: Self-supervised Models as a Case Study

by Ahmad Mustapha, et al.
American University of Beirut

Since the early days of machine learning, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric number does not fully capture the similarities and differences between models, especially in the computer vision domain: a model with high accuracy on one dataset may achieve lower accuracy on another, with no further insight into why. To address this problem, we build on a recent interpretability technique called Dissect to introduce inter-model interpretability, which determines how models relate to or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We further cross this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allows us to group the models into three categories and reveals, for the first time, the type of visual concepts different tasks require. This is a step forward for designing cross-task learning algorithms.
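The paper does not spell out how the LCE space is constructed, but the idea of comparing models by their learned concepts can be sketched as follows: represent each model by a count vector over a shared concept vocabulary (as produced by a dissection-style technique), then embed the pairwise concept-distances into a low-dimensional space. The helper names, the toy concept lists, and the use of classical MDS over cosine distances are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def concept_vectors(model_concepts, vocabulary):
    """Turn each model's list of detected concepts (e.g. from a
    dissection-style analysis) into a count vector over a shared
    concept vocabulary. Hypothetical helper for illustration."""
    index = {c: i for i, c in enumerate(vocabulary)}
    vecs = np.zeros((len(model_concepts), len(vocabulary)))
    for row, concepts in enumerate(model_concepts):
        for c in concepts:
            vecs[row, index[c]] += 1
    return vecs

def lce_embedding(vecs, dim=2):
    """Project models into a low-dimensional space via classical MDS
    on cosine distances between their concept vectors. One plausible
    way to realize a 'learned concepts embedding', not the paper's."""
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    unit = vecs / np.clip(norms, 1e-12, None)
    dist = 1.0 - unit @ unit.T            # pairwise cosine distances
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J        # double-centered Gram matrix
    w, v = np.linalg.eigh(B)              # eigendecomposition
    order = np.argsort(w)[::-1][:dim]     # top-`dim` components
    return v[:, order] * np.sqrt(np.clip(w[order], 0, None))

# Toy example: three hypothetical models described by their concepts.
models = [
    ["dog", "cat", "fur", "grass"],        # object-heavy model
    ["dog", "cat", "fur", "sky"],          # similar concept profile
    ["stripes", "dots", "metal", "wood"],  # texture/material-heavy model
]
vocab = sorted({c for m in models for c in m})
emb = lce_embedding(concept_vectors(models, vocab))
```

Because the first two toy models share most of their concepts, they land closer together in the embedding than either does to the third, which is the kind of proximity structure an LCE space is meant to expose.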


