Scale Alone Does not Improve Mechanistic Interpretability in Vision Models

07/11/2023
by Roland S. Zimmermann, et al.

In light of the recent widespread adoption of AI systems, understanding the internal information processing of neural networks has become increasingly critical. Most recently, machine vision has seen remarkable progress by scaling neural networks to unprecedented levels of dataset and model size. We ask whether this extraordinary increase in scale also positively impacts the field of mechanistic interpretability: has our understanding of the inner workings of scaled neural networks improved as well? Using a psychophysical paradigm to quantify mechanistic interpretability across a diverse suite of models, we find no scaling effect for interpretability, neither for model nor for dataset size. Specifically, none of the nine investigated state-of-the-art models is easier to interpret than the GoogLeNet model from almost a decade ago. Latest-generation vision models even appear less interpretable than older architectures, hinting at a regression rather than an improvement, with modern models sacrificing interpretability for accuracy. These results highlight the need for models explicitly designed to be mechanistically interpretable, and for more helpful interpretability methods that increase our understanding of networks at an atomic level. We release a dataset containing more than 120,000 human responses from our psychophysical evaluation of 767 units across nine models. This dataset is meant to facilitate research on automated, rather than human-based, interpretability evaluations, which can ultimately be leveraged to directly optimize the mechanistic interpretability of models.
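To make the released dataset easier to picture: one plausible way to score it is to treat each human response as a binary trial outcome, take the fraction of correct responses per unit as that unit's interpretability score, and average over units to compare models. The Python sketch below illustrates this aggregation under the assumption of a simple tabular format; the column names (model, unit_id, correct) are hypothetical, and the actual release may be structured differently.

    import pandas as pd

    # Hypothetical per-trial records; the actual schema of the released
    # dataset may differ. "correct" marks whether the participant answered
    # the forced-choice trial for that unit correctly.
    responses = pd.DataFrame({
        "model":   ["googlenet", "googlenet", "googlenet", "convnext"],
        "unit_id": [17, 17, 42, 3],
        "correct": [True, False, True, True],
    })

    # Per-unit interpretability score: fraction of correct human responses.
    unit_scores = responses.groupby(["model", "unit_id"])["correct"].mean()

    # Per-model score: average over that model's evaluated units.
    model_scores = unit_scores.groupby("model").mean()
    print(model_scores)

Averaging per unit first, then per model, keeps units with many responses from dominating a model's score.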

Related research

Explicability and Inexplicability in the Interpretation of Quantum Neural Networks (08/22/2023)
Interpretability of artificial intelligence (AI) methods, particularly d...

Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks (07/27/2022)
The last decade of machine learning has seen drastic increases in scale ...

Human-in-the-Loop Interpretability Prior (05/29/2018)
We often desire our models to be interpretable as well as accurate. Prio...

Learning a Formula of Interpretability to Learn Interpretable Formulas (04/23/2020)
Many risk-sensitive applications require Machine Learning (ML) models to...

Evaluating the Interpretability of Generative Models by Interactive Reconstruction (02/02/2021)
For machine learning models to be most useful in numerous sociotechnical...

TIP: Typifying the Interpretability of Procedures (06/09/2017)
We provide a novel notion of what it means to be interpretable, looking ...

Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks (03/07/2022)
Deep neural networks achieve outstanding results in a large variety of t...
