Is explainable AI a race against model complexity?

05/17/2022
by Advait Sarkar, et al.

Explaining the behaviour of intelligent systems will get increasingly and perhaps intractably challenging as models grow in size and complexity. We may not be able to expect an explanation for every prediction made by a brain-scale model, nor can we expect explanations to remain objective or apolitical. Our functionalist understanding of these models is of less advantage than we might assume. Models precede explanations, and can be useful even when both model and explanation are incorrect. Explainability may never win the race against complexity, but this is less problematic than it seems.
