Formalizing Falsification of Causal Structure Theories for Consciousness Across Computational Hierarchies

06/12/2020
by Jake R. Hanson et al.

There is currently a global, multimillion-dollar effort to experimentally confirm or falsify neuroscience's preeminent theory of consciousness: Integrated Information Theory (IIT). Yet recent theoretical work suggests major epistemic concerns regarding the validity of IIT and all so-called "causal structure theories". In particular, causal structure theories assume that consciousness supervenes on a particular causal structure, despite the fact that different causal structures can produce the same input-output behavior and global functionality. This, in turn, makes such theories difficult to falsify: if two systems are functionally identical, what remains to justify a difference in subjective experience? Here, we ground these abstract epistemic problems in a concrete example of functionally indistinguishable systems with different causal architectures. Our example takes the form of an isomorphic feed-forward decomposition ("unfolding") of a simple electronic tollbooth, which we use to demonstrate a clear falsification of causal structure theories such as IIT. We conclude with a brief discussion of the level of formal description at which a candidate measure of consciousness must operate if it is to be considered scientific.
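The paper's central move, unfolding a recurrent system into a feed-forward one with identical input-output behavior, can be illustrated in a few lines of code. The sketch below is ours, not the paper's construction: the two-coin gate rule and the class names are hypothetical, chosen only to show two causal architectures, one with internal feedback and one without, that no behavioral test can tell apart.

```python
# A minimal sketch of "unfolding": two causal architectures, identical behavior.
# The tollbooth rule here (gate opens once two coins are inserted) is
# illustrative only and is not taken from the paper.

from itertools import product


class RecurrentTollbooth:
    """Stateful implementation: the output depends on an internal state
    that is fed back and updated at every time step."""

    def __init__(self):
        self.coins = 0  # recurrent internal state

    def step(self, coin_inserted: bool) -> bool:
        if coin_inserted:
            self.coins += 1
        return self.coins >= 2  # is the gate open?


class UnfoldedTollbooth:
    """Feed-forward ("unfolded") implementation: each output is a pure
    function of the raw input history. The history list is just a record
    of the input tape; no output or state is ever fed back into it."""

    def __init__(self):
        self.history = []

    def step(self, coin_inserted: bool) -> bool:
        self.history.append(coin_inserted)
        return sum(self.history) >= 2


# Exhaustively verify functional indistinguishability: for every input
# sequence of length 5, the two systems emit identical output sequences.
for seq in product([False, True], repeat=5):
    a, b = RecurrentTollbooth(), UnfoldedTollbooth()
    assert [a.step(x) for x in seq] == [b.step(x) for x in seq]

print("All 2^5 input sequences produce identical outputs.")
```

In the abstract's terms, a causal structure theory that attributes experience to the recurrent system but not to its unfolding must do so without any functional evidence to point to, which is the epistemic problem the paper formalizes.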
