Integrated Information Theory (IIT) holds that consciousness is a property of systems with integrated information, and that we can (in principle) measure how much consciousness a system has (its phi value) by quantifying its integrated information. Very roughly speaking, a system with a high level of integrated information is much more than the sum of its parts.
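To give a rough, concrete sense of what “more than the sum of its parts” means in information terms, here is a deliberately crude sketch in Python. To be clear, this is not the actual IIT algorithm (real phi calculations involve cause-effect repertoires and a much larger search over partitions); the toy measure below just asks how much of a small boolean network’s one-step predictive information is lost when the network is cut into two independent halves. The update rules and the measure itself are my own illustrative choices, not part of the theory.

```python
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (in bits) between the first and second elements
    of each (x, y) pair, with every pair taken as equally likely."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def restrict(state, part):
    """Keep only the bits belonging to one part of a partition."""
    return tuple(state[i] for i in part)

def toy_phi(update, n_nodes):
    """Crude 'integration' score: the information the whole network carries
    about its next state, minus the best any cut into two independent
    halves can do. Illustrative only -- not the real IIT phi."""
    states = list(itertools.product([0, 1], repeat=n_nodes))
    transitions = [(s, update(s)) for s in states]
    whole = mutual_information(transitions)

    best_cut = 0.0
    nodes = range(n_nodes)
    for r in range(1, n_nodes // 2 + 1):
        for part_a in itertools.combinations(nodes, r):
            part_b = tuple(i for i in nodes if i not in part_a)
            mi_a = mutual_information([(restrict(s, part_a), restrict(t, part_a))
                                       for s, t in transitions])
            mi_b = mutual_information([(restrict(s, part_b), restrict(t, part_b))
                                       for s, t in transitions])
            best_cut = max(best_cut, mi_a + mi_b)
    return whole - best_cut

# An "integrated" rule: each node becomes the XOR of the other two.
xor_rule = lambda s: (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])
# A "disintegrated" rule: each node just keeps its own value.
copy_rule = lambda s: s

print(toy_phi(xor_rule, 3))   # 1.0 -- the whole predicts more than its parts do
print(toy_phi(copy_rule, 3))  # 0.0 -- nothing is lost by cutting the system
```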
A bit of a side note: IIT is usually described as panpsychist, because under IIT any system with a non-zero amount of integrated information has some level of consciousness. This is different from versions of panpsychism that say consciousness is a fundamental aspect of matter itself (i.e. micropsychism).
While this post will mostly be critical of IIT, there are laudable aspects of the theory. For example, it takes conscious experience as a given — as opposed to illusionism. It is also mathematically precise enough that it can be used to make predictions, and we can try to evaluate those predictions.
That’s what Scott Aaronson set out to do in this blog post. It’s a long, interesting read, with a fair bit of mathematical detail. TLDR: according to IIT, fairly simple graph structures (expander graphs) that are used in computing and mathematics have high levels of integrated information and therefore should have conscious experience. Aaronson shows that such structures can be made to have arbitrarily high levels of integrated information, even higher than the human brain’s. So, by IIT’s own lights, graph structures with phi values higher than a human’s should be more conscious than humans.
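For readers who haven’t met them, an expander graph is a sparse graph that is nevertheless very well connected; that combination of sparseness and strong connectivity is roughly what drives the phi value up in Aaronson’s examples. The sketch below (assuming the networkx and numpy libraries) just builds a random regular graph, which is an expander with high probability, and checks its spectral gap. It does not reproduce Aaronson’s phi calculation.

```python
import networkx as nx
import numpy as np

# 1000 nodes, each with exactly 4 neighbours: sparse, but (with high
# probability) very well connected -- i.e. an expander.
n, d = 1000, 4
G = nx.random_regular_graph(d, n, seed=0)

# For a d-regular graph the largest adjacency eigenvalue is d; a clear gap
# between d and the second-largest eigenvalue indicates strong expansion.
eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))
spectral_gap = d - eigenvalues[-2]
print(f"spectral gap: {spectral_gap:.3f}")  # positive and bounded away from 0
```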
That’s the central criticism of Aaronson’s piece: any theory that makes such predictions has to be wrong. So IIT must be wrong.
The creator of IIT, Giulio Tononi, accepts that these are indeed the predictions of IIT, but rejects Aaronson’s conclusions. Someone who is sympathetic to IIT, such as Tononi, can make some decent rebuttals to Aaronson: namely, we shouldn’t expect our “folk” intuitions about consciousness to be correct, and all theories of consciousness have strange and counterintuitive aspects to them. We should be prepared to let go of some of our intuitions as we learn more about consciousness. Maybe we should just go where the theory goes and accept that some graph structures are more conscious than we are. One could draw a parallel with quantum mechanics and say that reality is often strange and counterintuitive and we just have to accept it.
But I agree with Aaronson that some of these predictions of IIT must be wrong, or at least they are not at all justified. The problem, then, is how to formalize and defend some of those intuitions — particularly the intuition that a simple data structure should not be more conscious than a human. One could take a strategy of arguing from analogy, similar to the strategy often used when discussing the problem of other minds. Roughly, we are likely to ascribe consciousness to entities that are similar to us: particularly other people, but also some types of animals. So we could try to draw up a rule such as: an entity that is highly dissimilar to us should not have more consciousness than an entity that is more similar to us.
I’m not sure if that is a good rule. We can imagine cases such as a hypothetical extraterrestrial being that is very different from us in both material and behavioural terms yet possesses extremely advanced intelligence. But such a rule seems to be the direction that Aaronson is going here:
When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared. The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean:
- You are conscious (though not when anesthetized).
- (Most) other people appear to be conscious, judging from their behavior.
- Many animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious).
- A rock is not conscious. A wall is not conscious. A Reed-Solomon code is not conscious. Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably would be).
In other words, we define what we mean by consciousness by referring to the paradigm-cases listed above, such as ourselves, other people, and some animals. Then we can evaluate a theory’s predictions by seeing what it says about these cases. But I would remove the final bullet point above from Aaronson (“A rock is not conscious…” etc). We might need to allow that consciousness could be present in places that we don’t expect. Perhaps we could replace it with the rule I suggested above: an entity that is highly dissimilar to us should not have more consciousness than an entity that is more similar to us.
John Horgan also gets at this general idea:
Moreover, like all theories of consciousness, IIT slams into the solipsism problem (which is at the heart of Aaronson’s critique). As far as I know, I am the only conscious entity in the cosmos. I confidently infer that things like me—such as other humans—are also conscious, but my confidence wanes when I consider things less like me, such as compact discs and dark energy.
Philip Ball has a fairly recent article on efforts to evaluate IIT and global workspace theory. It’s a good thing that we have theories that are detailed enough to test. But, as Francis Fallon says in that article, it may ultimately be more of a philosophical debate than an empirical one.