In this post, I’ll review Chapter 2 of Schneider’s book Artificial You. The previous review post is here.
The Problem of AI Consciousness
In Chapter 2, Schneider introduces what she calls The Problem of AI Consciousness. This is her statement of the problem:
“The Problem of AI Consciousness: Would the processing of an AI feel a certain way, from the inside?”
She also describes the Hard Problem of Consciousness (Chalmers), and notes that The Problem of AI Consciousness is not simply the Hard Problem applied to AI. The Hard Problem takes (human) consciousness as a given and asks why physical processing feels like anything at all. In contrast, The Problem of AI Consciousness asks whether an AI implemented in some non-biological substrate would feel a certain way, i.e. whether it would have conscious experience in the first place. In her discussion of potential responses to this problem, Schneider focuses on biological naturalism and techno-optimism.
Biological Naturalism
The biological naturalist believes that consciousness somehow depends on properties specific to biological beings, and so a non-living AI (implemented, say, in silicon) would not have conscious experience. But, as Schneider notes, we do not know which biological or chemical properties those would be.
Much of this section deals with the Chinese Room thought experiment. Schneider is correct that Searle’s original formulation of the thought experiment is flawed, because it assumes that if a part of the system lacks consciousness (or understanding), then the whole system must lack it as well. The objection that points out this flaw is known as the Systems Reply. What Schneider doesn’t mention is that there is a revised formulation of the thought experiment, offered in response to the Systems Reply, in which the person in the room has memorized the rulebook, so that the person is the entire system. This version is much more compelling.
Schneider is right that the Chinese Room does not offer a strong argument for biological naturalism, but I don’t think it is supposed to. I think of it as an argument against the sort of naive computationalism she describes in the subsequent section: you can’t get understanding or consciousness solely by running a computer program that manipulates symbols. Another way of stating the claim is that syntax and semantics are different, and syntax alone cannot yield semantics or intentionality (aboutness). So while the argument arguably implies that something like embodiment would be required, that is not part of its formal structure.
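To make the syntax/semantics point concrete, here is a minimal sketch of my own (not Schneider’s or Searle’s) of what “pure symbol manipulation” looks like. The rulebook entries are hypothetical; the point is that the program relates symbols only by their form, never by what they mean.

```python
# A toy illustration (not from the book) of pure symbol manipulation.
# The "rulebook" is a hypothetical lookup table pairing input symbols
# with output symbols; nothing in the program represents their meaning.

RULEBOOK = {
    "你好吗": "我很好",        # hypothetical entry: "How are you?" -> "I am fine"
    "你是谁": "我是一个程序",  # hypothetical entry: "Who are you?" -> "I am a program"
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rulebook pairs with the input.

    The function operates purely on the shape of the symbols (a dict
    lookup); it has no access to what the symbols are about.
    """
    return RULEBOOK.get(input_symbols, "我不明白")  # fallback: "I don't understand"

print(chinese_room("你好吗"))  # prints 我很好, with no understanding anywhere
```

The lookup is, of course, a caricature of a real program, but that is the thought experiment’s point: however sophisticated the rules, they remain rules over symbol shapes.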
Techno-Optimism
Schneider describes the view of the techno-optimist by introducing a position she calls Computationalism about Consciousness:
“CAC: Consciousness can be explained computationally, and further, the computational details of a system fix the kind of conscious experiences that it has and whether it has any.”
On this view, the consciousness of an AI is independent of the substrate in which it is implemented. Schneider spends most of this section discussing isomorphs: systems that perfectly replicate the organization of some other conscious system. CAC holds that such an isomorph would have the same conscious experience as the original system it replicates (a toy illustration of this substrate independence follows below). But Schneider points out that techno-optimists rely on a flawed line of reasoning here. First, they assume that isomorphs are possible, when they may be nomologically impossible (e.g. if quantum effects in the brain cannot be perfectly replicated) or merely technologically infeasible. Second, the entire discussion of isomorphs misses the point, since we are very unlikely to create advanced AI by building isomorphs. Or, as Schneider puts it, “AI will reach an advanced level long before isomorphs would be feasible.”
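As a toy illustration of the substrate independence that CAC assumes (again my own, not from the book), here are two realizations of the same abstract state machine. Under CAC, what would matter for experience is the shared computational organization, not which realization implements it.

```python
# Hypothetical example: the same abstract transition function realized
# in two different "substrates" (a data table vs. control flow).
# Under CAC, the two realizations are computationally identical.

def step_table(state: str, bit: int) -> str:
    """Realization 1: transitions stored as data in a lookup table."""
    table = {("A", 0): "A", ("A", 1): "B",
             ("B", 0): "B", ("B", 1): "A"}
    return table[(state, bit)]

def step_branch(state: str, bit: int) -> str:
    """Realization 2: the same transitions encoded as branching logic."""
    if bit == 0:
        return state                          # input 0 leaves the state unchanged
    return "B" if state == "A" else "A"       # input 1 toggles A <-> B

# Both realizations compute the same function, i.e. they are isomorphic
# at the computational level despite their different internal structure.
assert all(step_table(s, b) == step_branch(s, b)
           for s in "AB" for b in (0, 1))
```

A brain-scale isomorph would of course be vastly more complex than a two-state machine, which is exactly where Schneider’s feasibility worries get their grip.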
Other Thoughts
I think Schneider is right to be skeptical of both biological naturalism and techno-optimism. As she says, the truth is probably much more complex. Perhaps the computationalist view is part of the story, but it cannot be the full story. Other topics that could have been mentioned in this chapter include Integrated Information Theory (IIT), artificial life, and what panpsychists and idealists might make of the prospect of conscious AI, though covering them would have taken the discussion far afield.
Chapter 3 will look in much more detail at the prospects for consciousness engineering.