I’ve written about the Chinese Room a couple of times now. In an initial post, I gave a basic overview and discussed the Systems Reply. In a follow-up, I talked about syntax, semantics, and intentionality. In this post, I’ll briefly discuss another reply.
This criticism is called the Other Minds reply. The basic gist is to start from an argument against solipsism. We can only be sure of our own consciousness; it is conceivable that the other people with whom we interact have no inner experience at all. And yet we rely on an argument from analogy: other people behave similarly to us and are made of similar stuff, so we grant that they are probably conscious too. Alan Turing believed that we should extend this “polite convention” to machines that exhibit intelligent behaviour as well, anticipating arguments such as the Chinese Room.
The problem is that an argument from analogy breaks down at some point, as the shared properties disappear. We do not have a clear idea of how phenomenal consciousness arises in humans. But we do have a clear idea of how computer programs are implemented. We can start with a high-level program (written in Python, say) and understand it all the way down to the hardware, down to basic operations such as “add these two numbers together” and “store the result in this register.” And, however we might think our own phenomenal consciousness arises, it is not clear why we would ascribe similar consciousness to machines doing such symbol manipulation. The argument from analogy is just not strong enough in this case, unless we could show that our own phenomenal consciousness also arises merely from doing such symbol manipulation on a massive scale.
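To make that point concrete, here is a minimal sketch (my own illustration, not part of the original argument): Python’s standard-library `dis` module lets you see how even a high-level function bottoms out in a short list of mechanical instructions — load a value, apply an operation, return a result — with nothing resembling understanding anywhere in the sequence.

```python
import dis

# A high-level operation that "knows" how to add...
def add(a, b):
    return a + b

# ...reduces to a handful of bare symbol-manipulation steps.
# The exact opcodes vary by Python version, but they are all
# of the form "load these values, combine them, return".
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)
```

Running this prints something like `['LOAD_FAST', 'LOAD_FAST', 'BINARY_OP', 'RETURN_VALUE']` (the precise opcode names depend on the interpreter version). Each layer below this, down to the CPU’s add instruction, is equally transparent and equally mechanical.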
[Image: Wikimedia Commons, Harland Quarrington]