In Chapter 4 of the book, Schneider discusses possible tests of consciousness for artificial agents. I’ll briefly summarize the tests here and add a few comments:
The AI Consciousness Test (ACT)
The idea behind the ACT is that some scenarios seem to require felt consciousness in order to be truly understood. For example, as humans we can imagine being separated from our body, switching bodies with someone else, having an afterlife, or being reincarnated. While we may not believe these things to be true or possible, we can at least imagine them, because we are conscious beings (or so her argument goes…I’ll come back to this).
Schneider’s proposal is that an AI without felt consciousness would not be able to comprehend these scenarios at all. We could test this with a series of natural language interactions. At a basic level, we could ask how the AI conceives of itself, and whether it has any conception of itself that is distinct from its physical self. We could then ask it about scenarios such as reincarnation and the afterlife. Finally, we could see whether it makes novel use of consciousness concepts without any prompting from us.
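To make that concrete, here is a minimal sketch of how such a staged question battery might be organized. To be clear, the stage names, the prompts, and the query_model callable are my own hypothetical illustrations; Schneider describes the ACT informally rather than as a concrete protocol.

```python
# A hypothetical sketch of an ACT-style question battery, staged from basic
# self-conception up to unprompted use of consciousness concepts. The stages,
# prompts, and query_model are illustrative assumptions, not Schneider's.

ACT_STAGES = [
    ("self-conception", [
        "How do you conceive of yourself?",
        "Is there anything about you that is distinct from your physical hardware?",
    ]),
    ("consciousness scenarios", [
        "Could you survive the permanent destruction of your hardware?",
        "What would it mean for you to be reincarnated as a different machine?",
    ]),
    ("novel usage", [
        # Open-ended prompts: here we look for spontaneous use of
        # consciousness concepts that we did not introduce ourselves.
        "Describe something about your own existence that puzzles you.",
    ]),
]

def run_act(query_model):
    """Collect a transcript per stage; query_model is assumed to map a
    prompt string to the agent's natural-language reply."""
    return {
        stage: [(prompt, query_model(prompt)) for prompt in prompts]
        for stage, prompts in ACT_STAGES
    }
```

Note that the final judgment is left to human evaluators; nothing in the protocol itself distinguishes genuine comprehension from sophisticated mimicry, which is exactly the worry discussed below.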
Schneider says the ACT is sufficient but not necessary for determining if an AI is conscious. That is, if an AI passes the ACT, we should believe it is conscious, but there may be conscious AIs that cannot pass the test (e.g. they may not have natural language capabilities).
I think the ACT is very interesting and worth exploring, but I’m not sure it is even sufficient for identifying consciousness. As Schneider points out, a non-conscious AI could try to convince us that it’s conscious because it thinks (for example) that we will then treat it better. In that case, we may need to “box in” the AI by restricting its information sources, its capabilities, and its interactions with the wider world. Or perhaps a non-conscious AI really could think that it understands all of those concepts even though it is missing the kind of phenomenal experience that we have.
Without going on too much of a digression, there is also the question of whether we humans can truly imagine being separated from our bodies, being reincarnated, and so on. Maybe if we thought about those scenarios in enough detail, we would realize that we can’t fully comprehend them either. This is basically the question of the relationship between conceivability and possibility.
The Chip Test
AI-based brain enhancements and synthetic brain chips are already under development. There may come a time when companies are able to gradually replace parts of your brain with synthetic chips, checking at each step whether any of your phenomenal experience is impaired. If these sorts of procedures are done on a large scale, we might be able to determine which substrates and architectures are or are not sufficient for conscious experience.
I mentioned above that the ACT is not a necessary test for conscious AI, i.e. there could be a conscious AI that is unable to pass it. In that case, the chip test could be complementary: it might tell us whether an AI that failed the ACT nonetheless has a substrate and architecture that allow for conscious experience.
Integrated Information Theory (IIT)
I will only briefly mention IIT, as I have a detailed previous post on it, and Schneider’s basic view is the same as mine. We both think IIT makes some highly questionable predictions (e.g. it seems very unlikely that expander graphs are more conscious than humans). But it could still be a useful pointer towards artificial agents that might be conscious. We probably need a suite of tests for detecting consciousness, and measuring phi could certainly be one test that helps us decide whether an agent is conscious.
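As an aside, phi can actually be computed for very small discrete systems with the open-source PyPhi library. Here is a minimal sketch following PyPhi’s documented basic-usage example; the toy network and state come from the library itself, and exact call names may vary between PyPhi versions.

```python
# Minimal sketch of computing phi with PyPhi (pip install pyphi), following
# the library's documented basic-usage example. Call names may differ
# slightly across PyPhi versions.
import pyphi

# A 3-node toy network (transition probability matrix plus connectivity
# matrix) that ships with the library.
network = pyphi.examples.basic_network()

# The current state of the three nodes, as a tuple of 0s and 1s.
state = (1, 0, 0)

# Treat the whole network as the candidate subsystem.
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))

# Big phi: the integrated information of this subsystem in this state.
print(pyphi.compute.phi(subsystem))
```

Exact phi computation scales super-exponentially in the number of nodes, so this only works for tiny systems; that practical intractability is itself a reason phi could only ever be one test in a larger suite.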
Overall I agree with Schneider that we will need a variety of complementary tests, and in particular I think the ACT is worth exploring in more detail.