Chapter 3 of Artificial You has three basic purposes:
- Explaining why AI might develop in such a way that superintelligent systems are not conscious.
- Explaining why, if we want conscious AI, consciousness may need to be explicitly engineered into AI systems.
- Explaining why we might want conscious AI systems in the first place.
Outmoding Consciousness and Cheaping Out
To explain why advanced AI systems might lack consciousness, Schneider describes two scenarios: “outmoding consciousness” and “cheaping out.” To see what outmoding consciousness means, consider driving a route you have driven many times. You may perform this task with very little conscious awareness, since you are thoroughly familiar with the environment and the routine actions it requires; perhaps you feel you only become aware again once you arrive at your destination. But when you first learned to drive, you likely had a great deal of conscious awareness behind the wheel, because everything was new to you. Something similar could happen as AI systems become more advanced: as an AI becomes more intelligent, it could rely increasingly on nonconscious processing, and conscious awareness (assuming such a thing is possible in an AI system in the first place) could become outmoded.
“Cheaping out,” in contrast, means that AI will develop in whatever way is most useful and profitable for the companies building products such as virtual assistants and healthcare robots, and the features required for such products may not be the features needed for consciousness. So consciousness might be something we need to explicitly add to an AI system. In humans, phenomenal consciousness is closely tied to sensory processing, and in particular to a sensory-processing “hot zone” in the brain; something similar may need to be engineered into an AI to produce consciousness.
Do We Want Conscious AI Systems?
Schneider first considers that we may not want conscious AIs: having a conscious virtual assistant or robot pet would seem akin to having a slave, and switching such an AI off would be akin to killing a conscious entity. She then gives three reasons why we might nevertheless want conscious AIs:
- They could be safer. As mentioned in the introduction, a superintelligent AI might be more inclined to treat us well if it could use its own consciousness as a springboard to infer that we humans are likely conscious as well. That recognition might give it some affinity for us, just as humans feel some affinity for animals we believe to be conscious.
- People might want them. If people pursue social relationships with robots, they may prefer that those robots have conscious awareness, just as they presumably prefer that their human friends and partners are conscious rather than philosophical zombies.
- They might make better astronauts. To me, this is the weakest part of the chapter; it reflects Schneider’s own interest in interstellar travel, and I don’t think she makes clear why we should want to seed the universe with artificial agents, let alone ones that have conscious experience.
Human-Machine Mergers
Schneider concludes the chapter by briefly describing the prospect of replacing parts of the brain with brain chips. If this is done to parts of the brain responsible for consciousness, it could degrade or extinguish conscious experience. It could turn out that only “biological brain enhancements” can be used in those areas if we are to retain conscious experience. But if synthetic chips could be used there without affecting conscious experience, that could itself be a path to engineering consciousness. She discusses this further in the following chapter.
She reiterates in this chapter that she is neither a biological naturalist nor a techno-optimist, but rather takes a “wait and see” approach. Many factors will affect whether we actually want conscious AI systems, and whether such systems are nomologically possible, technologically feasible, or profitable.