I am reading Stuart Russell’s new book Human Compatible: Artificial Intelligence and the Problem of Control, and highly recommend it. A lot has been written in the past few years about advanced A.I. and its implications for humans, and this book might be the best and most comprehensive treatment yet. Russell is a high-profile figure in A.I., and he strips these issues of sensationalism while still taking them seriously.
One topic he does not cover is A.I. consciousness. He addresses this near the beginning, giving the following reasons for leaving consciousness out of the book:
1. We don’t know anything about consciousness, so we should say nothing.
2. Nobody is working on A.I. consciousness.
3. No action has consciousness as a prerequisite.
Regarding point 1, I’d say that’s fair. We are nowhere near a consensus on understanding consciousness in humans, so it seems justified to skip the discussion of artificial consciousness in this book.
Regarding point 2, this is a bit of an overstatement. It’s true that, given point 1, it is not at all clear how to proceed with trying to create artificial consciousness. But there are some people trying to tackle this. I happen to be reading Russell’s book at the same time as I am reading a new book by Susan Schneider (Artificial You) that is entirely about A.I. consciousness. So there are at least some people trying to lay conceptual groundwork for A.I. consciousness, such as how to measure it and test for it.
That brings us to point 3. Russell seems to be expressing a kind of epiphenomenalist view here: the idea that phenomenal consciousness is just “along for the ride” and doesn’t actually do anything. But, given point 1, epiphenomenalism is just one view amongst many regarding human consciousness, and I’d say a minority view at that.
Russell gives the following example: I give you a piece of code and ask you to analyze its behaviour, and you determine that running the code will result in the end of the world. I then additionally tell you that running the code will result in the creation of machine consciousness. But that new information makes no difference: the prediction stays the same, because it was based on the prior analysis of the code. Russell writes that it is “competence, not consciousness, that matters.” He says that science fiction tropes about A.I. agents gaining consciousness and turning against us miss the point. But again, I think there are assumptions here about computationalism, emergence, and epiphenomenalism that deserve further exploration and analysis, even if not in this book.
[Update 1/29/2020: In particular, for Russell’s scenario to make sense, some form of computationalism would need to be true, wherein you can create consciousness merely by running a program. I don’t believe that merely running a program is sufficient to create consciousness and intentionality, and Searle’s Chinese Room thought experiment is meant to demonstrate this point. Nor do I believe that consciousness is epiphenomenal.]
But what I really want to do here is reinforce Susan Schneider’s argument that A.I. consciousness could actually make an A.I. more compassionate towards us. Here’s my summary of her argument from a previous post:
> It’s natural (and good) that the prospect of conscious AI makes us think about how we would treat such an AI, but she argues that we should also think about the risks of developing superintelligent AI systems that are not conscious. The point is that it is our own consciousness as humans that makes us extend compassion and concern to other beings that we believe are conscious. In particular, we extend this to other humans (very similar to us), highly intelligent non-human animals (somewhat similar to us), and less so to other animals. If we develop superintelligent AI systems that lack consciousness, then they would also lack the ability to use that consciousness as a “springboard” for recognizing consciousness in humans and thereby extending compassion to us or feeling affinity for us.
I hope that Russell’s book gets read widely, and Schneider’s too. They are very much complementary, and both help us to understand how A.I. will affect the future of humanity.