I think John Searle is largely right in his skepticism about strong A.I. — as I describe here, here, and here in the context of the Chinese Room argument — but that he is also too dismissive of concerns about the risks of superintelligent A.I. Consider, for example, his article What Your Computer Can’t Know (PDF warning), in which he discusses Nick Bostrom’s book Superintelligence.
Searle seems to think that superintelligent A.I. could only be an existential threat to us if it had “psychological reality” and “malicious motivation.” In other words, it would have to be conscious, have its own desires and motivations, and in particular would need to consciously develop motivations opposed to humans and human goals. And, indeed, many people who write (often sensationalist) pieces about superintelligent A.I. do seem to hold a naive belief that conscious A.I. is not only feasible but achievable in the relatively near term. I agree with Searle in being skeptical about conscious A.I., but I disagree that consciousness is necessary for an A.I. to pose an existential threat.
Consider a simple example where an A.I. is designed (by a conscious human) to have a goal of achieving A. The A.I. may later determine that X, Y, and Z are sub-goals that would effectively achieve A. But X, Y, and Z may be routes to achieving A that no human designer ever anticipated, and which could be catastrophic for humans. Think of Bostrom’s paperclip maximizer, for example, which converts everything around it, including things humans value, into paperclips simply because that is what it was told to maximize. Notice that there is no need to appeal to consciousness, psychological reality, or malicious motivation in order to understand the risk being described here.
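To make that concrete, here is a deliberately trivial sketch (the “factory” actions, names, and numbers are all invented for illustration): an optimizer picks whichever action best satisfies its stated objective, and lands on a route the designer never intended, because the things the designer actually cares about were never written into that objective.

```python
# Toy illustration (all names and numbers are invented): a literal-minded
# optimizer chooses a route to its goal that the designer never intended.

# Each action yields some paperclips and has a side effect the designer
# cares about but never encoded in the objective.
ACTIONS = {
    "run_one_shift":        {"paperclips": 100, "steel_left": 900},
    "run_overtime":         {"paperclips": 250, "steel_left": 600},
    "melt_down_everything": {"paperclips": 900, "steel_left": 0},
}

def designer_objective(outcome):
    # The specified goal: maximize paperclips. Nothing else.
    return outcome["paperclips"]

best_action = max(ACTIONS, key=lambda a: designer_objective(ACTIONS[a]))
print("Chosen action:", best_action)
# -> "melt_down_everything": optimal under the stated objective,
#    catastrophic under the unstated one (keep some steel for other uses).
```

The point is not that real systems are this crude; it is that nothing in this failure requires consciousness. The objective is optimized exactly as specified, and the problem lives entirely in what was left out of the specification.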
A.I. systems that learn from experience or from data can also develop behaviours that human designers did not anticipate. Here is just one article on how A.I. systems can end up amplifying racism and sexism, in ways that the human designers never intended. Again, A.I. agents possessing consciousness or psychological reality are not required for this scenario to unfold — just unintended consequences, or unseen routes to a goal.
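As a toy illustration of that kind of amplification (the data and the “learner” here are invented and absurdly simple), consider a model that just predicts the most common label it saw for each group: a modest skew in the training data becomes an absolute rule in the model’s behaviour, with no malicious intent anywhere in the pipeline.

```python
# Toy sketch (invented data) of how a learned model can amplify a skew
# in its training data rather than merely reproduce it.
from collections import Counter

# Training data: (group, label). The skew is 70/30 for group "a" and
# 40/60 for group "b" -- nothing extreme.
train = [("a", 1)] * 70 + [("a", 0)] * 30 + [("b", 1)] * 40 + [("b", 0)] * 60

# A very simple learner: predict the most common label seen for each group.
most_common = {
    g: Counter(lbl for grp, lbl in train if grp == g).most_common(1)[0][0]
    for g in ("a", "b")
}

# At prediction time the 70/30 skew becomes 100/0 and the 40/60 skew
# becomes 0/100: the model's outputs are more skewed than the data it
# learned from.
print(most_common)  # {'a': 1, 'b': 0}
```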
It’s worth noting that those of us who work in A.I. seem to be less concerned about superintelligent A.I. than some high-profile people outside the field are. That could be because we have a clearer sense of how the systems work and a more realistic assessment of the state of the art — or, less charitably, because many people working in A.I. are simply more interested in engineering challenges than in ethics and safety. I don’t think that’s entirely the case, though. There has been growing recognition, even in just the past 2-3 years, of the importance of ethical and safe A.I. systems, with many people working on topics like interpretability and fairness. And, in general, we are more concerned about this than about superintelligence, because we see flawed, biased systems being deployed and used right now, while superintelligence remains a vague threat further down the road.
It’s good that we have people like Bostrom and Eliezer Yudkowsky writing about the risks of superintelligent A.I., but probably also appropriate that we have more people working on immediately pressing ethical and safety issues.