A.I., Common Sense, and Thinking

On the question of “Can computers think?”, I’ve written a few posts focusing on the Chinese Room argument. In a nutshell, it’s an argument against strong A.I.: the claim is that a computer cannot be in a mental state simply as a result of executing a program. It’s a view with which I largely agree, and you can follow the link above for more extended discussion of the argument and the responses to it.

A different response is to say that thinking requires things like commonsense reasoning, which cannot be captured by the kinds of rules used in a computational system. This view was expressed by Hubert Dreyfus in his 1972 book What Computers Can’t Do. The argument really only works as a criticism of classical A.I. (sometimes called GOFAI, for good old-fashioned A.I.). Classical A.I. was indeed Dreyfus’s target, and he was right to predict that it would largely fail in its aims. But it doesn’t follow that computational systems are unable to engage in commonsense reasoning, only that GOFAI was not the way forward.

As many readers will know, much of current A.I. is based on machine learning, in which an agent learns from data and experience and adapts its behaviour accordingly. This stands in sharp contrast to the brittle, rule-based expert systems of classical A.I. Commonsense reasoning is a growing field within A.I., as is embodied cognition: the idea, roughly, that our cognition involves much more than just what is happening in the brain. It is perhaps too early to predict how much progress will be made in these sub-fields, but we should be skeptical of claims that computational systems simply cannot engage in commonsense reasoning.
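To make the rules-versus-learning contrast concrete, here is a minimal toy sketch in Python. The task, names, and data are all invented purely for illustration: a GOFAI-style hand-coded rule sits next to a one-nearest-neighbour classifier whose behaviour comes entirely from examples, so it adapts when new data arrives rather than when a programmer rewrites the rules.

```python
# Toy contrast: a fixed hand-coded rule vs. behaviour learned from examples.
# The fruit-sorting task and all numbers are invented for illustration only.

def rule_based_label(weight_g, smooth):
    # GOFAI-style: the thresholds are fixed by the programmer and never adapt.
    if weight_g > 150 and not smooth:
        return "orange"
    return "apple"

def nearest_neighbour_label(example, training_data):
    # Learning-style: the answer is read off the closest stored example,
    # so adding new examples changes future classifications with no new rules.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda item: distance(item[0], example))
    return label

training_data = [
    ((120, 1), "apple"),   # (weight in grams, smooth skin? 1 = yes)
    ((170, 0), "orange"),
    ((140, 1), "apple"),
    ((200, 0), "orange"),
]

print(rule_based_label(160, smooth=False))                # orange
print(nearest_neighbour_label((160, 0), training_data))   # orange
```

The point is not that nearest-neighbour classification is how modern systems work; it is only that even this simple learner changes its behaviour as its data changes, which is exactly what a fixed rule set cannot do.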