The Chinese Room: Syntax, Semantics, and A.I.

Recently we talked about the Chinese Room argument. One of the aims of the argument is to show that strong A.I. (A.I. that has understanding or consciousness) is impossible. It’s easy to get caught up in the details of the hypothetical scenario — e.g. does the man in the room have a book, or has he memorized the book? — so it might be helpful to zoom out and look at a more general claim being made in the argument. I would state the claim as follows:

  1. Syntax and semantics are different, and
  2. You don’t automatically get semantics from syntax.

Syntax basically consists of rules within some system that tell us what the basic symbols are, and how they combine into complex symbols. Syntax therefore tells us about structure in the system. For example, in the natural language English, the determiner “the” and the noun “apple” can be combined into the noun phrase “the apple.” Our knowledge of English syntax tells us that “The apple is rotten” is an acceptable (i.e. grammatical) sentence, while “The is apple rotten” is not.
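To make the idea concrete, here is a minimal sketch of a purely syntactic checker for a tiny invented fragment of English. The grammar (Det Noun Verb Adj) and the word lists are my own toy assumptions, not a real parser — the point is just that acceptability can be decided by structural rules alone.

```python
# Toy word lists for an invented four-word fragment of English.
DETERMINERS = {"the"}
NOUNS = {"apple"}
VERBS = {"is"}
ADJECTIVES = {"rotten"}

def is_grammatical(sentence):
    """Accept only sentences of the form: Det Noun Verb Adj."""
    words = sentence.lower().rstrip(".").split()
    return (len(words) == 4
            and words[0] in DETERMINERS
            and words[1] in NOUNS
            and words[2] in VERBS
            and words[3] in ADJECTIVES)

print(is_grammatical("The apple is rotten"))   # True
print(is_grammatical("The is apple rotten"))   # False
```

Note that the checker never consults what "apple" or "rotten" refer to; it only checks where the symbols sit in the string.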

Semantics, in contrast, is concerned with the meaning of those symbols. When you tell me “The apple is rotten,” I understand that you are talking about something (e.g. that there is an apple, and that it has the property of being rotten). This aboutness is sometimes called intentionality.

Getting back to the Chinese Room, the point is that the person in the room is performing strictly syntactic operations. They are taking input symbols and generating output symbols, but they have no idea what the symbols mean. There is no aboutness. And since the Chinese Room is meant to be analogous to a computer, the claim is that computers are purely syntactic devices. There is no aboutness — the computer does not know what the symbols mean. Referring back to Claim 2 above, “you don’t automatically get semantics from syntax.”
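The person in the room can be sketched as a lookup program. The rulebook entries below are invented placeholder symbols; what matters is that the program maps input patterns to output patterns without any representation of what the symbols mean.

```python
# An invented "rulebook": input symbol patterns mapped to output patterns.
# The program manipulates these tokens purely by shape, never by meaning.
RULEBOOK = {
    ("SYM_1", "SYM_2"): ("SYM_7",),
    ("SYM_3",): ("SYM_4", "SYM_5"),
}

def respond(input_symbols):
    """Look up the input pattern and return the prescribed output symbols."""
    return RULEBOOK.get(tuple(input_symbols), ("SYM_UNKNOWN",))

print(respond(["SYM_1", "SYM_2"]))  # ('SYM_7',)
```

From the outside, the responses may look competent; on the inside there is nothing but pattern matching — which is exactly the argument's picture of a computer.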

(You can get some limited semantics from syntax — e.g. you can learn that words that tend to co-occur, or occur in similar contexts, are often related. This is called distributional semantics. But it doesn’t give us intentionality.)
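The distributional idea can be sketched in a few lines: build word vectors from co-occurrence counts over a tiny invented corpus, then compare them with cosine similarity. The corpus and word choices are mine, for illustration only; words that appear in similar contexts end up with similar vectors, even though the program still has no idea what any word refers to.

```python
import math
from collections import defaultdict

# A tiny invented corpus.
corpus = [
    "the apple is rotten",
    "the pear is rotten",
    "the apple is sweet",
    "the pear is sweet",
    "the dog is loud",
]

# Count, for each word, how often every other word co-occurs
# with it in the same sentence.
vectors = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# "apple" and "pear" share contexts, so they come out more similar
# to each other than either is to "dog".
print(cosine(vectors["apple"], vectors["pear"]) >
      cosine(vectors["apple"], vectors["dog"]))  # True
```

The similarity scores capture relatedness of usage, not reference: nothing in the counts is *about* apples.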

Is there a way to get aboutness into an A.I. system? That will have to be a topic for another post, but a couple of possibilities are embodied cognition and grounded language processing.