Machines and Understanding

This recent blog post by Thomas Dietterich has received much well-deserved attention. And while the title, What does it mean for a machine to “understand”?, makes it sound like a philosophical work, it’s primarily an assessment of the current state of AI and of approaches to AI.

Dietterich opens the post like this:

  Critics of recent advances in artificial intelligence complain that although these advances have produced remarkable improvements in AI systems, these systems still do not exhibit “real”, “true”, or “genuine” understanding. The use of words like “real”, “true”, and “genuine” imply that “understanding” is binary. A system either exhibits “genuine” understanding or it does not. The difficulty with this way of thinking is that human understanding is never complete and perfect. In this article, I argue that “understanding” exists along a continuous spectrum of capabilities.

Social Perspectives on AI Understanding

I agree with Dietterich that understanding is not binary, but that’s not the main thing I want to focus on in this post. He discusses three reasons why people are inclined to say that current AI systems do not exhibit “real” understanding, but I think there is a key additional reason he omits. Here are the three reasons he mentions, in summary:

  1. There is too much hype around AI and superintelligence, and the critics are responding to this.
  2. Competition for AI funding means people working on different approaches to AI are quick to dismiss each other’s paradigms, e.g. connectionists vs. proponents of symbolic AI.
  3. The goal-posts are always moving. There is a well-known phenomenon that when an AI problem is solved, it suddenly seems trivial and no longer counts as AI. Something else becomes “real AI.”

I think Dietterich is correct in his assessment of these reasons — and I agree, regarding point 2, that approaches should be funded as long as they are fruitful, and that includes connectionism, symbolic AI, and hybrid approaches.

But note that the three reasons above are all basically social (or maybe sociopolitical) reasons. Apart from a brief allusion to the Chinese Room thought experiment, he doesn’t mention that there might also be philosophical reasons for saying that AI does not demonstrate real understanding.

Philosophical Perspectives on AI Understanding

Dietterich doesn’t explicitly mention consciousness in his article. For the past few decades, I’d say the dominant view has been that concepts like thought and understanding can be defined without reference to consciousness; they are understood largely in computational or functional terms. On that view, a p-zombie or a very simple computer program could be said to be “thinking” something.

I’ve written several times on this site about the Chinese Room, and won’t rehash it all except to say that I am skeptical that a computer could have mental states just by virtue of running a program. But here I’m raising a slightly different question: whether a computer could have “thought” or “understanding” without consciousness. As I said above, I think the dominant answer to that question has been “yes.” But this may be changing.

For example, in his recent discussion with Sean Carroll, Philip Goff describes himself as a member of a “growing minority” who believe that consciousness is necessary for thought. Similarly, Galen Strawson doesn’t believe you can have intentionality (aboutness) without consciousness. If you don’t have aboutness, I don’t think you can have thought or understanding.

Good Reasons for Skepticism

This is all a long way of saying that I think there are some good reasons — philosophical reasons rather than social ones — for being skeptical about AI understanding at present. But perhaps future developments in embodied cognition and artificial consciousness will allow us to say that an artificial intelligence truly has understanding.