Sentience on Stilts

On Substack, Gary Marcus recently called the claim that LaMDA, or any other language model (like GPT-3), is sentient “nonsense on stilts.” Mark Coeckelbergh agreed, but with a twist. It is nonsense, he argued, not because of what we know about artificial intelligence, but because of what we don’t know about sentience. “The inconvenient truth,” he tells us at Medium, “is that we do not really know [whether LaMDA is sentient]. We do not really know because we do not know what sentience or consciousness is.” As he put it on Twitter in response to me, “we know how the language model works but we still don’t have a satisfactory definition of consciousness.” This strikes me as a rather strange philosophy.


Consider the Magic 8 Ball. Ask it a yes/no question and it will randomly give you one of twenty answers: ten affirmative, five negative, five undecided. These answers are presented using familiar phrases like “Without a doubt,” “Don’t count on it,” or “Cannot predict now.” Suppose someone asked us whether this device is sentient. Would we say, “The inconvenient truth is that we don’t know. We still don’t have a satisfactory definition of sentience”? (Presumably, we could run the same argument for the Magic 8 Ball’s alleged “clairvoyance”, which is surely no better defined than “sentience”.) Obviously not. Knowing how the device works is a sufficient basis for rejecting the claim that the device has an inner life to speak of, regardless of the fact that its output consists of recognizable linguistic tokens.

Are you sentient?
(Image credit: Wikipedia)
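Indeed, the entire “cognitive architecture” of the toy fits in a few lines of Python. Here is a rough sketch (the wording of the twenty stock answers is reconstructed from memory and may vary by edition; three of them appear above):

```python
import random

# The Magic 8 Ball's entire "mind": twenty canned strings and a random pick.
ANSWERS = [
    # ten affirmative
    "It is certain.", "It is decidedly so.", "Without a doubt.", "Yes definitely.",
    "You may rely on it.", "As I see it, yes.", "Most likely.", "Outlook good.",
    "Yes.", "Signs point to yes.",
    # five undecided
    "Reply hazy, try again.", "Ask again later.", "Better not tell you now.",
    "Cannot predict now.", "Concentrate and ask again.",
    # five negative
    "Don't count on it.", "My reply is no.", "My sources say no.",
    "Outlook not so good.", "Very doubtful.",
]

def magic_8_ball(question: str) -> str:
    """Return one of the twenty stock answers; the question itself is never consulted."""
    return random.choice(ANSWERS)

print(magic_8_ball("Are you sentient?"))  # e.g. "Without a doubt."
```

Notice that the question never enters the computation at all. Once the whole mechanism is on the page, the question of an inner life simply doesn’t come up.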

In his contribution to the debate in the Atlantic, Stephen Marche points out that the trouble begins with the language we use to describe our devices. To explain how the Magic 8 Ball “works”, I said that we “ask it” a question and that “it gives” us an answer. Likewise, Marche notes, the developers of language models tell us that they exhibit “impressive natural language understanding.” He warns against this kind of talk, citing a Google exec.

“I find our language is not good at expressing these things,” Zoubin Ghahramani, the vice president of research at Google, told me. “We have words for mapping meaning between sentences and objects, and the words that we use are words like understanding. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don’t understand. We have to take these words with a grain of salt.”

Stephen Marche, “Google’s AI Is Something Even Stranger Than Conscious,” The Atlantic, June 19, 2022

If you read that just a little too quickly, you might miss another example of the way language misleads us about technology. “You could say that these systems understand just like a calculator understands addition,” Ghahramani says. But calculators don’t understand addition at all! Consider a series of examples I offered on Twitter:

Would we say that an abacus “understands” addition? What about a paper bag? You put two apples in it. Then you put another two apples in it. Then you have a look and there are four apples in the bag. Does the paper bag know how to add? I don’t think so. If you want something that uses symbols, consider a spring scale. You calibrate it with standard weights so that one unit on the dial corresponds to one unit of weight, and you have a set of increasing weights labeled 1, 2, 3, 4, and so on. There is even a plus sign on the tray: put two weights labeled “2” on it and the dial reads “4”. Can the scale add? Of course not. A computer, likewise, is just a physical system that turns meaningless inputs into meaningless outputs. It is we who understand the inputs and outputs; it is we who imbue the output with meaning as the answer to a question.
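To make that concrete, here is a toy sketch of “addition” done purely by shuffling tokens according to fixed lookup tables (my own illustration, not anyone’s actual calculator design):

```python
# A toy "adder" that manipulates meaningless tokens according to fixed rules.
# Nothing below refers to quantities; '0' and '1' could just as well be '•' and '※'.

XOR = {('0', '0'): '0', ('0', '1'): '1', ('1', '0'): '1', ('1', '1'): '0'}
AND = {('0', '0'): '0', ('0', '1'): '0', ('1', '0'): '0', ('1', '1'): '1'}
OR  = {('0', '0'): '0', ('0', '1'): '1', ('1', '0'): '1', ('1', '1'): '1'}

def add_tokens(a: str, b: str) -> str:
    """Ripple-carry 'addition' over strings of tokens, by table lookup alone."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad the shorter string with '0's
    carry, out = '0', []
    for x, y in zip(reversed(a), reversed(b)):
        partial = XOR[(x, y)]
        out.append(XOR[(partial, carry)])                 # "sum" token
        carry = OR[(AND[(x, y)], AND[(partial, carry)])]  # "carry" token
    if carry == '1':
        out.append(carry)
    return ''.join(reversed(out))

print(add_tokens('10', '10'))  # prints '100', which we read as 2 + 2 = 4
```

The tables never mention numbers. The machine just churns tokens; the arithmetic appears only when we read the output string as “four”.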

Justin E.H. Smith wrote a thoughtful (as ever) piece about the incident on Substack. “Much of this speculation,” he suggests, “could be mercifully suspended if those involved in it just thought a little bit harder about what our own consciousness is actually like, and in particular how much it is conditioned by our embodiment and our emotion.” Note that this is basically the opposite of Coeckelbergh’s suggestion. Smith is telling us to remember what we know about sentience and consciousness from our own experience rather than get lost in the philosophy of consciousness and its lack of a “satisfactory definition” of its object. We know LaMDA is not conscious because we know it’s not sentient, and we know it’s not sentient because we know what sentience is and that it requires a body. And we know LaMDA doesn’t have one.

I note that Spike Jonze’s Her is now streaming on Netflix. When I first saw it, it occurred to me that it was actually just a story about love and loss told from inside a very clever satire of the absurdity of artificial intelligence. Descartes once said that he could imagine that he had no body. I’ve never believed him; I think he was pretending. His “I” was literally no one ever … on philosophical stilts.
