Monthly Archives: June 2022

The Anxiety of Artifice

“One great use of words,” said Voltaire, “is to hide our thoughts.” In his famous treatise The Concept of Anxiety, Kierkegaard picked up on this idea, via Young and Talleyrand, and put a distinctively mischievous spin on it. We do, indeed, use language to hide our thoughts, he said, “namely, the fact that we don’t have any.” This is a great way to get into the core of my anxieties about artificial intelligence in general, and large language models like GPT-3 and LaMDA specifically. After all, I’m entirely certain that they have no conscious thoughts, but at least one person who is very close to the action, Blake Lemoine at Google, has been persuaded by their facility with language that they do. For my part, I’m concerned that the presumption that people generally use language to say what they think is being undermined by the apparent ability of unthinking machines to talk.

Now, my concern is mainly with academic or scholarly writing, i.e., writing done by students and faculty in universities. My working definition of this kind of writing has always been that it is the art of writing down what you know for the purpose of discussing it with other knowledgeable people. But this definition is of course a rather earnest one (some would say it is outright quaint) when compared with more cynical definitions that are, I should add, sometimes put forward without a hint of irony. Academic writing, it is said, is the art of putting words together in a way that meets the expectations of your teachers; scholarly writing is the art of getting something past your reviewers so that it will be published and impress your tenure committee. On this view, that is, we use language at university, not to tell each other what we know, but to hide what we don’t know from each other, or, as Kierkegaard might suggest, the fact that we don’t really know anything at all. This is not a pleasant thing to think about for a writing instructor.

Two recent pieces in the Economist provide me with a good way of framing my concerns. “Neural language models aren’t long programs,” Blaise Agüera y Arcas tells us; “you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together.” Basically, these programs just convert some text into numbers, look up some other numbers in a database, and carry out some calculations, whose results are used to update the database and are then also converted into a string of text. That’s all. What is confusing is that Agüera y Arcas then goes on to say that “since social interaction requires us to model one another, effectively predicting (and producing) human dialogue forces LaMDA to learn how to model people too.” His description of the program clearly says that it doesn’t “model people” at all. We might say that it uses language to hide the fact that it doesn’t have a model of people.
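Agüera y Arcas’s description can be made concrete with a deliberately tiny sketch. This is my own toy, with a made-up vocabulary and made-up numbers; it resembles no real model’s architecture. But the skeleton is the one he describes: text becomes numbers, numbers are multiplied against a table, and the largest result becomes text again.

```python
# A toy illustration (not any real model): a "language model" reduced to
# its mechanical skeleton. Text in, table lookup, arithmetic, text out.

VOCAB = ["I", "think", "therefore", "am", "not"]

# A made-up weight table: row i scores each candidate next token,
# given that the current token is VOCAB[i].
WEIGHTS = [
    [0.0, 0.9, 0.0, 0.1, 0.0],  # after "I"
    [0.2, 0.0, 0.7, 0.0, 0.1],  # after "think"
    [0.8, 0.0, 0.0, 0.2, 0.0],  # after "therefore"
    [0.0, 0.1, 0.0, 0.0, 0.9],  # after "am"
    [0.5, 0.3, 0.0, 0.2, 0.0],  # after "not"
]

def predict_next(token: str) -> str:
    i = VOCAB.index(token)         # convert text into a number
    scores = WEIGHTS[i]            # look up a row of numbers
    j = scores.index(max(scores))  # arithmetic: pick the largest score
    return VOCAB[j]                # convert the number back into text

print(predict_next("I"))          # "think"
print(predict_next("therefore"))  # "I"
```

Nothing in this loop is a model of a person; it is a table of numbers and an argmax. Scale changes the size of the table, not the kind of thing it is.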

“There are no concepts behind the GPT-3 scenes,” Douglas Hofstadter explains; “rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers.” But he, too, ends up being “strangely” optimistic about where this could go if we just turn up the computing power.

This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.

How far off? I don’t know. My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb. We’re at least decades away from such a stage, perhaps more. But please don’t hold me to this, since the world is changing faster than I ever expected it to. 

I feel at something of a disadvantage with people like this because they understand how the technology works better than I do and seem to see a potential in it that I don’t. That is, after trying to understand how they tell me it works, I conclude that intelligent language models aren’t just “a long way off” but are simply impossible to imagine. But then they tell me that they think these are all possibilities that we can expect to see even within a few decades. Some promoters of this technology even tell me that the systems already “model”, “reason”, “perceive”, “respond” intelligently. But looking at the technical details (within my limited ability to understand them) I simply don’t see them modeling anything — no more than a paper bag can add, as I like to put it, just because if you put two apples in there, and then another two, there are four.

My view is that we haven’t taken a step towards artificial intelligence since we invented the statue and the abacus. We have always been able to make things that look like people and other things that help us do things with our minds. The fantasy (or horror) of making something with a mind like ours is also nothing new. In other words, my worry is not that the machines will become conscious, but that we will one day be persuaded that unconscious machines are thinking.

At a deeper level, my worry is that machines will become impressive enough in their mindless output to suggest to students and scholars that their efforts to actually have thoughts of their own are wasted, that the idea of thinking something through, understanding it, knowing what you’re talking about, etc. will be seen as a quaint throwback to a bygone era when getting your writing done actually demanded the inconvenience of making up your mind about something. Since their task is only to “produce a text” (for grading or publication) and since a machine can do that simply by predicting what a good answer to a prompt might be, they might think it entirely unnecessary to learn, believe, or know anything at all to succeed.

That is, I worry that artificial intelligence will give scope to Kierkegaard’s anxiety. Perhaps, guided by ever more sophisticated language models, academic discourse will become merely a game of probabilities. What is the sequence of words that is most likely to get me the grade I want or the publication I need?

Sentience on Stilts

On Substack, Gary Marcus recently called the claim that LaMDA, or any other language model (like GPT-3), is sentient “nonsense on stilts.” Mark Coeckelbergh agreed, but with a twist. It is nonsense, he argued, not because of what we know about artificial intelligence, but because of what we don’t know about sentience. “The inconvenient truth,” he tells us at Medium, “is that we do not really know [whether LaMDA is sentient]. We do not really know because we do not know what sentience or consciousness is.” As he put it on Twitter in response to me, “we know how the language model works but we still don’t have a satisfactory definition of consciousness.” This strikes me as a rather strange philosophy.

Image Credit: Wikipedia.

Consider the Magic 8 Ball. Ask it a yes/no question and it will randomly give you one of twenty answers: 10 affirmative, 5 negative, 5 undecided. These answers are presented using familiar phrases like, “Without a doubt,” “Don’t count on it,” or “Cannot predict now.” Suppose someone asked us whether this device is sentient. Would we say, “The inconvenient truth is that we don’t know. We still don’t have a satisfactory definition of sentience”? (Presumably, we could run the same argument for the Magic 8 Ball’s alleged “clairvoyance”, which is surely not better defined than “sentience”.) Obviously not. Knowing how the device works is a sufficient basis for rejecting the claim that the device has an inner life to speak of, regardless of the fact that its output consists of recognizable linguistic tokens.

Are you sentient?
(Image credit: Wikipedia)

In his contribution to the debate in the Atlantic, Stephen Marche points out that the trouble begins with the language we use to describe our devices. To explain how the Magic 8 Ball “works”, I said that we “ask it” a question and that “it gives” us an answer. Likewise, Marche notes, the developers of language models tell us that they exhibit “impressive natural language understanding.” He warns against this kind of talk, citing a Google exec.

“I find our language is not good at expressing these things,” Zoubin Ghahramani, the vice president of research at Google, told me. “We have words for mapping meaning between sentences and objects, and the words that we use are words like understanding. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don’t understand. We have to take these words with a grain of salt.”

STEPHEN MARCHE, “Google’s AI Is Something Even Stranger Than Conscious,” The Atlantic, June 19, 2022

If you read that just a little too quickly you might miss another example of the way language misleads us about technology. “You could say that these systems understand just like a calculator understands addition,” Ghahramani says. But calculators don’t understand addition at all! Consider a series of examples I offered on Twitter:

Would we say that an abacus “understands” addition? What about a paper bag? You put two apples in it. Then you put another two apples in it. Then you have a look and there are four apples in the bag. The paper bag knows how to add? I don’t think so. If you want something that uses symbols, consider a spring scale. You calibrate it with standard weights such that 1 unit on the scale is one unit of weight. You have increasing weights labeled 1, 2, 3, 4, etc. On the tray there’s even a plus sign; you put two weights on it labeled “2” and the dial says “4”. Can the scale add? Of course not. A computer, likewise, is just a physical system that turns meaningless inputs into meaningless outputs. We understand the inputs and outputs. We imbue the output with meaning as the answer to a question.
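If it helps, the paper bag’s “arithmetic” can be simulated in a few lines. Note where the addition actually happens: not in the bag, but in the observer who counts.

```python
# A "paper bag" that can add: it merely holds whatever is put into it.
# Counting its contents afterwards yields a correct sum, but the
# arithmetic is entirely in the observer's reading of the state.

bag = []
bag.extend(["apple"] * 2)  # put two apples in
bag.extend(["apple"] * 2)  # then another two
print(len(bag))            # 4 -- and yet the bag computed nothing
```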

Justin E.H. Smith wrote a thoughtful (as ever) piece about the incident on Substack. “Much of this speculation,” he suggests, “could be mercifully suspended if those involved in it just thought a little bit harder about what our own consciousness is actually like, and in particular how much it is conditioned by our embodiment and our emotion.” Note that this is basically the opposite of Coeckelbergh’s suggestion. Smith is telling us to remember what we know about sentience and consciousness from our own experience rather than get lost in the philosophy of consciousness and its lack of a “satisfactory definition” of its object. We know LaMDA is not conscious because we know it’s not sentient, and we know it’s not sentient because we know what sentience is and that it requires a body. And we know LaMDA doesn’t have one.

I note that Spike Jonze’s Her is now streaming on Netflix. When I first saw it, it occurred to me that it was actually just a story about love and loss told from inside a very clever satire of the absurdity of artificial intelligence. Descartes once said that he could imagine that he had no body. I’ve never believed him; I think he was pretending. His “I” was literally no body … on philosophical stilts.

The Artifice of Babel

The universe (which others call the Library) …

Jorge Luis Borges

Borges’s famous “Library of Babel” contains every possible 410-page book, 40 lines to the page, 80 characters to the line, 25 characters to choose from. William Goldbloom Bloch has written a fascinating study of its “unimaginable mathematics” in which we are told, among many other things, that it contains 25^1,312,000 books. To put this in perspective (if we can call it that), Bloch also informs us that stuffing the known universe with nothing but books would require only 10^84 books. Perhaps we can put that into further perspective by considering that Queneau’s hundred thousand billion (10^14) poems would fill the pages of 2.4 x 10^11 410-page books. That, at least, is a possible arrangement of some of the 3.28 x 10^80 particles that our real universe consists of.
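The arithmetic is easy to check. The sketch below is my own; the figures for stuffing the universe and for binding Queneau’s sonnets into volumes are Bloch’s comparisons, recomputed here.

```python
import math

# Each book in the Library of Babel:
# 410 pages x 40 lines x 80 characters, drawn from a 25-character alphabet.
chars_per_book = 410 * 40 * 80
print(chars_per_book)  # 1312000

# The library holds 25**1,312,000 books. The number itself is too large
# to print, but we can count its digits: about 1.8 million of them.
digits = chars_per_book * math.log10(25)
print(round(digits))  # roughly 1,834,097 digits

# Bloch's comparison: a mere 10**84 books would stuff the known universe.
universe_full = 10 ** 84

# Queneau's 10**14 sonnets, one to a page, bound into 410-page volumes:
queneau_volumes = 10 ** 14 // 410
print(f"{queneau_volumes:.2e}")  # about 2.44e+11 volumes
```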

Borges’s library, by contrast, is impossibly large. I agree with Bloch that it is in some sense “unimaginable” and that the wonder is that it is nonetheless quantifiable. We can put numbers on it but we simply cannot make sense of it. We can’t get our minds around it. While the library contains all the great works of literature that ever have been and ever will be written, it also contains a version of each with every imaginable combination of misprints. There are books with pages and pages of mostly As and others with mostly Bs. Borges tells us that there is no discernible order to the way the books have been arranged, which means that the odds of picking a random book off the shelf that contains the text of, say, Hamlet, are astronomically low. The vast majority of the books in this library will contain nonsense. In that sense, the library, which Borges calls “the universe,” is absurd.

In his “intermittently philosophical dictionary,” Quine has proposed a simple way to understand this absurdity, a way to get our minds around its unthinkability, a way to see that Borges’s universe is not, properly speaking, a library at all and that what it contains are not, properly speaking, books. (To anticipate a later post, let’s say that they could not, properly speaking, be written.) He begins by reminding us what we’re dealing with:

The collection is finite. The entire and ultimate truth about everything is printed in full in that library, after all, insofar as it can be put in words at all. The limited size of each volume is no restriction, for there is always another volume that takes up the tale, any tale, true or false, where any other volume leaves off. In seeking the truth we have no way of knowing which volume to pick up nor which to follow it with, but it is all right there.

Quine (1989), p. 224

The fact that the size of each volume is both arbitrary and unimportant suggests a way of reducing the number of books. Instead of using every combination of 25 characters we could write all the books in Morse code, i.e., in sequences of dots and dashes. We now have 2^1,312,000 rather than 25^1,312,000 books. This will give us less information per page and therefore less information in each book. But, as Quine reminds us, “since for each cliff-hanging volume there is still every conceivable sequel on some shelf or other,” the library would still contain everything ever written by human hands (along with much, much more nonsense never seen by human eyes). We can go further.

There will be a great many books whose first or last halves are identical. So, if we split all the books in half, discard all but one of the now-identical ones, and then allow ourselves to serialize them when necessary to produce 410-page (and longer) works, no information is lost. And it is just as easy (i.e., it is impossible) to find what you’re looking for in this much smaller library (2^656,000 books).

Let us press on: the library could of course simply contain all possible pages of 3,200 characters of Morse code (there are just 2^3,200 such possible pages). But we can do better. Remembering Queneau’s sonnets, where each line is printed on a separate slip of paper, we can also imagine a library of all possible lines of 80 characters (only 2^80 lines), or even, as Quine now suggests, strips of seventeen characters. That gives us a mere 2^17, or 131,072, strips. By combining them any which way we can produce everything that Borges’s library contained. And, still, it will be as easy to produce Hamlet by these random combinations as it would be to find a reasonably legible copy of it in the chaos of the universal library.
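Quine’s chain of reductions is easy to verify with exact integer arithmetic (a sketch of my own; Python’s integers are unbounded, so the numbers can be computed in full even if they cannot be imagined):

```python
# Quine's successive shrinkings of the universal library.
chars_per_book = 410 * 40 * 80           # 1,312,000 characters per volume

morse_books = 2 ** chars_per_book        # dots and dashes: 2**1,312,000 books
half_books = 2 ** (chars_per_book // 2)  # half-length volumes: 2**656,000
pages = 2 ** 3200                        # all 3,200-character pages
lines = 2 ** 80                          # all 80-character lines
strips = 2 ** 17                         # all 17-character strips

print(strips)  # 131072 -- small enough to shelve in a single room

# Each library is dramatically smaller than the last, yet each still
# suffices, by recombination, to "contain" everything:
assert strips < lines < pages < half_books < morse_books
```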

Quine now puts a button on the thought experiment:

The ultimate absurdity is now staring us in the face: a universal library of two volumes, one containing a single dot and the other a dash. Persistent repetition and alternation of the two is sufficient, we well know, for spelling out any and every truth. The miracle of the finite but universal library is a mere inflation of the miracle of binary notation: everything worth saying, and everything else as well, can be said with two characters. It is a letdown befitting the Wizard of Oz, but it has been a boon to computers.

Quine (1989), p. 225

Perhaps you can see where this is going? Perhaps you briefly saw a Library of Tokens flash before your eyes? We’ll get there. For now, I merely want to point out how truly artificial the Library is. It cannot occur in nature. It is what happens when you put no natural constraints on a model and let the possibilities multiply, if not endlessly, then at least perfectly, imagining the instantiation of every arbitrary combination of already arbitrary signs. It is not a natural language model and its books are not displays of intelligence.

See also: “Robot Writes” and “A Hundred Thousand Billion Bots”

A Hundred Thousand Billion Bots

A poem is a machine made of words.

William Carlos Williams

In 1961, Raymond Queneau, a co-founder of OuLiPo, the “Workshop of Potential Literature”, published a curious book with the perfectly literal title Cent mille milliards de poèmes. It consisted of ten fourteen-line sonnets with an important twist. Each page was cut, from the spine to the outer margin, under each line. That is, the book was really a booklet of 140 slips, each containing one line of a poem. True to form, Queneau had even ensured that the rhyme sounds at the ends of the lines were the same, line for line, in all ten poems. As the title suggests, the implications were rather astounding. By turning, not the pages, but the slips, a different formally correct poem could be produced from each possible combination of lines. Even keeping the lines in the same order, as the binding did (so that line 5 in one poem could only be replaced with line 5 from another), this implied 10^14 (a hundred million million, or a hundred thousand billion) unique poems. That’s more poems than anyone could ever read, write, or imagine in a lifetime.*
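The combinatorics are simple enough to sketch. The placeholder strings below stand in for Queneau’s lines, which of course do all the real poetic work; only the counting is faithful.

```python
import random

# A sketch of Queneau's booklet: ten interchangeable versions of each
# of a sonnet's fourteen lines.
LINES = [[f"line {i + 1}, version {j + 1}" for j in range(10)]
         for i in range(14)]

def turn_the_slips() -> list[str]:
    # As in the bound book, line i can only be swapped for another
    # version of line i; the order of the lines never changes.
    return [random.choice(versions) for versions in LINES]

poem = turn_the_slips()
print(len(poem))  # 14 lines, one from each stack of slips

# Ten independent choices for each of fourteen lines:
print(10 ** 14)  # 100000000000000 distinct poems
```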

Image Credit: Toscano and Vaccaro 2020

In 1997, a French court ruled that it was illegal to publish the poem online. You can of course find it there anyway, but it’s interesting to read Wikipedia’s account (based on a 2001 article in the French magazine Multitudes).

In 1997, a court decision outlawed the publication on the Internet of Raymond Queneau’s Hundred Thousand Billion Poems, an interactive poem or sort of machine to produce poems. The court decided that the son of Queneau and the Gallimard editions possessed an exclusive and moral right on this poem, thus outlawing any publication of it on the Internet and possibility for the reader to play Queneau’s interactive game of poem construction.

“Copyright law in France”, Wikipedia

Espen Aarseth (1997, p. 10) has described it as a “sonnet machine”, a key example of what he called “ergodic literature,” from ergon-hodos (work-path), i.e., a text that requires “nontrivial effort” on the part of the reader to follow, a piece of writing that it takes work to read (see also Hayot and Wesp, 2004). The reader is not faced with ten poems that can be read in or out of sequence by flipping the pages of a book in the ordinary way (a “trivial effort”, let’s say), but must actively choose between ten slips of paper for each line, assembling a poem, and then making sense of the result.

One might argue (as Queneau’s son apparently did; see Vaver and Sirinelli, 2002, pp. 267-8) that removing this work of active reading, by having a machine effortlessly assemble a “random” poem from the available lines and present it seamlessly, violates the spirit of Queneau’s original work and therefore the moral rights of its original author (in particular, “the right to the integrity of the work”).

Queneau wrote ten poems and came up with a clever gimmick that turned them into a hundred thousand billion potential poems. He is the author of the original poems and the inventor of the gimmick. “A poem is a small (or large) machine made of words,” said William Carlos Williams; this one has ten small machines (in a sense Williams would recognize) that together make one that is larger (than he or any of us can imagine). One big robot to make a hundred thousand billion little bots! Is Queneau the writer of any but the ten original poems that the reader “works” upon, labors for? If not Queneau, who is? Are those poems written at all?

Let’s say we’ve got our work cut out for us!

____________
*In the spirit of OuLiPo we should probably do the math, right? (Queneau’s co-founder was the mathematician François Le Lionnais. See Toscano and Vaccaro 2020 for more.) Let’s say it takes at least five minutes to read and appreciate a sonnet with even a modicum of seriousness. That means it would take five hundred thousand billion minutes to read “the whole book”. Divide by 60 and we get over eight thousand billion hours. Divide by 24 and we get a little over three hundred and forty-seven billion days. Divide by 365 and we could get it done in almost a billion years.
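The footnote’s divisions, spelled out:

```python
# The footnote's arithmetic, one step at a time.
minutes = 10 ** 14 * 5  # five minutes per sonnet
hours = minutes / 60    # about 8.3 x 10**12 hours
days = hours / 24       # about 3.5 x 10**11 days
years = days / 365      # about 9.5 x 10**8 -- almost a billion years

print(f"{years:.3e}")
```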

Robot Writes

 We live in an age of science and abundance. The care and reverence for books as such, proper to an age when no book was duplicated until someone took the pains to copy it out by hand, is obviously no longer suited to ‘the needs of society’, or to the conservation of learning. The weeder is supremely needed if the Garden of the Muses is to persist as a garden.

EZRA POUND, ABC of Reading

For a couple of years now on Twitter, David Gunkel has been challenging me to “think otherwise” about robot rights. I’m still not exactly sure what (or how) he thinks about robot rights, but, to his credit, my own thinking about the issue has become clearer during the time that I’ve engaged with his work. (Most recently, see his chapter, “The Rights of Robots”.) While the subject of robot rights interests me at a gut level as an amateur philosopher, I’ve come to realize that an important part of it is actually in my professional wheelhouse. This summer, I’ve given myself the project of getting some of my thoughts written down.

We all know that technology has had a profound effect on writing practices. Just in the last thousand years of human history, the transitions from manuscripts to moveable type, from typing to word-processing, from dictionaries to spell checkers, from style guides to grammar checkers, and from spell and grammar checking to autocompletion, have gradually, albeit with increasing intensity, transformed what it means to say that someone has “written” something. The day is already upon us when a properly trained language model like GPT-3 can produce a plausible blog post with very little human guidance. The day when it can be trained to produce a coherent, scholarly prose paragraph that meets my formal definition (if not my personal standards) is probably not far off. Indeed, I’d be surprised if at least one hasn’t already been produced.

This raises the question, “Can a robot be an author?” (I have said it can produce text, and this is undeniable, but can it write?) The question is analogous to questions about the “moral standing” of robots or their “status as persons” and can be made explicitly a “rights” issue by asking, under what circumstances might a machine be given the “moral right to be identified as the author” of a text?

If you or I write a poem, we can assert the moral right to be identified as the author of that poem. Now, a canso, for example, is a relatively simple structure with a relatively simple purpose. Over a hundred years ago, writing about the troubadour’s predicament as it stood already eight centuries ago, Ezra Pound put it as follows:

After the compositions of Vidal, Rudel, Ventadour, of Bornelh and Bertrans de Born and Arnaut Daniel, there seemed little chance of doing distinctive work in the ‘canzon de l’amour courtois’. There was no way, or at least there was no man in Provence capable of finding a new way of saying in six closely rhymed strophes that a certain girl, matron or widow was like a certain set of things, and that the troubadour’s virtues were like another set, and that all this was very sorrowful or otherwise, and that there was but one obvious remedy.

Ezra Pound, “Troubadours—Their Sorts and Conditions”

I immediately imagine prompting GPT-3 with “Write six closely rhymed strophes that say that a certain girl, matron or widow is like a certain set of things, and that the troubadour’s virtues are like another set, and that all this is very sorrowful or otherwise, and there is but one obvious remedy.” With a little training (the canso provides a rich tradition of exemplars to be devoured by a “learning machine”), I’m sure GPT-3 could produce a poem equal to one I could produce on an average day (without the intercession of the Muses, let’s say). But who is the author of that poem? Was this poem actually “written”? Can a sufficiently trained language model claim authorship of the poem?

A language model can also be trained to summarize a journal article, or even a whole set of journal articles, and a few years ago Springer published a book about lithium batteries that was written by such a machine. Who (if anyone) is the author of that book? And under what circumstances would we grant an algorithm either (legal) copyright or (moral) standing as an author? Why would we do so? Why might we have to? What would it mean if we did?

These are the questions that I would like to explore over the summer. I’m expecting to learn something as I look into this (I’m already learning about autoregressive language models and deep learning from people like Jay Alammar, for example), but I won’t keep you guessing about my views going in. Under no circumstances can a machine be an author. Robots can’t write. Writing is not merely text prediction, and scholarly discourse is not merely a language model. As Borges put it long ago, “a book is more than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory” (“A Note on (toward) Bernard Shaw”, Labyrinths, p. 213). Properly speaking, there can be no artificial writing because there is no artificial imagination. Imagination precedes artifice.