ChatLSD

I came up with this analogy about a week ago and tweeted it. I’ve since been thinking about it, and I think it holds up under scrutiny. So I thought I’d write a quick post.

Back in November I heard Gary Marcus describe large language models as “autocomplete on steroids” and I immediately free-associated to Terence McKenna’s “stoned ape” theory, which, we might say, holds that human language is autocomplete on mushrooms. That’s a bit glib, but it suggests a more serious analogy. “PC is the LSD of the 1990s,” the acid guru Timothy Leary once said (he meant “personal computers,” not “political correctness”), and today he would be even more correct to suggest that AI is our LSD. Now, I don’t actually mean that there is some interesting similarity between artificial intelligence and psychedelic experience. These are very different things. But what I’ve noticed is that my reaction to AI, and the range of reactions of my peers in academic writing instruction, is similar to the reaction to the introduction of LSD into mass culture in the 1960s.

To set the scene, let me state my reasonably educated but utterly non-expert view of LSD. (I’ve checked my facts in this post using Wikipedia.) Lysergic acid diethylamide was first synthesized by Albert Hofmann in 1938, and, in 1943, he discovered its psychedelic properties by accident. It was then used in psychiatric research and treatment in the 1950s and 1960s, while the CIA also began to weaponize it for “intelligence” purposes. (Maybe you can already see the pattern!) Its use in research made it widely available to academics and their students, and it was soon adopted by the counter-culture of the 1960s. Timothy Leary was one of its most ardent proponents, famously urging young people to “turn on, tune in, drop out,” and was described by Richard Nixon as “the most dangerous man in America”. Despite being “non-addictive with low potential for abuse” and known to “induce transcendental experiences with lasting psychological benefit,” LSD was “scheduled” in 1968, making it illegal to use both medically and recreationally. But by this time it was, as it were, “too late”; Sgt. Pepper had already been, if you will, inspired and conceived, and, indeed, released to the public.

I’m sure you can imagine many clever ways to replace the people and events in this story to produce a pretty close approximation of the narrative around GPT. But, for me, the analogy suggests a number of important things.

  1. There’s no way back. Our students will increasingly use ChatGPT (and other language models) to inspire and, no doubt, produce their written assignments at university. I’m sure they’ll also find ways to use it “recreationally”.
  2. Prohibition will not work. Punishing students for using it where it seems relevant to them and is technically possible will merely undermine our authority. (I’ll leave it to you to work out the analogy to the “war on drugs” in its details.)
  3. Language models are perfectly safe. They are non-addictive and, so long as students continue to read and write on their own, will not damage their minds.
  4. The quality of artificially generated text will improve. As will its “potency”. That is, both the style and the content of the output of language models will become increasingly effective in all sorts of applications.
  5. Language models can both motivate and inspire students to produce writing they might not otherwise have come up with.

This may seem like an endorsement, which brings me to my final reason for liking the GPT/LSD analogy. Since the Summer of Bots, I have experimented extensively with GPT-3 and ChatGPT, and I have thought a great deal about it. And, if I had been a professor of philosophy or psychology in 1963, I think I might have experimented with LSD and mushrooms and DMT and other drugs, at least until they were forbidden. But…

  6. I will not encourage students to use GPT to assist them in their writing projects.

That is, as with LSD, while I’m comfortable with it myself, and while I grant that it has probably helped many artists and thinkers have experiences that led them to produce interesting work, to the benefit of themselves and our culture, I am not comfortable with the idea of integrating artificial intelligence into the process of developing students’ natural abilities to make up their minds, speak their minds, and write it down. Yes, I know that many students will use it anyway, and I’m not going to warn them off it, but I will not personally advise them to see how it might help solve their writing problems. I’m simply not sure I know how to use it to help them become better writers.

If they do use it to make the Sgt. Pepper’s Lonely Hearts Club Band of the college essay, that’s great! It was possible to enjoy that record without dropping acid too. But I am not going to engage explicitly with their experiments with artificial intelligence or suggest particularly effective ways of getting the most out of it. Like I say, I’m not sure I know enough about it.

Of course, if LSD had not been forced underground in 1968, there’s no telling what uses mainstream psychiatry and psychology, philosophy and poetry, would have found for it, and what place it would therefore now have in academic life. In some alternate universe, acid trips might today be familiar parts of the college (and even high school!) curriculum, as common as field trips! I truly hope that we make the best of artificial intelligence too. I hope we don’t let a moral panic awaken our prohibitionist impulses.

Let’s turn on, tune in, and stay calm.

2 thoughts on “ChatLSD”

  1. I don’t pop by as often as I would like (as often as I should). But an interesting and well-written post, Thomas! …

    Here is another potential analogy between ChatGPT and LSD: Perhaps the brute forces of capitalism will ameliorate the problems of its widespread use. Because of the sheer amount of computing power (and, by extension, electricity) required to run systems like ChatGPT, I don’t think it’s likely that such models will stay free to use for long. What we are witnessing now could very well be the equivalent of standard street-pusher marketing: the first “high” is free.

    1. Yes, that’s an interesting twist, and I agree that this is something that will not be entirely “free”. However, the word on the street is that it will just be added to existing products (like Word and Google Docs, not to mention Bing and Google).

      But there is no doubt that the current push is to get us “hooked” on something. It’s worth remembering that until very recently Twitter did not consider charging the user a viable business model. They were selling our (hooked) attention to advertisers instead. (The old “Are you the customer or the product?” question.)

      Finally, to be fair to GPT (and to media in general), there is little evidence that digital technologies are literally “addictive”. They’re just all-too-easy sources of diversion. That’s also similar to LSD, as I understand it. One doesn’t become chemically dependent, and the effect lessens with use. And it’s easy to quit if you find yourself wasting too much time with it. I think the same will be true of AI.

      Still, there is that famous Joan Didion anecdote.
