Why You Can’t Cite ChatGPT

This is an issue that scholars and students are grappling with these days. Importantly, it is an issue that their peers and teachers, as their readers, are going to have to grapple with too. It’s as much a question about how to write scholarly texts as how to read them. How important is it to know whether a passage in a text you are reading was generated by a language model? By “a passage,” here, I mean anything from a few words or phrases to whole paragraphs and even essays. Do readers need to know what role an artificial intelligence played in composing them?

To many people today, applications based on large language models (LLMs), like ChatGPT, occupy a confusing, perhaps “uncanny”, position between tools, like Word and Grammarly, and sources, like books and articles. On the one hand, they are clearly machines that we set in motion to produce an output. This output is “bespoke” in the sense that it’s a unique product of each interaction with the machine, tailored to our particular purposes. That output can subsequently be “mechanically reproduced” in as many copies as you like; but the outcome of the interaction is an “original”. On the other hand, this output is distinctly a “text”. Not only can you mechanically reproduce it, you can copy it directly out of your LLM application and into your own writing, just as you might copy phrases, sentences, and paragraphs into your document as quotations from published sources. Because we would always credit published sources in those cases, using quotation marks and providing a reference, it feels wrong not to tell the reader we used an AI, not to say “who” put the words together in exactly that order. Even if we edit that order a little, we feel like we should cite it, as we would when paraphrasing.

We feel no such shame when using spell and grammar checking software, nor, usually, even when using translation software, which often serves mainly in the capacity of a dictionary; and we of course only cite dictionaries for special rhetorical effect, not every time we need to look up a word. (Indeed, imagine citing a thesaurus every time it “suggested” a different word than the one we had originally come up with.) Likewise, we only tell our readers what we learned from Google in special cases, such as when the first page of results or the sheer number of hits itself provides a useful insight. That is, we don’t so much “cite” our tools as tell stories about our use of them. We invoke them narratively in our writing, not, strictly speaking, bibliographically.

We must remember that the output of a language model is merely a representation of the model’s prediction. It’s really just a very direct way of telling us what a likely response to our prompt is, presenting one very likely response rather than a distribution of probable responses. (If you want a sense of the probable alternatives, you simply ask again.) The output does not represent objects or facts in the real world, nor thoughts or ideas in the mind of any writer, only probable completions of strings of words. So there is no source to cite. You can, of course, tell the reader that you “asked ChatGPT” and it responded with a particular string of words. That might have a place in some argument you’re making. But if the words it gave you happen to capture exactly what you mean — even what you mean only just now, after having been “informed” by ChatGPT — then those words are now yours. There is simply no source to cite, because no author was expressing anything until you decided to make these words your own.
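If it helps to picture what “a likely response rather than a distribution of probable responses” means, here is a deliberately toy sketch. The phrases and probabilities are invented for illustration, and no real model works at this level of simplicity; the point is only that each output you see is one sample drawn from a distribution over possible continuations, and “asking again” simply draws another.

```python
import random

# Toy illustration only (not how any real model is implemented):
# a language model assigns probabilities to possible continuations
# of a prompt; the application shows you one sample from that distribution.
continuations = {
    "a likely completion of your prompt": 0.55,
    "another plausible completion": 0.30,
    "a less likely but possible completion": 0.15,
}

def sample_output(dist):
    """Return one continuation, chosen in proportion to its probability."""
    phrases = list(dist.keys())
    weights = list(dist.values())
    return random.choices(phrases, weights=weights, k=1)[0]

# "Asking again" just draws another sample from the same distribution.
print(sample_output(continuations))
print(sample_output(continuations))
```

Nothing in this little sketch is an author expressing a thought; it is a mechanism returning a probable string, which only becomes anyone’s “statement” when a writer adopts it.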

Failing to cite ChatGPT cannot be plagiarism because it does not create a source. (I’ll say more about this in upcoming posts.) It merely suggests some words in a particular arrangement for you to do with as you please.

Our readers, importantly, assume that our writing is supported by all manner of machines and resources and they don’t want to hear a full accounting of our (often fumbling) application of them. Indeed, “oversharing” about our use of totally ordinary writing aids often undermines our ethos as scholars. It’s like citing Wikipedia or quoting from a “book of quotations”; it appears inexpert, amateurish.

In the future, language models will be “baked into” our word processors and, as I imagine that future, most people, including scholars, and certainly students, will be composing their sentences against a background of autocompleted “ghost” paragraphs that they can at any point simply accept (by pressing tab, for example) and move on to the next paragraph. These paragraphs will be generated by models that have been fine-tuned on the writer’s own writing, learning from each accepted autocompletion what sort of writing the writer prefers. It will look something like the following screenshot.

Screenshot from iA Writer in “focus mode”.
(Note that the ghosted text was in this case not generated by a language model but was written by me. This is an illustration of an imagined future application of this technology.)

It obviously makes no sense to cite my word processor for the completed text. Should I put quotation marks around the ghosted text and put “(iA Writer, 2023)” at the end of the paragraph? Surely, that would be nonsense. I am merely approving the language model’s adequate (if perhaps imperfect) prediction of what I was going to say.

In my last post, I mentioned in passing the embarrassing case of a university’s administrative department that cited ChatGPT as the author of an email they sent out to students. Someone thought it was a good idea to offer what they must have thought of as transparency by citing the output of the language model as a “personal communication”. This is complete nonsense, of course. A language model is not a person and it does not communicate. Your interaction with ChatGPT is not actually a conversation, no matter how it may feel. Technically, and for academic purposes, it is no different from querying a database. I really hope we get over our embarrassment about using this powerful tool for improving our written outputs and stop thinking we have to tell our readers about the “contribution” a language model made to our writing.

Citing ChatGPT is as uninformative as saying you “found something on the internet”.

2 thoughts on “Why You Can’t Cite ChatGPT”

  1. My takeaway from your thoughts is to put aside an anticipatory sense of frustration that I had about the idea that a student might give me an essay partly or entirely written by an LLM. I assumed that if I found out that was the case, I would feel frustrated about wasting my time commenting on it. But now thanks to your posts, I see that when a student takes on such material – text without an author – they are making it their own, and I can comment on it as theirs. If they then say, but an LLM wrote that, not me, I’ll say, but you adopted it and signed it as yours, so the comments are for you, not for the LLM.

    I think this is a version of your point about how we don’t identify the standard tools we write with.

    1. Yes! That’s exactly right. Tell the students that invoking their interaction with ChatGPT as an excuse for bad writing/ideas is poor form. If it didn’t help them write better, why did they use it?

      (Students sometimes also blame bad stylistic choices on Grammarly, of course. They shouldn’t do that either.)

      I would add, however, that we’re probably going to feel some frustration with take-home essays going forward, from a strictly pedagogical point of view. Maybe we can just happily give fully automatic essays the Cs they’ll unfortunately deserve. But it will grate against something in me if even a third of a student’s essay is not made by hand.

      That’s why I strongly recommend on-site examinations. Then we’ll know their words are truly their own at least in those situations.
