Monthly Archives: August 2022

Robots, Rights, and Writers

for David and Josh

“…theoretical research in its academic form is the privileged place for these functions to be confused.” (Jacques Derrida)

I’ve learned a great deal this summer. I’m not sure I’ve resolved any of the core issues around artificial intelligence and academic writing, but I have reached a much better understanding, both of how these technologies work and of how our scholarly discourse approaches them. Intellectually, it has calmed my nerves a bit; I’m no longer too worried that robots will suddenly gain moral standing and legal rights in any revolutionary way. Professionally, however, I’m perhaps more worried than I once was; I have to admit that artificial intelligence is more advanced than I had thought, perhaps even than I had thought possible. I’m still confident that it will never be conscious (or even sentient), but it is increasingly able to seem so. In twenty years, I may very well be out of a job. Fortunately, I’ll also be retiring. I’m too old to learn to code!

Anyway, I wanted to make a few quick closing remarks before leaving this topic for a while and getting back to the ordinary business of writing academically. I hope you’ll grant, as I said at the start of the summer, that language models are nominally in my wheelhouse, but I understand if, as a regular reader of this blog, you think I’ve been off on a bit of a tear. I promise that the next few months will be devoted to the art of learning, the craft of research, and the altogether human pleasures of writing. In any case, here are three quick paragraphs about robots and writing on the way out, each organized around a sometimes subtle distinction that I’ll no doubt be thinking about far into the future.

Robots/Machines. All robots are machines, and not all robots are humanoid. What is the important difference? I think it’s that a robot seems to “serve” us; it exists in what looks like servitude. It’s a machine that interacts with us on a scale that feels “social”, and this naturally evokes our sympathy. A machine that does something for us, i.e., something that we would otherwise have to do ourselves, gives us a different feeling, when we watch it work, than one that we do something with, i.e., one that we use to accomplish some goal. We drive, we vacuum. Once the car drives itself or the vacuum moves by itself, we begin to identify with its predicament. Part of this has to do with the fact that it is doing its work within the same physical constraints as we do ours and with roughly the same urgency. It’s got the room to clean; there’s a speed limit to observe. That seems to be how we distinguish “robots” from mere machines. They occupy space and time on a human scale.

Rights/Rules. The key here is a sense of freedom. Rights give us what Daniel Dennett called “elbow room”, a space in which free will can operate meaningfully. Rules can, of course, be broken, but that word itself suggests that you either follow them or you don’t work. (I’ve always liked the irony of “working to rule” as a form of labor unrest.) We can have rights and enjoy the freedoms that they suggest without ever invoking them. Thinking about the reaction of the state to people like Julian Assange and Edward Snowden, a dark thought once occurred to me: if you want your privacy you’ll have to keep it like a secret. Rights, like privacy, exist insofar as they are respected, while rules must be enforced, like a secret must be guarded. We could imagine rules that make us do only what we’d want to do anyway; but we’d rather simply have the right to do those things. It just feels right.

Writers/Authors. This one has perhaps been talked, if you’ll pardon a little joke, to death. What is an author? What is writing? What is the future of the book? These questions have been addressed from every angle by Barthes, Foucault, Derrida, Inc. & Co. Etc. “An author performs a function,” said Barthes, making a useful distinction, “a writer, an activity.” For the author, language is constitutive; for the writer, it merely supports a practice. For the author, language is an end in itself; for the writer, it is a means. I may not be getting that exactly right, but it’s this sort of distinction we’re talking about. For me, it’s all about authority. The author has rights, moral rights, not just legal ones. The writer gives us information but the author takes responsibility. While I’m loath to admit it, “writer” almost becomes a pejorative. Indeed, the more I think about it, this distinction is probably moot with regard to robot rights. Robots, I would say, can’t be authors; they can’t even write.

It has been suggested a number of times during these discussions on Twitter that I’m a poor scholar. I like to think I have cobbled together a workable philosophy over the past thirty years (since I was an undergraduate in that strange trade), but it is true that I don’t approach machine learning, artificial intelligence, and the rights of robots with the same, let us say, “discipline” as people like David Gunkel and Josh Gellers. (Perhaps this paragraph should be rehearsing the distinction erudite/dilettante.) I hope, however, that I have acquitted myself as a plausible “educated layperson”; i.e., the sort of citizen who might one day have to make an “informed decision” about the rights of our robots or the governance of our machines. Perhaps I have managed only to be, as I have described myself in another context, a legitimate peripheral irritation. So be it. Whatever the future may hold, for now AI is certainly a subject on which we may hone our natural intelligence.

_________

This is the last post in a series of fifteen that started with “Robot Writes” back in June, and proceeded through “A Hundred Thousand Billion Bots,” about Raymond Queneau’s famous book, “The Artifice of Babel,” about Borges’s famous library, “Sentience on Stilts,” about Blake Lemoine’s infamous claim for LaMDA, “The Anxiety of Artifice,” about the existential dread of machines, “An Infamous Device,” about the Tower of Babel, “Automatic Sensemaking” and “Are Language Models Deprived of Electric Sleep,” two experiments with GPT-3, “The Automatic C,” a reflection on the ability of machines to pass exams, “Handwriting,” an attempt to recover my wits, “Do Transformers Desire Electric Rights,” an attempt to answer a challenge from Steven Marlow, “Subject-of-a-Text,” a reflection on animal rights, “I Am the Text. The Text is Me. (Or, There Is Nothing Outside the River),” a close analysis of the “personhood” of the Whanganui River, and, “The Virginia Incident,” which is almost a deconstruction of the “rights” of delivery robots in that commonwealth.

The Virginia Incident

with apologies to Robert Ludlum

In 2017, the Commonwealth of Virginia passed a law giving “all the rights and responsibilities applicable to a pedestrian” to delivery robots. In “The Rights of Robots”, David Gunkel invokes this law to show that, “Rights does not automatically and exclusively mean human rights.” This will not, of course, be news to anyone who is familiar with animal rights or the rights of nature, or, indeed, to someone familiar with corporate rights, which is perhaps the most common form of non-human legal personhood. Indeed, David is careful to note that “In granting [pedestrian] status and the rights and responsibilities that go with it to personal delivery robots, the State Legislature was not seeking to resolve or even address the big questions of robot moral standing or AI/robot personhood.” But on Twitter, David and, especially, Josh Gellers are often a bit more direct, citing the law to argue that “we already have robots with legal rights”.

I have had many stimulating exchanges with them over this law, and I thought now would be a good time to bring the arguments together in a more coherent form. To make sure we’re all starting on the same page, and with the issue clearly in view, I want to begin by quoting the key sentence in the Virginia law and summarizing in the simplest possible terms the interpretations that are at issue. §46.2-908.1:1 of the Code of Virginia states:

D. Subject to the requirements of this section, a personal delivery device operating on a sidewalk or crosswalk shall have all the rights and responsibilities applicable to a pedestrian under the same circumstance.

From this, David and Josh conclude that robots have rights in Virginia. At first pass, this may seem simply like a matter of reading the law, which says, explicitly, “… a personal delivery device … shall have … rights …” But I want to argue that the ellipses are more significant than David and Josh think. I think the “requirements of this section” do more work than they recognize, as do the stipulations that the devices must be “operating” and “under the same circumstance” as a pedestrian.

We can sum up our disagreement by contrasting two interpretations of the law, call them the Gunkel-Gellers interpretation (GG) and the Thomas Basbøll interpretation (TB).

(GG) Delivery robots have rights to operate in Virginia. [Please see the updates at the bottom of this post]*

(TB) People (persons) have rights to operate delivery robots in Virginia.

If you think the difference between these two interpretations is too subtle to bother with, you won’t find this post very interesting. Otherwise, get ready to get into some legal weeds. (For a good primer on the subject, see Cindy Grimm and Kirsten Thomasen’s “On the Practicalities of Robots in Public Spaces”.)

David would like us to use Hohfeldian analysis when considering the assignment of rights. Now, I’m no expert on Hohfeld, but my attempts to engage with David’s position have forced me to try to understand the terminology of “incidents” and “correlatives” that he uses to frame his own discussions. It is presented in his book and the recent paper, and I can recommend Thompson’s 2018 celebration of the centenary of Hohfeld’s framework (Laws 7: 28) as well. The basic idea is to analyze “molecular” rights into more fundamental “incidents” that have “opposites” and, as I will be emphasizing here, “correlatives”. To take the simplest case (a right stricto sensu): if one person has a right to something, another has a duty not to prevent it. For example, a pedestrian has a right to cross on a green light and a driver has a duty not to run them over. Here the right to cross is the “incident” and the duty to stop is the “correlative” in what is called a “jural relation” between the driver and the pedestrian. Crucially, for Hohfeld, only persons can be the subjects of duties and rights.

We can see the importance of the jural relation between persons once we include what the Virginia Code calls “vehicles” and “devices”. Consider, for example, a traffic incident (not in the Hohfeldian sense, strictly speaking, mind you!) between a car and a bike. Suppose that the bike has the right of way and the car has the duty to yield. It is obviously not the machines (the bicycle and the automobile) that have here been assigned rights and duties but their “operators” (the rider and the driver). In Hohfeldian terms, the driver may “violate” their duty by “invading” the right of the rider (literally invading the bike lane, for example). Neither the car nor the bicycle is (juridically) invaded or violated (although either may be seriously damaged) because neither is a person and therefore neither is a subject of the correlative rights and duties.

Likewise, the interactions between cars, bicycles, and pedestrians are governed by the law, which specifies the jural relations between them, i.e., stipulates their rights and duties (or, as the Virginia Code puts it, “responsibilities”). Cars cannot drive on the sidewalks and pedestrians must not obstruct traffic by jaywalking, on pain of violating their duties to each other. This is all ordinary, trivial stuff. But it will become important when we consider the question of whether an “incident” between a delivery robot and an automobile or bicycle can be imagined. Can the delivery robot’s “pedestrian rights” find correlative duties in cars and bikes? As I pointed out, they would have to find them in the people who operate them, and that’s our first clue to what the law is really saying.
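For what it’s worth, the structure of this analysis can be put in a form even a machine could check. Here is a toy sketch in Python (the class names are my own illustration, not Hohfeld’s terminology or the Code’s) in which only persons can be parties to a jural relation, while devices can figure only as what the relation is about:

```python
# A toy model of a first-order Hohfeldian jural relation: a claim-right
# (the "incident") held by one person, correlated with a duty borne by
# another. All names here are illustrative, not drawn from the law.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Person:
    name: str


@dataclass(frozen=True)
class Device:
    label: str  # a bicycle, a car, or a personal delivery device


@dataclass(frozen=True)
class JuralRelation:
    right_holder: Person  # holds the claim-right (the incident)
    duty_bearer: Person   # owes the correlative duty
    concerning: Optional[Device] = None  # a device can only be the object


# The rider's right of way correlates with the driver's duty to yield;
# the bicycle itself is never a party to the relation.
relation = JuralRelation(
    right_holder=Person("rider"),
    duty_bearer=Person("driver"),
    concerning=Device("bicycle"),
)
print(relation.right_holder.name)  # rider
```

The point, on my reading, is simply structural: substitute “operator” for “rider” and “personal delivery device” for “bicycle” and you get my interpretation of the Virginia law; the rival interpretation would require a Device in the right_holder slot, which this model, like Hohfeld, does not allow.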

Remember how we learned that Te Awa Tupua is not identical with the Whanganui River; “it is merely river-kindred”? The same is true of what the Virginia Code calls a “personal delivery device operating”; it is not simply identical with the six-wheeled robot that is bringing you your take-out order. Like Te Awa Tupua, an operating personal delivery device is a unified whole consisting of mechanical, human, and, if I may, metaphysical elements. Unlike a human body, it has no rights except when it is operating legally “on a sidewalk or crosswalk”. To be sure, a human body only has pedestrian rights “under the same circumstance”; but what makes us different is that we are capable of bearing other rights under other circumstances. We are persons.


How does the PDD become capable of bearing rights when operating? It does so by legally becoming a “device” in the care of a “person”, just like a bicycle. It took me some close reading of the law to understand this fully, but I’m now certain that the best way to understand the “rights” of personal delivery devices in Virginia is figuratively. Just as a pedestrian carrying a wooden plank along the sidewalk has a duty (correlated with the rights of other pedestrians) that reaches all along the length of the plank, so too does the operator of the PDD have a right to have the robot cross an intersection and not have it be run over by a car. The operator, in this incident, does not have the right to not be run over themselves (though I suppose we do have that right when sitting in a control room far from traffic too) but the driver of the car does have a duty to the operator not to run over the PDD. A violation of that duty is an invasion of the operator’s right, not the robot’s. To think otherwise is to think that the plank, not the person carrying it, invaded the rights of the pedestrian he smacked jape-like on the back of the head.

“Okay, that may all sound very sensible,” I hear you saying, “but is that actually what the law says? Doesn’t it clearly say that the device has rights?” Well, let’s have a look. First, I discovered that the law does actually talk about the “rights” of bicycles in very similar ways, and in that case we’d obviously read the expression figuratively — i.e., as metonymy for the rights of the rider of the bike to move the bike in certain ways and the duty to move it in certain other ways — not as literally making a bike the subject of legal rights. In §46.2-904.1, covering electric power-assisted bicycles, we read the following:

A. Except as otherwise provided in this section, an electric power-assisted bicycle or an operator of an electric power-assisted bicycle shall be afforded all the rights and privileges, and be subject to all of the duties, of a bicycle or the operator of a bicycle. An electric power-assisted bicycle is a vehicle to the same extent as is a bicycle.

Notice that if we take this literally, the law is referring to the rights of both electric and traditional bicycles, saying one shall have all the rights and duties of the other, and therefore implying, again, if we take it literally, that these devices are rights-bearing subjects. That’s obviously nonsense, so I think we can safely conclude that the law does, at least sometimes, want to be taken figuratively.

Indeed, as I suggested in my last post, Derrida might point out that there is nothing outside of the law. Or, rather, the meaning of any particular sentence in the law must be traced through the play of signification that operates in the context of the entire law, and the context of the application of that law. We must make “the effort to take this limitless context into account, to pay the sharpest and broadest attention possible to context, and thus to an incessant movement of recontextualization” (Limited, Inc., p. 136). We can see how this works already in the core sentence I quoted at the outset.

D. Subject to the requirements of this section, a personal delivery device operating on a sidewalk or crosswalk shall have all the rights and responsibilities applicable to a pedestrian under the same circumstance.

Well, what are the “requirements of this section”? The context of the section provides, first of all, strict conditions under which a PDD may operate. I’ll leave out the more mundane of these and emphasize only the ones I think matter for the present discussion.

B. A personal delivery device shall:
4. Include a unique identifying device number;
5. Include a means of identifying the personal delivery device operator that is in a position and of such a size to be clearly visible;
E. A personal delivery device operator shall maintain insurance that provides general liability coverage of at least $100,000 for damages arising from the combined operations of personal delivery devices under a personal delivery device operator’s control.
F. Any entity or person who uses a personal delivery device to engage in criminal activity is criminally liable for such activity.

In other words, a PDD doesn’t have any rights unless it is clearly and distinctly associated with an operator. Indeed, if we look at the broader context of “this section”, namely, the definitions that are provided at §46.2-100, there seems to be no doubt about the importance of “operators” to “devices”.

Except as otherwise provided, for the purposes of this title, any device herein defined as a bicycle, electric personal assistive mobility device, electric power-assisted bicycle, motorized skateboard or scooter, moped, or personal delivery device shall be deemed not to be a motor vehicle.

This makes it pretty clear what “sort of thing” we’re dealing with here. Moreover,

“Personal delivery device” means a powered device operated primarily on sidewalks and crosswalks and intended primarily for the transport of property on public rights-of-way that does not exceed 500 pounds, excluding cargo, and is capable of navigating with or without the active control or monitoring of a natural person. Notwithstanding any other provision of law, a personal delivery device shall not be considered a motor vehicle or a vehicle.

That is, a PDD is a device that is in fact “operated”. By whom?

“Personal delivery device operator” means an entity or its agent that exercises direct physical control or monitoring over the navigation system and operation of a personal delivery device. For the purposes of this definition, “agent” means a person not less than 16 years of age charged by an entity with the responsibility of navigating and operating a personal delivery device. “Personal delivery device operator” does not include (i) an entity or person who requests the services of a personal delivery device to transport property or (ii) an entity or person who only arranges for and dispatches the requested services of a personal delivery device.

When I discuss this with David and Josh, we generally end things here, at an impasse that can be expressed as a challenge. “See you in court,” as David put it in the case of Te Awa Tupua. The choice between the GG and TB interpretation of §46.2-908.1:1F ultimately comes down to a prediction about future court proceedings.

That is, if you believe, after reading the law as closely as (or more closely than) I have, that robots do literally “have rights” on the sidewalks and in the crosswalks of the streets of Virginia then you believe that one day a court decision will turn on this. If, by contrast, you believe, as I do, that the law does not give rights to the robots, but to the companies that operate them, that the correlative rights and duties are distributed among ordinary legal persons — drivers, riders, walkers, and the “entities and their agents” (companies and their employees) that operate delivery robots — that move about in the traffic of the commonwealth, then you believe that no such case will ever arise. You believe that if someone were to hold a robot, not an operator, responsible for an accident, or a lawyer were to bring suit on behalf of a robot, not the company that owns it, the case would be immediately dismissed. I would have to admit that such an “incident” would puzzle me even so. But, as I hope I have shown, it would not, in any case, be Hohfeldian.

______
*Update (23/08/22, 15:30): David has objected to my original formulation of his interpretation. Even with “to operate” struck out, I’m not sure he’s satisfied. I’m currently trying to work out something that he and Josh can accept. Will update again when I’ve reached an agreement with them.
Update (24/08/22, 12:30): It does not look like an agreement will be possible (see David’s tweet and the associated thread). I stand by my own analysis of the law, but I’m no longer sure I understand David’s or Josh’s. It turns out that they don’t, as I had supposed, want to claim that delivery robots “have rights” in Virginia in any straightforward sense. To be clear: I don’t think they have rights in any sense.

Final update (28/08/22, 16:40): After some back-channel correspondence with David, we have arrived at a statement of the issue that we can both endorse:

  • According to David, the Virginia law extends the rights of pedestrians to delivery robots operating in the state. On his reading, the device itself has the same rights and responsibilities as a pedestrian when moving around on the sidewalks and in the crosswalks of the streets of Virginia. To put this in Hohfeldian terms, ‘jural relations’ exist between drivers and robots in traffic, just as they exist between drivers and people in traffic.
  • On my reading, the law does not extend rights to the delivery robots themselves; it only gives rights to the owners of delivery robots to operate these devices. The law requires others to respect the robots as if they were pedestrians and requires operators to ensure that the robots follow the same rules as pedestrians. Putting this in Hohfeldian terms, there is no ‘jural relation’ between drivers and robots in Virginia; rather, the relevant rights and duties govern the relation between the driver of a vehicle and the operator of a delivery device.

I Am the Text. The Text is Me. (Or, There Is Nothing Outside the River.)

with apologies to Te Awa Tupua

Like animal rights, the rights of nature are often invoked as a model for thinking about the rights of robots. In March of 2017, for example, the Parliament of New Zealand “confer[red] a legal personality on the Whanganui River” as part of a settlement with the Maori tribes that traditionally lived along its banks. Scholars like David Gunkel and Josh Gellers frequently cite this act as a key moment in the history of “rights for non-humans” and, therefore, an opening to the possibility of granting rights to machines. If a river can be a person, and the subject of rights, why can’t a robot or other artificial entity?


The short answer is that the Whanganui River, in the sense that we who are oppressed by the Western metaphysics of presence understand it, was not granted personhood by the Te Awa Tupua Act of 2017. Te Awa Tupua is not just a river, and its rights belong to a spirit, what the Romans called a genius loci. From the point of view of the Western legal tradition, Te Awa Tupua is basically a corporation tied to a specific geography, much like an incorporated town. The river itself, which is to say, the watercourse through the landscape that we Westerners too easily point to and call “the” Whanganui, does not have any rights according to the law.

The purpose of this post is to think some of these issues through. As usual, I’ll try to bring the discussion around to the possibility that an artificial entity could be an “author”; that is, I will try to see whether Te Awa Tupua can provide a model for a “legal personality” for, say, GPT-3, giving it rights of authorship. The answer is not quite no, but also not quite (and you’ll have to pardon me for not killing this darling of a pun) the watershed moment for “robot rights” that Josh and David imagine.

Obviously, I’m not here challenging the legal personhood of Te Awa Tupua, nor suggesting that it shouldn’t have any rights. The Act clearly says that it does and, as we’ll see, I appreciate the legal brilliance of the settlement. The question I want to address is, What — or, indeed, who — has those rights? Already back in 2012, when the agreement was first reached, the Ministry of Treaty Negotiations made clear that the river would be recognized as a person “in the same way a company is, which will give it rights and interests.” When the act was passed, this idea was stressed again. “I know the initial inclination of some people will say it’s pretty strange to give a natural resource a legal personality,” said Chris Finlayson, who had negotiated the settlement. “But it’s no stranger than family trusts, or companies or incorporated societies.” As I want to show in this post, this interpretation is borne out by the act itself, though, like I say, couched in strangely metaphysical language.

Let’s begin with the sentence in the law that Josh and David wish to emphasize.

14(1): Te Awa Tupua is a legal person and has all the rights, powers, duties, and liabilities of a legal person.

This does indeed seem pretty unambiguous. But let’s pause for a moment to notice that it does not say that the Whanganui River, which is the official name of the watercourse and what you will find on a map, is a legal person. Rather, it says that an entity called Te Awa Tupua, which is what the Maori call it, is a legal person. You don’t have to be Willard Van Orman Quine to find this a little interesting. What is this entity that the law refers to? Is it just the Whanganui River? Or is it something else?

As it happens, Quine wrote a paper many years ago in which he worked through in elaborate detail how it is or isn’t possible to step into, or rather, refer to, the same river twice.

The introduction of rivers as single entities, namely, processes or time-consuming objects, consists substantially in reading identity in place of river kinship. (“Identity, Ostension, and Hypostasis”, in From a Logical Point of View, p. 66)

As you can imagine, we’re going to end up making a great deal of this poetic notion of “river kinship”. For Quine, for now, all turns on the profound ambiguity of the apparently simple act of pointing to something.

Such ambiguity is commonly resolved by accompanying the pointing with such words as “the river”, thus appealing to a prior concept of a river as one distinctive type of time-consuming process, one distinctive form of summation of momentary objects. (p. 67)

Until, that is, we know what the Maori mean when they say “Te awa tupua,” we don’t know what sort of thing has been declared a person in New Zealand law. They may as well say “gavagai!” Fortunately, we can read the law to find out; specifically, we can read the two sections before the one I have already quoted.

(12) Te Awa Tupua is an indivisible and living whole, comprising the Whanganui River from the mountains to the sea, incorporating all its physical and metaphysical elements.

Already here we can see that Te Awa Tupua is more than the river; it “incorporates all its physical and metaphysical elements” to constitute an “indivisible and living whole”. But that is not all; this whole also has an identifiable essence:

(13) Tupua te Kawa comprises the intrinsic values that represent the essence of Te Awa Tupua, namely—

Ko Te Kawa Tuatahi

13 (a) Ko te Awa te mātāpuna o te ora: the River is the source of spiritual and physical sustenance:

Te Awa Tupua is a spiritual and physical entity that supports and sustains both the life and natural resources within the Whanganui River and the health and well-being of the iwi, hapū, and other communities of the River.

Ko Te Kawa Tuarua

13 (b) E rere kau mai i te Awa nui mai i te Kahui Maunga ki Tangaroa: the great River flows from the mountains to the sea:

Te Awa Tupua is an indivisible and living whole from the mountains to the sea, incorporating the Whanganui River and all of its physical and metaphysical elements.

This basically restates the definition already set out in section 12, but the next two subsections are crucial for our understanding of how Te Awa Tupua, and not just the Whanganui River, can be a legal person.

Ko Te Kawa Tuatoru

13 (c) Ko au te Awa, ko te Awa ko au: I am the River and the River is me:

The iwi and hapū of the Whanganui River have an inalienable connection with, and responsibility to, Te Awa Tupua and its health and well-being.

Ko Te Kawa Tuawhā

13 (d) Ngā manga iti, ngā manga nui e honohono kau ana, ka tupu hei Awa Tupua: the small and large streams that flow into one another form one River:

Te Awa Tupua is a singular entity comprised of many elements and communities, working collaboratively for the common purpose of the health and well-being of Te Awa Tupua.

That is, the “indivisible whole” called Te Awa Tupua includes the human communities that traditionally reside, not just near the banks of the river that flows from the mountains to the sea, but in all the lands nurtured by the “small and large streams” connected to it. These human communities (iwi) are “inalienably” connected to it.

Indeed, right after this metaphysical entity is given personhood in law, the terms of its representation are also spelled out:

14(2) The rights, powers, and duties of Te Awa Tupua must be exercised or performed, and responsibility for its liabilities must be taken, by Te Pou Tupua on behalf of, and in the name of, Te Awa Tupua, in the manner provided for in this Part and in Ruruku Whakatupua—Te Mana o Te Awa Tupua.

And what, then, is Te Pou Tupua?

18(1) The office of Te Pou Tupua is established.

18 (2) The purpose of Te Pou Tupua is to be the human face of Te Awa Tupua and act in the name of Te Awa Tupua.

18 (3) Te Pou Tupua has full capacity and all the powers reasonably necessary to achieve its purpose and perform and exercise its functions, powers, and duties in accordance with this Act.

It seems pretty clear to me that this settlement is an ingenious way of constructing an entity that respects both indigenous and Western conceptions of community. “It is wrong to say that they are identical,” Quine might say from his “logical point of view” (cf. p. 66), “they are merely river-kindred.” From the point of view of the law and the legal system Te Awa Tupua is a kind of trust or corporation, as Finlayson puts it, but from the point of view of the iwi that inhabit it, it is a living being of which they too are a part. “I am the river,” they say. “The river is me.” The settlement has managed to, literally, put a “human face” on this natural relationship for the purpose of administering it within the current system of rights, while at the same time “incorporating” (i.e., embodying) its “metaphysical elements”. Here David might invoke Derrida:

One of the definitions of what is called deconstruction would be the effort to take this limitless context into account, to pay the sharpest and broadest attention possible to context, and thus to an incessant movement of recontextualization. The phrase which for some has become a sort of slogan, in general so badly understood, of deconstruction (“there is nothing outside the text” [il n’y a pas de hors-texte]), means nothing else: there is nothing outside context. (Limited, Inc., p. 136)

In a sense, yes, the idea that “there is nothing outside the river” deconstructs the Western metaphysics of presence (which would ignore even Heraclitus’s warnings about stepping into rivers twice). But, as Wittgenstein would point out, this deconstruction nonetheless “leaves everything as it is,” from the mountains to the sea. After all, Bob Dylan’s honesty notwithstanding, there is nothing outside the law either.

This is all a bit too fast and loose, I know.* I need to tighten up this analysis and bring its metaphysical elements into sharper focus. (There is much more to be done with both Quine and Derrida.) But I'm beginning to see the outline of an argument for robot rights, specifically, the rights of authorship for large language models like GPT-3. Fortunately, just as the Te Awa Tupua Act doesn't give any rights to the merely physical process that is temporarily represented on the map as an object called the Whanganui River, this argument would never give rights to an algorithm or a database itself. It would always require an act of "incorporation", a legal embodiment, and, yes, a "human face" to represent it. We already know how to speak of an author's "body of work" and how to govern it. "I am the text," the author says. "The text is me." But the author dies, as Barthes pointed out, and a reader is born who can say the same. Maybe the future of text production is not so radical after all.

I hope you find this as invigorating as I do. I’ve decided to continue thinking about this by moving on to another law that David and Josh like to invoke, namely, the law governing personal delivery robots in Virginia. This analysis of the personhood of Te Awa Tupua provides a good model for the work that needs to be done to understand the precise sense in which those robots “have the rights of pedestrians” on the sidewalks of Norfolk. When I’ve worked that out, maybe, finally, I will be able to say precisely why I think robots can’t write.

Maybe two or three posts more. Then I’ll head off for a late summer vacation. And then, I promise, I will stop pretending to be a philosopher and legal scholar and return to the subject of how human beings can become better writers in the here and now.

______

*Update: After reading it, Josh expressed his disappointment with the scholarship behind this humble post on Twitter. If you want a sense of how Te Awa Tupua is discussed by scholars of environmental law, I can now recommend three good pieces.

Christopher Rodgers’ “A new approach to protecting ecosystems: The Te Awa Tupua (Whanganui River Claims Settlement) Act 2017” in the Environmental Law Review 19(4) seems to be the obligatory reference (Josh cites it in his book, Rights for Robots, on p. 127). It offers a good summary and analysis of the facts. Michelle Worthington and Peta Spender’s “Constructing legal personhood: corporate law’s legacy” in the Griffith Law Review 30(3) and Seth Epstein, Marianne Dahlén, Victoria Enkvist, and Elin Boyer’s “Liberalism and Rights of Nature: A Comparative Legal and Historical Perspective,” forthcoming in Law, Culture and the Humanities, both use the case in broader analyses of corporate and natural rights. All three are, as far as I can tell, a little more impressed with the legal novelty of the Te Awa Tupua settlement than I am, but I remain convinced that it is not the ontological innovation that would be needed to extend rights to machines in any radical way. That’s, of course, something I’ll need to return to.

Update (25/09/22): David Gunkel recently drew Visa Kurki’s A Theory of Legal Personhood to my attention, which presents a very similar argument about the Whanganui River in chapter 4.

Subject-of-a-Text

for Estrellita

The case for robot rights is often made by analogy to the case for animal rights and the case for the rights of natural entities like rivers and mountains. Josh Gellers is a strong proponent of these analogies, as is David Gunkel, and in my engagements with them on Twitter they often challenge me to apply whatever principles I want to use to exclude robots from moral consideration to these other entities which, they point out, have already been granted a variety of rights in many jurisdictions. Rights for non-humans are already here, they declare. Why not let robots into the company of rights-bearing subjects too?

It’s a good challenge and one that is worth facing. Just so we’re on the same page I should make clear that I believe that animal rights and the rights of nature are today assigned within reasonably coherent ethical and legal frameworks. I have looked at the cases they have suggested and, though they seem to understand these cases a little differently than I do, I basically agree with the way rights, as I understand it, have been assigned there. The coherence of these frameworks, however, cannot, as I see it, be extended to robots or other artificial entities. To put it in David’s terms, what we may think about animal rights and the rights of nature need not compel us to “think otherwise” about robot rights.

I’m going to take the two cases one at a time, animals in this post, and rivers and mountains without end in the next, in both cases using my now favorite artificial intelligence, GPT-3, to represent the analogous robot rights candidate. Since GPT-3 generates text, I am going to consider the somewhat narrow question of whether it can have “the moral right to be identified as an author”. If, for example, someone gets GPT-3 to generate a blogpost, the moral right of GPT-3 to proper attribution (if it had this right) would be violated if the text was either not attributed at all or attributed to someone else. This would be the case independent of any merely legal copyright violation, since a copyright can unproblematically be owned by people and entities other than the original author of a text.

How is the right of attribution similar to a right that an animal might have? The analogy I want to explore is suggested by the work of Tom Regan*, who, in The Case for Animal Rights, has argued that many animals are “subjects-of-a-life” and, as such, are also proper subjects of rights. If an animal is capable of feeling both distress and loneliness, for example, it has a right to be free from unnecessary harassment and forced isolation, both of which can be understood as forms of violence. That is, the rights of the animal are violated by causing it either physical or emotional harm. On this view, deliberately subjecting an animal to suffering or depriving it of the company of those it loves would be considered an act of cruelty.

As it happens, just as I was finishing the first draft of this post, Josh pointed me to a perfect case. Earlier this year, it seems, a final judgment was handed down in the Constitutional Court of Ecuador in the case of Estrellita, a chorongo monkey who was removed by authorities from her human home, where she had lived for 18 years with a woman she considered her mother, and taken to a zoo where she died of stress after a few weeks. The judgment goes to great lengths to consider whether the animal's rights (not merely those of the woman Estrellita was living with) were violated and even cites Regan's seminal work on "animals as moral beings and subjects of life" (p. 26, n83). I have not yet looked closely at the case, which is heartbreaking on its face, but it seems like a sound judgment. This was not merely a tragedy; it was an injustice.

In The Case for Animal Rights, Regan details what it means to be the subject-of-a-life:

[It] involves more than merely being alive and more than merely being conscious. … individuals are subjects-of-a-life if they have beliefs and desires; perception, memory, and a sense of the future, including their own future; an emotional life together with feelings of pleasure and pain; preference- and welfare-interests; the ability to initiate action in pursuit of their desires and goals; a psychophysical identity over time; and an individual welfare in the sense that their experiential life fares well or ill for them, logically independently of their utility for others and logically independently of their being the object of anyone else's interests. Those who satisfy the subject-of-a-life criterion themselves have a distinctive kind of value – inherent value – and are not to be viewed or treated as mere receptacles. (p. 243, quoted from Wikipedia)

I think the moral rights of authors can be similarly rooted in the "subjecthood" of the author. I have previously compared what Hemingway called the "writer's problem" and what Barthes called the "problematics of literature". "A writer's problem does not change," said Hemingway. "He himself changes, but his problem remains the same. It is always how to write truly and, having found what is true, to project it in such a way that it becomes a part of the experience of the person who reads it." Barthes put it this way: "Placed at the center of the problematics of literature, which cannot exist prior to it, writing is thus essentially the morality of form, the choice of that social area within which the writer elects to situate the Nature of his language." In their very different ways, both situate the author of a text within an experience (one, you will note, that includes a social relation) and assert an explicitly moral claim that is grounded in the freedom the author enjoys.

The crucial point here, as Regan shows, is that it makes sense to ask "what is it like to be" an animal. I will add that there is also something it is like to be an author. An individual's rights as an animal or author depend on this subjective experience, as a truth that can be projected (Hemingway) or as a nature that can be situated (Barthes). We can now ask whether this can ever be the case for a "generative pre-trained transformer".

Obviously, I can’t answer that question definitively in a blogpost. But I will again cite (as I did at the start of the summer) Borges wonderful reminder that a book isn’t just a linguistic structure. In his “Notes on (toward) Bernard Shaw”, he starts with a list of fantastical notions from Raymond Lully’s “thinking machine” to Kurd Lasswitz’s “Total Library” (an idea he would famously explore himself) and then offers the following:

Lully’s machine, Mill’s fear and Lasswitz’s chaotic library can be the subject of jokes, but they exaggerate a propensity that is all too common: making metaphysics and the arts into a kind of play with combinations. Those who practice this game forget that a book is more than than a verbal structure or series of verbal structures; it is the dialogue it establishes with its reader and the intonation it imposes upon his voice and the changing and durable images it leaves in his memory. This dialogue is infinite … A book is not an isolated being: it is a relationship, an axis of innumerable relationships. (Labyrinths, p. 213-14)

This gesture at the infinite relationships that constitute a book is a nice set-up to my next post on the rights of nature. But do notice that, like Estrellita, who had a right to remain with her adoptive mother, a book (or, rather, its author, of course) has the right not to be “isolated” from the problematics of the literature in which it has taken its place. In order to read it, we must respect the morality of its form. In any case, even if we grant, as I do, that animals can have rights because they are the subjects-of-a-life, we do not need to grant that robots can have rights unless they, too, can be the relevant subjects of them. In the case of GPT-3, we must ask whether GPT-3 can “project its experience”, can “situate the nature of its language”, or, indeed, whether it can “impose its voice” on the memory of the reader. Is it capable of an infinite dialogue? Can it be the subject-of-a-text? I think not.

Tom Regan’s case for animal rights cannot be made for robot rights. But I’m sure that neither will he be allowed to have the last word on animals* nor will I be allowed to have the last word on robots. The dialogue, after all, is infinite.

_____
*I should make clear that I’m by no means an animal rights scholar. What I offer here is something I’ve learned mainly from Wikipedia. On Twitter, Josh reminds me that he covers Regan’s work in chapter 3 of his book. I haven’t revisited it for this post.

Do Transformers Desire Electric Rights?

On Twitter, Steven Marlow has asked me to justify the exclusion of current AI systems from our system of rights without invoking the fact that they’re not human or that they don’t have feelings. Josh Gellers seconded the motion, adding that it’s going to be a hard nut to crack. This post is my attempt to crack it. Though I do personally believe that one reason not to give robots rights is that they don’t have inner lives like we do, I will leave this on the side and see if I can answer Steven’s question on his terms. I’ll explain why, being what they are, they can’t have rights.

Keep in mind that, when thinking about AI, I am for the most part interested in the question of whether transformer-based artificial text generators like GPT-3 can be considered “authors” in any meaningful sense. This intersects with the robot rights issue because we know how to recognize and respect (and violate!) the moral and legal rights of authors. If an AI can be an author then an AI can have such rights. To focus my inquiries, I normally consider the question, Can a language model assert “the moral right to be identified as the author” of a text? Under what circumstances would it legitimately be able to do so? And my provisional answer is, under no circumstances would it be able to assert such rights. That is, I would exclude GPT-3 (a currently available artificial text generator) from moral consideration and our system of rights. I take Steven to be asking me how I can justify this exclusion.

Remember that I’m not allowed to invoke the simple fact that GPT-3 is not human and has no inner life. We will take that as trivially true for the purpose of this argument. “Currently excluded,” asks Steven, “based on what non-human factors?”

I do, however, want to invoke the fact that, at the end of the day, GPT-3 is a machine. We exclude pocket calculators from moral consideration as a matter of course, and I have long argued that the rise of "machine learning" isn't actually a philosophical gamechanger. Philosophically speaking, GPT-3 is more like a TI-81 than a T-800. In fact, I won't even grant that the invention of microprocessors has raised philosophical questions (including ethical questions about how to treat them) that are any deeper than the invention of the abacus. All that has happened is that the mechanism and the interface have changed. Instead of our operating the machine by hand, the calculation is automated, and instead of setting up the system with beads we have to count ourselves (and interpret as 1s, 10s, 100s, etc.), we can provide the inputs and receive the output in symbols that we understand (but the machine, crucially, does not). GPT-3 itself is just a physical process that begins with an input and mechanically generates an output.
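To make the "mechanical" point concrete, here is a deliberately crude sketch. It is not GPT-3 (which has billions of parameters and samples probabilistically); it is a toy bigram model of my own devising, trained on a single sentence, that greedily extends a prompt. But the abacus-like character of the process is the same in kind: counts go in, a deterministic symbol-pushing procedure runs, symbols come out, and at no point does anything "understand" them.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions in a toy corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, prompt, n=5):
    """Mechanically extend the prompt: always append the most frequent successor
    of the last word. Pure symbol manipulation, no comprehension anywhere."""
    out = prompt.split()
    for _ in range(n):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(max(successors, key=successors.get))
    return " ".join(out)

corpus = "the river is me and I am the river and the river flows to the sea"
model = train_bigrams(corpus)
print(generate(model, "I am", n=4))  # → "I am the river is me"
```

Run it twice and you get the same string twice; the "author" here is a frequency table and a loop.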

It shouldn’t have rights because it has no use for them. It neither wants nor needs rights. Giving it rights would not improve its existence. (Following Steven’s rules, I’ll resist the temptation to say that it has no “existence”, properly speaking, to improve. I’ll just say that even if it did, or in whatever sense it does, giving it a right would not contribute to it.) I simply don’t have any idea how to give rights to an entity that neither wants nor needs them. Tellingly, it isn’t demanding any either.

In a certain sense, GPT-3 is excluding itself from our system of rights. It is simply not the sort of thing (to honor Steven’s rules I’m not going to say it’s not a person) that can make use of rights in its functioning. Human beings, by contrast, function better given a certain set of rights. We are constantly trying to figure out which rights are best for our functioning (what some people call “human flourishing”) and we certainly don’t always get it right. Sometimes we have to wait for people who don’t have the rights they need to also want them. Then they ask for them and, after some struggle, we grant them. Whenever we do this right, society functions better. When we get this wrong, social life suffers.

Hey GPT, do you want to play chess?

But none of these considerations are relevant in the case of robots or language models. There is just the question of making them function better technically. To put it somewhat anthropomorphically, in addition to more power, better sensors and stronger servos, robots don’t need more privileges; they just need better instructions. That’s what improves them. Giving them freedom isn’t going to make them better machines.

A good way to think of this is that machines don't distinguish between their physical environment and their moral environment. They are "free" to do whatever they can, not whatever they want, because they want for nothing. A chess bot can't cheat because it doesn't distinguish between the physics of the game and its rules. It can't think of trying to move a chess piece in a way that violates the rules. (GPT-3, however, doesn't know how to play chess, so it can't cheat either.) For the bot, this space of freedom — to break rules — doesn't exist. There is no difference between what is legal and what is possible. And that's why robots can't have rights. Fortunately, like I say, they don't want them either.
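The point can be illustrated with a toy sketch (tic-tac-toe rather than chess, for brevity, and not any real engine): a game bot typically chooses from an enumeration of legal moves, so an "illegal move" is not something it refrains from making — it simply isn't in the bot's space of possible actions at all.

```python
def legal_moves(board):
    """The bot's entire action space: only the empty squares exist for it."""
    return [i for i, square in enumerate(board) if square == " "]

def choose_move(board):
    """Pick the first legal move. There is no way even to represent cheating:
    an occupied square is never among the options the bot considers."""
    moves = legal_moves(board)
    return moves[0] if moves else None

board = ["X", " ", "O",
         " ", "X", " ",
         "O", " ", " "]
print(choose_move(board))       # → 1 (an empty square)
print(4 in legal_moves(board))  # → False: square 4 is taken, so not a possible action
```

For this bot, "legal" and "possible" name the same set; the freedom to break a rule has no representation in its world.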

How did I do?