Learning Postures (& Power Poses)

There’s a long and interesting article in the New York Times Magazine that all researchers in the social (and perhaps all) sciences do well to read. It’s about the “revolution” in statistical methods that has been going on for some time now and that we ignore at our peril. (The key text is already mentioned in my readings section.) But it’s also about more inframethodological concerns, specifically the way we deal with our mistakes.

The article’s author, Susan Dominus, clearly has a great deal of sympathy for the predicament that her subject, Amy Cuddy, has found herself in. As a result, we get a great deal of information about Cuddy’s emotional response to having her work on “power posing” criticized in a very public way. I strongly recommend reading Andrew Gelman’s reflections on the article and the issue at his blog as well (also on my blogroll, of course). There’s some lively discussion in the comments, which is both a discussion of critical posture and a series of examples. Indeed, I think this whole thing is a master class by Andrew Gelman in giving and taking criticism!

For my part, I think Cuddy should just have acknowledged that the effect of power posing has not been scientifically demonstrated after all and stayed on the tenure track. I don’t want to get too much into it here, but I do want to make a confession of sorts. In my writing seminars I actually recommend a form of “power posing” that I learned from Benjamin Zander:

My version of this advice isn’t about making mistakes but about discovering you don’t know what you are talking about as you begin your writing moment. Don’t put your head in your hands and moan about how stupid you are. Throw up your arms and say, “Interesting! Ignorance!” and then spend 18 or 27 minutes exploring the depth and breadth of your own unknowing. Ignorance is an important experience to face in research; indeed, it should be a familiar one. People who are afraid of their ignorance will have a hard time learning anything. Let’s call this the Learning Posture.

Now, I’m careful not to claim that I have science to back me up on this. It just strikes me as a good attitude to have, a good pose to strike, when you’re trying to write down what you know. And, as Zander points out, it’s an excellent attitude to adopt when you make mistakes. Be fascinated by them! When you make them, get into them, be curious about them, try to figure them out. This is where you’re going to learn something.

Ironically (and as Dominus begins by suggesting), perhaps Cuddy should have taken her own advice and struck a power pose when she began to receive criticism. “I’m wrong? How fascinating! Let’s get into it.” This is certainly what I recommend doing. Hopefully that is also the lesson that you, dear reader, will take away from all this.

A Paragraph about Paragraphs

Paragraphs are the units of scholarly prose composition. They normally consist of at least six sentences and at most 200 words that support, elaborate, or defend a single well-defined claim. The claim is stated in the “key sentence”, and the rhetorical posture of the paragraph depends on the difficulty that this sentence presumably poses for the reader. If the reader should find the claim hard to believe, the paragraph will support it with evidence. If the reader should find the claim hard to understand, the paragraph will elaborate on it with description or definition. (This paragraph, for example, elaborates on the composition of paragraphs.) If the reader, having already formed a contrary opinion, finds it difficult to agree with the claim, the paragraph will defend the claim against the reader’s objections. Whatever its posture, the sentences in the paragraph are trying to leave the claim in the key sentence with the reader. It is what readers should take with them into the next paragraph. A simple list of the key sentences in a scholarly text, therefore, should provide an accurate sketch of the whole composition.

Addition and Composition

Imagine an accounting teacher who discovers that a significant number of her students can’t add up a column of, say, ten eight-digit numbers manually. They can put it into a spreadsheet and SUM a column, but they are not able to add numbers together themselves without the aid of the computer. Next, imagine an English teacher who discovers that a significant number of his students can’t compose a paragraph of complete, grammatical sentences. They throw a bunch of words together (some copied from the Internet) in their word processor and let the grammar checker tell them what to do with them. The accounting teacher, of course, ultimately wants to teach her students, among many other things, how to depreciate assets and write off liabilities. The English teacher ultimately wants to teach his students how to do everything from scanning a poem to deconstructing a narrative. But is there any hope if the students have not mastered these basic skills of adding and writing?

I think the obvious answer is no and, fortunately, the situations I describe are not very common. I do wonder, however, if we test these abilities often enough. I fear that we let students get away with an inability to add and compose far too long. I could blame primary and secondary education for this, but I think universities must themselves insist on a certain standard and not admit students who have not acquired basic competences in school. Those who manage to get in should immediately feel their incompetence if it’s there. For those who recognize their limitations, there is, fortunately, a lot of hope.

I’m sometimes told by teachers in the quantitative disciplines that their students understand perfectly well how to make up for any deficiencies they might have. If they’re not used to solving math problems, they know they must simply dedicate a number of hours every week to training the relevant skills. Indeed, I’ve always found it amusing that “the suffering of learning” was called “pathei-mathos” in Aeschylus’s original Greek. The passion of math is to suffer and learn. This, like I say, is well understood by students and scholars in the mathematical disciplines.

I try to normalize the struggle to write well in the same way. Writing isn’t just something you’re good at or not; it is something you undertake to become better at through suffering. That’s sort of a melodramatic word, but any good writer will tell you I use it advisedly. I’ve recently been trying to argue that writing instruction should not always try to be “helpful” to students. We should not show them ways around the difficulty; we should encourage them to face it. It is only by going through the suffering of trying to write down what they know, with the sincere aim of discussing it with other knowledgeable people, that they will learn how to write strong scholarly prose. Words can, perhaps, be “processed” by a machine. But sentences and paragraphs must be composed by living brains. Life includes moments of struggle. Writing moments.

The Future of Objectivity (2)

Reading Bammer and Boetcher Joeres’ The Future of Scholarly Writing through the lens of “objectivity”, guided, in the first instance, by the book’s own index, reveals a somewhat embattled concept. (See part 1 here.) Objectivity is mentioned mainly as an illusory experience to be eschewed or an unreasonable demand to be rejected. In some cases (but not the one I will talk about in this post) it is granted a kind of limited dignity, but the general mood of the book’s contributors seems to be that objectivity is overrated. It has had its chance to shape our scholarship and it has been found wanting, even damaging. Some would prefer it went away altogether; others would at least open the literature to alternatives. (Of course, it has already been opened in this way in places.) For my part, I am happy to acknowledge not only that the pursuit of objectivity has costs but that some of those costs have, historically, been too high. At the end of the day, however, I hope the notion of objectivity retains its epistemological legitimacy. It is worth pursuing–perhaps not solely for its own sake, but for ends that are in fact quite valuable.

“I have no problem declaring,” says Leo Spitzer, “that I am not an ‘objective historian’ in the old sense”  (193). Indeed, he “abhors” the traditional “neutrality” of historians and their “omniscient narratives.” He does not, like them, “claim to stand above the fray” but is, rather, an “engaged historian” who writes in a “personal voice”. Perhaps most tellingly, he refers to this as “so-called” academic writing.

He grounds his stance in his “feeling of empathy”, which he defines and specifies as “a predisposition to sympathize with the hopes, aspirations and frustrations of the subjugated and displaced” and traces back to his childhood among “German speaking, largely Jewish refugees” in Bolivia and, later, as an immigrant to the United States. But it is, perhaps surprisingly, not this personal background that he uses to justify his rejection of objectivity. Indeed, he seems, for a moment, apologetic about the limits of his empathy, his inability to cultivate it to the point of an objective understanding of the people he has studied. He seems to be saying that true objectivity requires us to be able to see the world exactly as someone else does, to “get inside their skins”. It is, in the first instance, because this is beyond the reach of even his empathy, that he cannot declare himself an “objective historian”.

I don’t share this view of objectivity. I don’t think we can or should try to be objective about the subjective, lived experience of other people. I don’t think failing on this score is a failure to be objective. Objectivity merely requires us to note those facts that are experienced in the same way by anyone who is acculturated to experience them in that way. A fact can be experienced differently by different people. An “objective” fact is what everyone who is properly trained to do so sees in it.

Spitzer rejects this kind of objectivity too, however. His goal, he says, “is not to provide a seamless impersonal narrative in which subjectivity and emotion are suppressed or left unacknowledged.” But that’s putting it rather strongly. After all, many claims of historians are perfectly objective acknowledgements of the subjectivity and emotion of the people they study. (My favorite example is when Daniel Defert declares that “Wage-earners liked having the right to find employment where they pleased.”)  We can very definitely write a seamless impersonal narrative in which subjectivity and emotion are foregrounded and acknowledged.

For Spitzer, however, it is necessary to reject objectivity because historians are themselves implicated in the historical process. This is perhaps the best argument he offers, since it points to the very real reflexivity of all social science, not just history. We are in most cases members of the cultures we are studying, a fact that I myself have recently invoked for my own purposes. (Of course, my argument there was not against being objective in our scholarship, but against being objective about our scholarship. The distinction is an important one, but one I’ll leave for another occasion.) Here’s how Spitzer describes his perspective on history and historiography:

For me, the voice (or, perhaps more accurately, the voices) of the historian, however interested, as well as the multiple voices and memories of the participants–the stories they tell and how they tell those stories–are as much part of the fabric of history as are written records and other archival materials. To take into account and reveal subjectivity and affect–to consider what is remembered as well as what is forgotten, fears as well as imaginings, the apprehension and misapprehension of events–complicates and restores a measure of contingency to history. It deepens our historical understanding and helps us to resist interpretative closure.

There is little one wants to disagree with here except the suggestion that these goals are beyond the reach of “objective historians”. Indeed, one wants to say that in order to do justice to the many voices of the people who live and shape history one might usefully document them, not just in the first-hand personal account of the historian, who hears a story and tells it to the reader, but by corroborating those aspects of the stories that can be corroborated, thus making them better able to stand up against the “contingency” of history. That word, after all, denotes the well-known possibility of “rewriting” history, to include those previously excluded and exclude those previously included, to redistribute the emphasis, to shift the blame, to declare the victor. Historians play a significant role in history for this very reason, but we do not want them to be merely another set of history’s actors, another group of participants. We want them, to use Spitzer’s own phrase, “to stand above the fray.” To cultivate impartiality.

Surely a historian can acknowledge multiple stories, multiple accounts, multiple voices, and not declare any of them “objectively” true? The objective facts of history are that the stories are being told.


I want to talk about something that has a bit of history and can get very complicated. Back in the 1960s Michel Foucault and Roland Barthes began to question whether “the author” had a future in a “postmodern” culture. (They didn’t call it that, but it’s what we have come to call the future they were talking about, which bears some resemblance to our present, of course.) Foucault asked us to consider what the function of the author in discourse has historically been and whether that function might not change. Indeed, is it not possible that “the author function” could be transformed entirely beyond recognition? This would in effect be what Barthes called “the death of the author”.

Like I say, this isn’t a new issue. It’s been a topic of discussion for fifty years now and I’m sure there are people who will take issue with my simple-minded statement of the problem even in the preceding paragraph. I want to take it up within the limited domain of “scholarly” discourse. Is the problem of authorship different for scholars? Does the “scholar function” mark a different set of problems?

In the natural sciences, this hasn’t really ever been an issue. Scientists do not consider themselves primarily writers but, precisely, scientists. They do their work in labs and observatories or out in the field. They worry more about coordinating their hands and eyes with the apparatus than they do about finding their “voice” in the “text”. There aren’t really “authors” in science, except in very special cases, usually when a scientist begins to “popularize”. Carl Sagan, for example, had an “author function” in discourse. But it must be noted that this function doesn’t really have a place in the technical, scientific literature. It’s in public discourse that his voice is heard.

Since the early twentieth century the social sciences have–sometimes eagerly, sometimes reluctantly–tried to emulate the discourse of the natural sciences. While social theory was once organized around a few major figures–Marx, Weber, Habermas, to take one famous lineage–it is now increasingly organized into research projects with rather more anonymous contributors. Social scientists spend less of their time reading and critiquing their precursors these days and more of their time generating new “empirical results”. Even before Foucault, Martin Heidegger was noticing this more “incisive atmosphere” in academia, which was shaping minds of “a different stamp”. They spent less of their time reading, thinking and writing, and more of their time travelling, meeting and negotiating. Today we might add that they spend a great deal of time writing grant applications and research assessments. “The scholar disappears,” as Heidegger put it.

Maybe I’m a romantic, but I don’t think we should abandon the author function in scholarship. Foucault found the traditional questions we ask about an author “tiresome”:

  • “Who is the real author?”
  • “Have we proof of his authenticity and originality?”
  • “What has he revealed of his most profound self in his language?”

He wanted to ask other questions:

  • “What are the modes of existence of this discourse?”
  • “Where does it come from; how is it circulated; who controls it?”
  • “What placements are determined for possible subjects?”
  • “Who can fulfill these diverse functions of the subject?”

It’s not that I don’t think Foucault’s questions are interesting to ask about major writers like Shakespeare and Flaubert. It’s just that I’m not sure they are very helpful in thinking about our own work and that of our peers. I always think of something Wayne Booth said in the preface to his Rhetoric of Irony:

“I have heard it said that the two standard tutorial questions at Oxford are “What does he mean?” and “How does he know?” I doubt the report—no university could be that good…”

I think these are good questions to replace Foucault’s “tiresome” ones with, and better than the ones Foucault himself proposes, since his threaten to do away with the author altogether. I don’t think we should always reduce “How does the author know?” to a question about the control and circulation of discourse. We shouldn’t always think that “authors” must account for their “authority” (and hence their authenticity). We should mean these questions in the ordinary sense of what justification the author has for believing the claims made in the text. Are those justifications also good enough for us?

I truly believe that our discourse needs to recover its author function. It should be possible to ask each other what we mean and how we know without raising profound questions of authenticity or dissolving them in “diverse functions of the subject” (collaborating with other subjects on a proliferating network of social media). I’m not saying an analysis like this can’t bring interesting features of discourse to light. I’m just saying we shouldn’t be embarrassed to presume the authority we need to speak our minds. And while I would never demand that authors reveal their “most profound selves” to me in their writing, I do expect to learn what is on their minds from reading them.