Replication and Criticism

“Our job as scientists is to discover truths about the world.” (Simmons, Nelson & Simonsohn)

If I could get every PhD student to read two papers today, I would make them read Joseph Simmons, Leif Nelson and Uri Simonsohn’s “False-Positive Psychology” (Psychological Science, 2011) and Anne-Wil Harzing’s “Are our referencing errors undermining our scholarship and credibility?” (Journal of Organizational Behavior, 2002). If I end up having any influence on the research practices of the future, these two papers are destined to become classics of inframethodology. That is, they are not about our methods so much as about the craft beneath those methods. They are about the care with which we apply our methods, whatever they may be.

Both papers are written with admirable clarity and directness. They do not skirt the issues they raise, and they offer concrete solutions to address them. The core of both articles is a list of rules (or requirements or guidelines) that should govern the composition of published work. That is, they are not saying that we should do any one thing or another in our research process, nor are they encouraging us to adopt a particular theory or method or epistemology. They are merely saying that we must write about what we have done and what we have read in particular ways. Once we are committed to these rules, to be sure, we will become unable to say certain things about our data or our sources, at least with a straight face. Ultimately, we will become unable to do certain things, or to reach certain conclusions, at least with a clear conscience. But these effects will follow indirectly from being forced, simply, to care about the impression we leave in the minds of our readers.

What is called “the replication crisis” in the social sciences is actually, if you ask me, a crisis of care. Under increasing pressure to publish, we seem to have lost the ability to care about whether what we say is true. What Simmons, Nelson, Simonsohn and Harzing are trying to tell us is that such caring must again become a condition of publication, not a personal decision that is unrelated to the publishability of your result. We must become more aware of (and more open about) what Simmons et al. call “researcher degrees of freedom”, which, if left undeclared, let us reach any conclusion we like and call it “significant”. Harzing, too, points out that researchers often take too many “liberties” in their interpretation of their sources, allowing them to construct literatures that offer somewhat too convenient occasions for their own studies to be published. In one case we generate “false positives”, i.e., results that have no basis in reality. In the other we manufacture the ignorance we pretend to ameliorate, invent the problem we presume to solve.
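To see how much those undeclared freedoms buy, consider a small simulation. This is my own sketch, not code from Simmons et al., and its names and parameters are illustrative: it exploits just two of the degrees of freedom they discuss (a second dependent variable, and collecting more data when the first test comes up short) on data that contain no effect at all.

```python
# A minimal sketch (mine, not the authors') of two undeclared
# "researcher degrees of freedom" operating on pure noise:
#   1. testing two outcome measures and reporting whichever "works";
#   2. adding more observations when the first look is not significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_null_study(n=20, n_extra=10, alpha=0.05):
    # Two groups, two outcome measures, no true effect anywhere.
    a = rng.normal(size=(n, 2))
    b = rng.normal(size=(n, 2))
    # Degree of freedom 1: take the better of two dependent variables.
    pvals = [stats.ttest_ind(a[:, k], b[:, k]).pvalue for k in range(2)]
    if min(pvals) < alpha:
        return True
    # Degree of freedom 2: not significant yet, so collect more data.
    a = np.vstack([a, rng.normal(size=(n_extra, 2))])
    b = np.vstack([b, rng.normal(size=(n_extra, 2))])
    pvals = [stats.ttest_ind(a[:, k], b[:, k]).pvalue for k in range(2)]
    return min(pvals) < alpha

rate = np.mean([one_null_study() for _ in range(5000)])
print(f"'Significant' findings from noise: {rate:.1%}")  # well above 5%
```

Run it and the rate of “significant” findings lands well above the nominal five percent, even though every individual decision in the code looks defensible on its own. That is precisely the self-serving ambiguity the quotation below describes.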

Simmons et al. make an absolutely crucial point in their concluding remarks.

“This is not driven by a willingness to deceive but by the self-serving interpretation of ambiguity, which enables us to convince ourselves that whichever decisions produced the most publishable outcome must have also been the most appropriate.”

I sense this ambiguity in my discussions with many of the authors I work with. My job, after all, is to help get them published. They tell me what they think their reviewers and editors demand of them. I ask them to care about their readers. If the rules in these papers were more rigorously observed by our journals, these two sets of concerns would not be so often at odds with each other. And my advice, perhaps, would not seem to my authors to be as “idealistic” (or perhaps simply quaint) as I get the sense it does today.

I hope Andrew Gelman is right about the change in the weather.
