Law Review Editors, Take Note:

I just wanted to stress that the Tulane Law Review article incident isn't just an interesting story of academic error -- it's also a story of law review embarrassment. I'm pretty sure that no law review likes to have to post on its front page,

Erratum

The Louisiana Supreme Court in Question: An Empirical Statistical Study of the Effects of Campaign Money on the Judicial Function published in Volume 82 of the Tulane Law Review at 1291 (2008), was based on empirical data coded by the authors, but the data contained numerous coding errors. Tulane Law Review learned of the coding errors after the publication. Necessarily, these errors call into question some or all of the conclusions in the study as published. The Law Review deeply regrets the errors.

I assume the law review will also have to publish a print correction. The incident also led the law school dean to feel obligated to publicly apologize for the errors in the article; and though the apology said the law review members did nothing wrong, the matter can't have been great for relations between the dean and the journal. And I suspect the incident in some measure tarnished the law review's brand with local employers, especially those who are friendly with the judges whom the article criticized based on inaccurate information (and an unsound conflation of correlation with causation).

Of course, law reviews must accept the risk of public hostility when they publish articles that criticize well-liked people and institutions. That's part of law review editors' responsibilities as participants in the scholarly publishing process. But the hostility is likely to be considerably higher when the criticisms prove to be based on error. And it's one thing to incur unjustified hostility in the service of truth, and quite another to incur justified condemnation because one's institution has made a mistake.

So it seems to me that there are three important lessons here:

1. When an article rests on data that you can check, check it. Here, the data consisted of records of who voted which way in certain cases, and who got what contributions from whom -- something cite-checkers are amply competent to verify; and checking the data for fewer than 200 cases is not a crushing burden.

If the data had been in footnotes or in an appendix, as it is in many articles, the law review would have checked it. That the data never made its way into the printed article is no reason to skip checking it (as this incident illustrates). The printed article, after all, relied on the data, and errors in the data infected the conclusions the article reported. Had the law review done the cite-checking, it might have spared itself, its dean, and (incidentally) the authors the embarrassment.

2. Read the article's account of its own claims closely, and watch out for self-contradiction (especially when the article is controversial enough that the authors might be tempted into some self-contradictory self-protection). So when a footnote says,

It is worth observing that this Article does not claim that there is a cause and effect relationship between prior donations and judicial votes in favor of donors' positions. It asserts instead that there is evidence of a statistically significant correlation between the two,

but the rest of the article repeatedly suggests causation -- for instance, saying that "This empirical and statistical study of the Louisiana Supreme Court ... demonstrates that some of the justices have been significantly influenced -- wittingly or unwittingly -- by the campaign contributions" (emphasis added) -- you should note the contradiction, and insist that the authors revise their claims to be internally consistent.

3. Finally, remember that correlation is not causation. If the authors give evidence of correlation and from there make claims of causation, make sure that the evidence adequately supports the claims, for instance by controlling for possible confounding factors. If the claim is that X (here, contributions) causes Y (voting patterns), consider what might cause both X and Y (for instance, even though ice cream sales and the rate of forcible rape are closely correlated, might something else -- say, summer weather -- cause both, rather than ice cream sales causing rape?). Ask also whether the causation might run the other way, which is to say that Y or predictions of Y can cause X: For instance, might a contributor's prediction of a judge's voting patterns lead him to contribute to the judge's election campaign, even if the contribution in no way influences the judge's vote? And if there are other possible explanations, insist that the authors deal adequately with them. (For a concrete picture of how a lurking third factor can manufacture a correlation, see the sketch below.)
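For editors who'd like to see the confounding point made concrete, here's a minimal sketch -- my own toy simulation in Python, not the article's actual data or method, with hypothetical variable names -- in which a hidden factor drives both "contributions" and "votes," producing a strong correlation between the two even though neither causes the other:

import numpy as np

# Toy simulation (illustrative only): Z is a hidden confounder -- think
# "shared ideology" -- that drives both X ("contributions") and Y ("votes").
rng = np.random.default_rng(0)
n = 200  # roughly the number of cases at issue in the study

z = rng.normal(size=n)            # the confounder
x = 2.0 * z + rng.normal(size=n)  # "contributions": caused by Z, not by Y
y = 3.0 * z + rng.normal(size=n)  # "votes": caused by Z, not by X

# The raw correlation looks impressive...
print("corr(X, Y) =", round(np.corrcoef(x, y)[0, 1], 2))

# ...but controlling for Z (regressing it out of both variables) leaves
# essentially nothing: the association was all confounding.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
print("corr(X, Y | Z) =", round(np.corrcoef(x_resid, y_resid)[0, 1], 2))

With these numbers the raw correlation comes out around 0.85 while the partial correlation hovers near zero -- the ice-cream-and-crime point in miniature.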

Coming up with these alternative explanations doesn't require an understanding of statistics; even law review editors with little mathematical skill can do this. And law review editors should ask such skeptical questions, just as they should look for counterarguments to authors' key doctrinal or normative assertions and make sure that the authors deal with at least the main ones. If the authors do a poor enough job of dealing with these counterarguments, you should reject the article; or, if you think the article is basically sound but needs to respond to them, you should insist that the authors do so.

Authors should rightly have a great deal of discretion in how they craft their arguments. But when they don't adequately respond to the obvious counterarguments to their main assertion -- for instance, when they claim causation based on correlation, but don't control for obvious confounding factors -- part of your job is to call them on this.

And if you don't, when others call the authors on the errors, the result can be embarrassment for you as well as for the authors.