Law Review Editors, Take Note:

I just wanted to stress that the Tulane Law Review article incident isn't just an interesting story of academic error -- it's also a story of law review embarrassment. I'm pretty sure that no law review likes to have to post on its front page,


The Louisiana Supreme Court in Question: An Empirical Statistical Study of the Effects of Campaign Money on the Judicial Function published in Volume 82 of the Tulane Law Review at 1291 (2008), was based on empirical data coded by the authors, but the data contained numerous coding errors. Tulane Law Review learned of the coding errors after the publication. Necessarily, these errors call into question some or all of the conclusions in the study as published. The Law Review deeply regrets the errors.

I assume the law review will also have to publish a print correction. The incident also led the law school dean to feel obligated to apologize publicly for the errors in the article, and though the apology said the law review members did nothing wrong, the matter can't have been great for relations between the dean and the journal. And I suspect the incident in some measure tarnished the law review's brand with local employers, especially those who are friendly with the judges whom the article criticized based on inaccurate information (and an unsound confusion of causation and correlation).

Of course, law reviews must accept the risk of public hostility when they publish articles that criticize much-liked people and institutions. That's part of law review editors' responsibilities as participants in the scholarly publishing process. But the hostility is likely to be considerably higher when the criticisms prove to be based on error. And it's one thing to incur unjustified hostility in the service of truth, and quite another to incur justified condemnation because one's institution has been mistaken.

So it seems to me that there are three important lessons here:

1. When the author's article rests on data that you can check, check it. Here, the data was information about who voted which way in certain cases, and who got what contributions from whom -- something cite-checkers are amply competent to check; and checking the data for fewer than 200 cases is not a crushing burden.

If the data had been in footnotes or in an appendix, as it is in many articles, the law review would have checked it. That the data never made its way into the printed article is no reason to skip checking it (as this incident illustrates). The printed article, after all, relied on the data, and errors in the data infected the information reported by the article. Had the law review done the cite-checking, it might have avoided the embarrassment to its members, its dean, and (incidentally) the authors.
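Checking coding of this sort doesn't require statistical training; it's essentially a diff between the authors' coding and a cite-checker's independent recoding of the same published opinions. A minimal sketch (the docket numbers and vote labels below are hypothetical illustrations, not taken from the study):

```python
# Hypothetical codings of how each case's outcome was recorded; the case
# numbers and labels are illustrative, not taken from the actual article.
authors_coding = {
    "2004-C-0121": "for_plaintiff",
    "2005-C-0873": "for_defendant",
    "2006-C-1542": "for_plaintiff",
}
checker_coding = {
    "2004-C-0121": "for_plaintiff",
    "2005-C-0873": "for_plaintiff",   # disagreement: re-read the opinion
    "2006-C-1542": "for_plaintiff",
}

# Any case the two passes code differently gets flagged for re-verification
# against the published opinion itself.
mismatches = sorted(
    case for case, vote in authors_coding.items()
    if checker_coding.get(case) != vote
)
print(mismatches)  # ['2005-C-0873']
```

For fewer than 200 cases, an independent second pass plus a comparison like this is exactly the kind of grunt work cite-checkers already do for footnotes.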

2. Look closely through the article's description of what it's saying, and watch out for self-contradiction (especially when the article is controversial enough that authors might be tempted into some self-contradictory self-protection). So when a footnote says,

It is worth observing that this Article does not claim that there is a cause and effect relationship between prior donations and judicial votes in favor of donors' positions. It asserts instead that there is evidence of a statistically significant correlation between the two,

but the rest of the article repeatedly suggests causation -- for instance, saying that "This empirical and statistical study of the Louisiana Supreme Court ... demonstrates that some of the justices have been significantly influenced -- wittingly or unwittingly -- by the campaign contributions" (emphasis added) -- you should note the contradiction, and insist that the authors revise their claims to be internally consistent.

3. Finally, remember that correlation is not causation. If authors give evidence of correlation and from there make claims of causation, make sure that the evidence adequately supports the claims, for instance by controlling for possible confounding factors. If the claim is that X (here, contributions) causes Y (voting patterns), consider what things may cause both X and Y (for instance, even though ice cream sales and the rate of forcible rape are closely correlated, might something else cause both, rather than ice cream sales causing rape?). Consider also whether the causation runs the other way, which is to say that Y or predictions of Y can cause X: For instance, might a contributor's prediction of a judge's voting patterns lead him to contribute to the judge's election campaign, even if the contribution in no way influences the judge's vote? And if there are possible other explanations, does the author deal adequately with them?
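The ice cream example can be made concrete with a tiny simulation: a hidden confounder (say, summer heat) drives both variables, producing a strong correlation even though, by construction, neither variable causes the other; controlling for the confounder makes the correlation vanish. This is purely an illustrative sketch, with made-up data, not anything drawn from the article:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residuals(v, z):
    """What is left of v after regressing out the confounder z."""
    n = len(v)
    mv, mz = sum(v) / n, sum(z) / n
    slope = (sum((a - mv) * (b - mz) for a, b in zip(v, z))
             / sum((b - mz) ** 2 for b in z))
    return [a - mv - slope * (b - mz) for a, b in zip(v, z)]

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(5000)]   # hidden confounder ("summer heat")
x = [zi + rng.gauss(0, 0.5) for zi in z]     # "ice cream sales"
y = [zi + rng.gauss(0, 0.5) for zi in z]     # "assault rate"

# x never appears in the formula for y (or vice versa), yet the raw
# correlation is strong; once z is controlled for, it disappears.
print(round(pearson(x, y), 2))                               # strong, near 0.8
print(round(pearson(residuals(x, z), residuals(y, z)), 2))   # near 0.0
```

An editor who can ask "what plays the role of z here?" about an empirical piece is already doing the most important part of the skeptical review.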

Coming up with these alternative explanations doesn't require an understanding of statistics; even law review editors with little mathematical skill can do this. And law review editors should ask such skeptical questions just as they should look for counterarguments to authors' key doctrinal or normative assertions, and make sure that the authors deal with at least the main such counterarguments. If the authors do a poor enough job of dealing with these counterarguments, you should reject the article; or if you think the article is basically sound but needs to respond to those counterarguments, you should insist that the authors deal with them.

Authors should rightly have a great deal of discretion in how they craft their arguments. But when they don't adequately respond to the obvious counterarguments to their main assertion -- for instance, when they claim causation based on correlation, but don't control for obvious confounding factors -- part of your job is to call them on this.

And if you don't, when others call the authors on the errors, the result can be embarrassment for you as well as for the authors.

I'm curious as to why everyone is focusing on what the students did wrong here. What about the professors? What annoys me is that those sanctimonious clowns won't suffer an iota of punishment for their incompetence: they'll float on in their cushy tenured jobs, continuing to pontificate about how corrupt everyone in the private sector and the government is, applauded by their peers.
9.23.2008 1:17am
Cro (mail):
This is an interesting counterpoint to the Bellesiles case, where the social scientists failed to catch a much more egregious error.

In this case, the error was found out much more quickly and publicly admitted. So, some things worked right. I think the law review approach (trust nothing and check all sources) is far, far superior to academic peer review.

That being said, an error was made due to some simple undergraduate-level mistakes. That's not quality publishing, when making the same error could flunk a Senior Seminar paper at a four-year liberal arts college.
9.23.2008 1:41am
Eugene Volokh (www):
wm13: Let me stress what I had hoped was obvious from my first post -- the professors erred very badly here. But those errors seem so basic that it's hard for experienced scholars to find them really instructive. On the other hand, I think this case does offer some useful instruction for law review editors; hence this follow-up post aimed at law students, who I know form a decent chunk of our readership.

cro: Unfortunately, I don't think the discovery of these errors stems from the law review approach. Rather, it stems from (1) the article's being a criticism of particular influential people, who promptly asked others to check it, and (2) the article's database being easy to check, since it refers to easily available sources (published court opinions) rather than to original sources buried in archives across the country.
9.23.2008 1:52am
Mike& (mail):
I'm willing to bet that cognitive bias played a role here, too. An editor who agrees with the article's thesis is often less likely to scrutinize the article. So it couldn't hurt a journal to have someone who disagrees with the thesis. That is, someone who thinks, "There is no way this article is right." (Whether that person believes that because he's a partisan doesn't matter. What matters is that someone feel energized enough to attempt to repudiate the article's conclusion.)
9.23.2008 1:59am
Asher (mail):
Speaking of law reviews, say you're citing a special issue - not any particular article in said issue, the whole thing - but the editors of the law review neglected to give the special issue a name. How do you cite that? Issue 2, 26 CUMB L. REV. 237? That just looks stupid.
9.23.2008 2:01am
Cro (mail):
Ah, that's true. So, the question is how much bad research that didn't target influential people got published. I understand.
9.23.2008 2:08am
Eugene Volokh (www):
Cro: If that's the question, the answer is "a lot."
9.23.2008 2:12am
Bill Poser (mail) (www):
I don't have a reference (my mother gave me the example years ago) but reportedly in the first part of the 20th century there was a strong correlation between minister's salaries and the price of Puerto Rican rum. Presumably not causal.
9.23.2008 4:55am
Professor Volokh, you left out the most important part of determining causality: causal relationships aren't commutative. In order for X to cause Y, X has to happen before Y.

Basic, I know, but still...
9.23.2008 8:33am
I don't believe that the students bear much if any responsibility here. The erroneous data is not something that would have been checked by any law review.

The fault here lies entirely with the authors.
9.23.2008 8:45am
Having served as an articles editor of a law review, I can see how easily this sort of erratum can slip through the filters of amateur (as in untrained, unpaid and temporary) editors who have much more incentive to focus on course work and the job hunt than to pursue excellence on the law review editing professional submissions. Once elections for editorial positions have taken place, the only resume-enhancing activity that remains for senior editors is for those who are publishing their own works to present the best pieces possible. I'm not saying student editors do not pursue excellence; rather, they do so swimming against a roaring current of other priorities.
9.23.2008 9:21am
I have no knowledge of this event at all other than what I read here. Strictly as a question:

Has anyone outside of a political or state connection with Louisiana confirmed the existence and/or scope of the coding errors?
9.23.2008 9:47am
Eugene Volokh (www):
Happyshooter: Unfortunately, the law review has taken down the dataset that it originally promised to keep on its Web site for a year; it's therefore hard to double-check this. But one of the authors of the study publicly acknowledged, in an interview with the New Orleans Times-Picayune, that there were errors, though he asserts the bottom-line results remain the same even when the errors are corrected. Perhaps it's a sign of some conspiracy, though if that's so it's hard to see why the coauthor is playing along to the extent of acknowledging the errors but still saying the errors don't make a difference to the bottom line.

Of course, the causation/correlation problem is visible on the face of the article, and thus easily confirmable by all of us.
9.23.2008 10:37am
LTEC (mail) (www):
"In order for X to cause Y, X has to happen before Y."

Not necessarily. Typically I do a job because I am getting paid, but I get paid after doing the job. It's true that the promise of getting paid comes first, but that promise may be very secret whereas the getting paid part might be highly visible.
9.23.2008 10:56am
Bdog (mail):
Why is it that no one has bothered to remark on the political affiliations of the judges? Without reading either the article or the rebuttal, I would bet that the accused judges are probably considered 'conservative', if not actual members of the republican party.

My reasoning? Simple logic and statistics: Professors are democrat 10-1. Law students, especially those on the Law Review, are probably democrat by 30-1. (And personally, I would also bet money that on most Law Reviews, republicans/conservatives aren't represented at all.) And of course, the lack of attention to detail, like fact checking and faulty statistical analysis (can anyone say global warming), is just part and parcel of what is accepted as scholarship and scientific consensus, as long as it advances the 'correct' agenda.

The editors reviewed the article and it was just too good to check. If I ever need a lawyer, and I have in the past, I'll just make sure that it isn't one from Tulane.

And of course, law professors wouldn't write an article like this about democrat judges. It. Just. Does. Not. Happen.

Of course, I could be wrong.
9.23.2008 11:38am
Ron Mexico:
"Law students, especially those on the Law Review are probably democrat by 30-1.(And personally, I would also bet money that on most Law Reviews, republicans/conservatives aren't represented at all.)"

Why would you possibly think this? Do you believe that democrats are just inherently so much smarter than republicans that the republicans are incapable of making the law review?
9.23.2008 11:45am
frankcross (mail):
Bdog, it would help to familiarize yourself with the facts. The article was equally critical about contributions from plaintiffs' lawyers influencing outcomes.

And as LTEC points out, some causal event must happen first, but it needn't be the payment itself. The implicit promise of compensation could be the causal event.

I like the idea of law review editors checking data, when that is feasible. It's not done by peer reviewed journals, but this could be a real contribution of law journals to empirical research.

As for correlation/causation, it's very true, but I would just note that people in journals attribute causation all the time, with less evidence than a statistical correlation. I'm perplexed by how people criticize those who use correlations as evidence of causation but give a pass to those who use only anecdotal evidence.
9.23.2008 11:58am
And I suspect the incident in some measure tarnished the law review's brand with local employers, especially those who are friendly with the judges whom the article criticized based on inaccurate information (and an unsound confusion of causation and correlation).

EV, I hope that this is not so. I expect it will not impact the hiring decision of private law firms in the local market. It might make a state supreme court clerkship difficult for Tulane grads to get for a couple of years; you'd like to think judges would not take it out on young law students but who knows.

I am struck by the court's using its own website as a platform to lash out at private critics, and to gloat about the Tulane dean's apology. You get the impression that an article critical of the court would NEVER have been published at a state school, even if it were very well grounded.

The further irony is that while they denounce the very idea that campaign contributions influence their behavior, they are raising money as we speak (I got an invitation to a fundraiser just last week); and two judges are campaigning for an open seat on the supreme court, one of whom is running an ad denouncing "greedy trial lawyers." And about 10 years ago, Tulane's environmental law clinic notched up an impressive string of victories in environmental litigation- which were promptly followed first by industry protests, and second by the state supreme court adopting the most restrictive student-practice rules in the nation. You know, for the protection of the public.

So yeah, maybe if contributions don't determine how they vote after getting elected it's only because they sold themselves to contributors promising voting outcomes before they get elected. What's the difference?

But if the article contained bad scholarship, that much needed to be admitted. I hope someone does a more thorough followup- I think this topic is one deserving of study. But I get the feeling nobody in Louisiana is going to be eager to look at this issue again.
9.23.2008 12:54pm
Speaking as a recent editor, I think law reviews (including those at the "top" schools) are still trying to establish a set of best practices for cite checking empirical pieces--including (or perhaps especially) pieces, like the one in question, that aren't really "empirical" in the sense of using massive data sets and complicated regression analysis, but involve coding lots of cases and drawing conclusions from the results.

For some pieces, it's not really feasible to try to recheck the author's coding for accuracy. For example, if an author has gone through hundreds--and perhaps over a thousand--circuit court opinions using some form of agency deference and categorized each along, say, five different variables, my sense is that editors are probably going to take the author's word for it (and perhaps post the raw data on line or in an appendix as a substitute for thorough cite checking). Needless to say, this opens the door for serious errors to creep in--e.g., what happens when the author miscategorizes many of the cases in the study?

One obvious benefit of student editing, as compared to peer review, is that students are generally willing and able to do the hundreds of hours of grunt work involved in cite checking. But the recent trend towards empirical pieces puts a strain on those resources, which are great but not unlimited.

EV, do you have a sense of how law reviews should cite check pieces like this one? It sounds like this particular article might not have involved so many cases that regular cite checking was impracticable (and, as noted, may have had other red flags like internal inconsistency). But what if the study had been nationwide and had relied on 1,000 cases? Slog through it (or don't accept the piece for publication if you're unwilling to do so)? Trust the author, but publish the data in an appendix? Some hybrid (e.g., spot check some portion of the cases to see if the author is generally accurate)? It seems like an issue of increasing importance going forward.
9.23.2008 1:06pm
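[The spot-check hybrid Anon321 describes can be sketched in a few lines: re-verify a random sample of the coded cases and escalate to a full slog only if the observed error rate crosses some threshold. The function below is a hypothetical sketch; the default sample size and threshold are illustrative, not a recommendation:

```python
import random

def spot_check(case_ids, verify, sample_size=50, max_error_rate=0.02, seed=1):
    """Re-verify a random sample of coded cases.

    verify(case_id) -> True if the coding matches the published opinion.
    Returns (observed_error_rate, full_check_needed).
    """
    rng = random.Random(seed)
    sample = rng.sample(list(case_ids), min(sample_size, len(case_ids)))
    errors = sum(1 for cid in sample if not verify(cid))
    rate = errors / len(sample)
    return rate, rate > max_error_rate

# Toy run: a coding with no errors passes; one that is always wrong escalates.
print(spot_check(range(1000), verify=lambda c: True))   # (0.0, False)
print(spot_check(range(1000), verify=lambda c: False))  # (1.0, True)
```

A sample of 50 out of 1,000 is far cheaper than a full recheck, and a clean sample at least bounds how bad the coding is likely to be. -EV]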
Xanthippas (mail) (www):

Law students, especially those on the Law Review are probably democrat by 30-1.(And personally, I would also bet money that on most Law Reviews, republicans/conservatives aren't represented at all.)

That's a ridiculous statement and I'm not sure what you think the nature of law review is that conservatives would be discouraged from participating in it. I attended law school in Texas, and there were a fair number of conservatives on law review. In fact, it was probably an even split.
9.23.2008 1:08pm
Bill Poser (mail) (www):

A social science is one in which "data" is the plural of "anecdote".
9.23.2008 1:08pm
Conservatives are discouraged from law review participation. I am conservative and was on law review.
9.23.2008 1:17pm
Eugene Volokh (www):
Albert: The tarnishing of the brand happens subconsciously, I think. I don't think virtually anyone will say "Oh, you were on the law review when it published that article that was chock full of errors about people I admire; I won't hire you." But for some such employers, the first thing they'll think about when they see "Tulane Law Review" on the resume will no longer be "elite, hard-working, good grades, good writing and editing experience" but rather will be "screwed up, didn't check the data, had to publicly apologize, published unfounded allegations against people I respect."

Instead of having strongly positive connotations, the brand may, at least for a time, have negative connotations, at least to many Louisiana lawyers and judges. That may be unfair to individual law review members, but my sense is that this is the way branding works in people's minds.
9.23.2008 1:46pm
I'm afraid it is my nature to point out the obvious here:

Forcible rape causes ice cream sales.

Baskin-Robbins is a major beneficiary of this principle, and I'm very surprised victim advocates haven't started a boycott already.
9.23.2008 1:57pm
Sean M:
I think Anon321 is correct. Cite checker time is cheap, but not free. It is plentiful, but not infinite. Empirical studies strain those finite resources.

Also, as a cite checker, I beg people to distinguish between editors (who are responsible for the flow and logic of the piece) and cite checkers, who are responsible for, well, the cites -- and perhaps some very light textual editing (for grammar, spelling, and so on).
9.23.2008 2:09pm
Dave N (mail):
I suspect that law review membership pretty well tracks the student body as a whole. That is, if the student body is 60% liberal, 40% conservative (or whatever other percentage you choose), then that will be reflected in the membership of the law review.
9.23.2008 2:23pm
TJ (mail):
There are three other flaws in this article that I believe the law review editors could have and should have recognized.

First, there is no indication, or discussion even, of whether the actual votes of the justices in a particular case were objectively wrong, outside of the majority view, or against the justices' judicial view, such that the campaign contribution could have had any measurable influence upon a justice's particular vote. I mean, proof of influence from a campaign contribution would have to involve a showing that a justice voted a particular way due mainly to the fact of the contribution, and not due to his or her appreciation of the legal principles and facts involved in the case. Without such a showing or evidence of quid-pro-quo, then there can be no reasonable or non-speculative, or non-defamatory, discussion of causation, so the correlation produced is simply of no moment. Indeed, there is no discussion of the legal and factual issues presented in any of the cases used by the authors in their so-called study.

Second, the authors of the article looked only at contributions from lawyers, rather than include those from PACs or business and insurance groups, or individuals. However, it is well known that business and insurance groups have a keen interest in Louisiana judicial elections, and donate large amounts of money to judicial campaigns. Thus, focusing solely on lawyer contributors, and not including other types of contributors, is at the outset an inquiry of very limited and distorted value, if any.

Third, the authors of the article clearly presume that a justice who has a known judicial philosophy of, say, being pro-defendant and then votes pro-defendant in nearly every case is de facto a less-biased and less-easily-influenced judge than one who votes roughly equally for the plaintiff and for the defendant. The underlying premise of the article is that pro-defendant, a/k/a conservative judges are not influenced by contributions, whereas voting for the plaintiff means the judge is influenced by contributions and renders the judge a pro-plaintiff, a/k/a liberal judge. The authors' parameters were clearly pre-selected to support such a correlation, but such a premise is preposterous in my view, ignores reality, and is tantamount to claiming that defendants are always innocent of any wrongdoing and plaintiffs are always bringing frivolous lawsuits against them.
9.23.2008 2:29pm
KS (mail):
I'm continually amazed that empirical work is published in law reviews and relied upon without a peer-review process. Given how quickly the error was found, I doubt this article would have been published as-is in a peer review journal.

The article has already done its damage: Kitty Kimball is getting very bad press and has been attacked during the current judicial elections. Her opponent still uses the article on his website.
9.26.2008 1:05pm