A New Kind of Law School Ranking:
In a forthcoming issue of The Green Bag, Editor-in-Chief Ross E. Davies announces that the Bag will introduce a new kind of law school ranking: The Deadwood Report. Davies explains:
  Law schools generally hold themselves out as institutions led by faculties whose members are committed to teaching, scholarship, and service. This does not mean that law schools employ some faculty to teach, some other faculty to engage in scholarship, and some other faculty to engage in service. Rather, faculty members contribute in all three areas. . . .
  The Deadwood Report will simply test the accuracy of that picture. Our focus will be on the most dully objective of measures: whether the work is being done -- whether each law school faculty member is teaching courses, publishing scholarly works, and performing pro bono service. . . .
  Bearing in mind that we will inevitably have to work out kinks as we go along, we plan to proceed roughly as follows:

  Step 1: We will download a law school's web pages containing (a) its list of "faculty"; (b) its current and recent course schedules and catalogs; and (c) its individual faculty profile pages containing vitas or lists of publications.
  Step 2: We will compile our data. We are interested in providing information about the current state of a school's faculty, so our focus will be on recent scholarship and recent teaching (and, in due course, recent service). A school whose faculty is heavy with people who used to be active might do well in a citation or reputation study, but it will do poorly in the Deadwood Report. After all, should today's students be enrolling in schools where the faculty used to be engaged, or in schools where the faculty is engaged now?
  Step 3: We will analyze. We are still working on the finer points of our sorting and weighing of various kinds of teaching and scholarship, but we are committed to a few basic ideas, including the following: First, we are interested in well-rounded, active faculty members, and so we will give more weight to the moderately active teacher-writer than to the hyper-writer who neglects teaching or the hyper-teacher who neglects writing. A specialist in neglecting both won't be worth much. Second, we are interested in well-rounded, active faculties, and so we will seek to avoid perpetuating illusions of faculty strength that can result when one or two or a few members of a faculty publish and teach a great deal, while the rest do relatively little or nothing. . . .
  Step 4: We will send each school's dean our school-specific preliminary results, and invite him or her to send us a reasonably quick response identifying any inaccuracies in our work or on the school's website.
  Step 5: We will correct our errors. Then we will re-visit each law school's website and incorporate any corrections we find there.
  Step 6: We will publish our results.
  Step 7: We will do it all over again for the next school year.
Inside Higher Ed has an interesting story on the proposal, together with reaction to it from the likes of Brian Leiter, Carl Monk, and David Van Zandt.

I don't know if the rankings will be useful, but I would wager that the existence of "The Deadwood Report" causes law schools around the country to make sure that their websites have this information readily available and fully up to date. And if that's right, the Deadwood Report will be a success.
I don't know if the rankings will be useful, but I would wager that the existence of "The Deadwood Report" causes law schools around the country to make sure that their websites have this information readily available and fully up to date. And if that's right, the Deadwood Report will be a success.

It will be good if it pushes schools to do that. It'll be bad if it makes the law schools into even more aggressive propagandists than before and if they work harder at spinning their "information."
2.26.2008 2:11pm
Just Saying:
I dislike the explicit assumption that recent scholarship correlates with teaching skill. When I was a 1L, my torts professor was doing some cutting-edge work on international arbitration. That, of course, did me absolutely no good. If anything, less scholarship might reflect that the prof is dedicating more time to teaching -- something students might care about. Except for extremely attenuated benefits, there's not much reason for students to care about what scholarship their professors are engaging in.
2.26.2008 2:40pm
Some (mail):
Just a thought: Will this ranking system exclude professors invited to teach at other institutions for a semester or year? One of the problems at schools such as my alma mater is that schools with higher rankings in US News frequently "tried out" my favorite professors -- who officially remained on the faculty and on the school's web site during their time away -- before deciding whether to poach them outright.
2.26.2008 2:48pm
What about reputation? We know Yale is a good law school, for example, because it is Yale.
2.26.2008 3:05pm
I would prefer having a professor who can actually teach above all other criteria. Sure, if you find one who's a great teacher and happens to also publish a lot or do great pro bono work, fine. But what's going to help students most is not someone so busy on his own scholarship that he leaves most of the actual work of teaching his class to an assistant.

Having a small handful of professors there primarily for their scholarship or to lend prestige is one thing, but I'm fed up with the idea that ability to teach is the absolute last thing considered in evaluating a teacher.
2.26.2008 3:19pm
Just Saying:
Kenvee -- Agreed. As we've all read about, court citations of legal scholarship are way down. Students certainly don't read law review articles. A practitioner will generally avoid them as too theoretical. If there's any indication that policymakers look to law review articles for guidance, I haven't seen it. So, who benefits from law schools insisting on increasing amounts of shovelware scholarship?
2.26.2008 3:36pm
kiniyakki (mail):
Regarding the requirement that a professor have a "service" component to their work -- a couple of thoughts. First, past work experience should somehow be relevant. A person who worked 10 years at a major firm making lots of money is different from a gov't attorney who served country/state making less money and/or a person working in something like legal aid. Second, scholarship will presumably be evaluated based on a person's scholarly contributions over time -- since a person cannot be "scholarly" every day. So, service should be measured the same way. Perhaps a five year picture would be appropriate to evaluate both scholarship and service (and teaching for that matter).
2.26.2008 4:05pm
cledit (mail):
The presumption that "recent scholarship = good teaching" is not supported at all in the article and it runs counter to the existing research on that very issue. (The research shows, as everyone would expect, that teaching and scholarship aren't correlated.)

So, if the metric doesn't measure what it claims to measure, what is its purpose? Well, that metric would benefit any law school with lots of high quality recent publishing but a lower USNWR rank than its energetic profs believe it should enjoy. For example, George Mason.
2.26.2008 4:08pm
M (mail):
When I was in law school there was a quite strong (not perfect, but quite strong) positive relationship between being a good scholar and being a good teacher. At the very least the idea that the two are in conflict is highly dubious.
2.26.2008 4:33pm
LM (mail):
Unless they plan a more nuanced test than can be discerned from the article, this sounds sketchy on several grounds:

1. How is public service a measure of faculty quality? (And I ask that as an advocate and avid practitioner of private sector pro-bono work). Law clinics, to take one example, can be great educational vehicles, but doesn't that answer a different question than how much public service is being provided?

2. Unless they plan to factor in some sort of faculty/student ratio, downgrading faculty who specialize in teaching or writing makes no sense. Compare a school that can afford a very large faculty, some specializing in teaching, others in writing, to one with an undersized faculty of generalists. The former will provide better instruction* and generate more scholarship, but the latter will grade higher on this test. (* Assuming, as this ranking seems to, you elide the whole issue of how to compare the quality of one instructor to another.)

3. Disqualifying publication in the "home school" journal also seems questionable at best. Everyone wants to publish in the Yale or Harvard Journal/Review. Does it really give a fairer picture to penalize Yale and Harvard faculty who get in where everyone's trying to be than to give a little unearned credit to a guy at Ohio Northern to whom the resident journal may throw a bone? Does any top journal really dilute its quality with nepotism, knowingly or otherwise?
2.26.2008 4:48pm
frankcross (mail):
LM, there's little doubt that it is easier to publish in your "home court" journal. Just check the publication records of even some top researchers. Of course, it is unfair to punish Harvard and Yale profs for getting into their top journals, because some portion of those placements are deserved. However, because there is clearly some bias at all law schools for articles from professors from top schools, the discrimination against home court publication is probably more than countered by bias such profs receive at other high level journals.
2.26.2008 8:49pm
Michael J.Z. Mannheimer (mail):
The research shows, as everyone would expect, that teaching and scholarship aren't correlated.

Actually, my understanding is that some research shows a positive correlation between teaching and scholarship, some shows a negative correlation, and the most recent piece I've seen shows no correlation. I began reading that last-mentioned piece but stopped when I read that teaching skill was measured entirely on the basis of student evaluations. Student evaluations are a notoriously uneven method of measuring teaching skills -- see Deborah Jones Merritt's recent writings on the subject. For example, a typical student evaluation might ask how well the professor responded to questions. More astute students will realize that this means: "How well did the professor guide the students to answer their own questions for themselves?" Less astute students will think it means: "Did the professor spoon-feed me information so I could pass the final?" The sad truth is that skillful teaching is virtually impossible to measure other than through anecdotal evidence.

So what does the anecdotal evidence tell me? I certainly think I am a better teacher because I am a productive scholar. My scholarship forces me to keep up-to-date in the ever-changing world of constitutional criminal procedure and to think about the material at a much deeper level than might otherwise be the case when teaching a survey course like Criminal Law, Criminal Procedure, or Evidence. Are these things possible for someone who doesn't actively publish? Surely. But my own experience tells me there is a positive correlation between scholarship and teaching.
2.26.2008 9:12pm
LM (mail):

I certainly didn't mean to suggest there's no advantage to resident faculty. Just that the higher up the food chain you get, the more unfair the penalty of this ranking system. There are two reasons:

(1) Even assuming the top journals give an edge to their own, I can't imagine they'd risk compromising standards enough to hurt the reputation of the journal. The nepotism should be marginal, so throwing the publication out entirely seems harsh.

(2) The more prestigious the school and journal, the fewer comparable options there are for publication, thus the more perverse it is to punish someone who gets one of the few sought after slots.

To illustrate, let's stipulate that Yale has the top journal and two Yale faculty are competing for one publication slot. Typically, the better article should get it, and it will be thrown out under this rating system. The rejected article will probably go to another excellent, but less prestigious journal, and get full credit. Odd, no?

Now look at two professors from, say, Texas or Georgetown competing for one slot in their school's journal. The result will be like the Yale example, except these professors have many more comparable journals, and even a few better ones, where they can submit their articles. Bypassing their home journals entirely would remove a much smaller percentage of comparably prestigious publication options. Denying credit to the article of the resident faculty in this instance seems at least incrementally less punitive.

And so on down the line. Moreover, while the articles by resident faculty at the top journals should almost always be marketable to other respectable journals, by the time you reach the bottom tier, if your school's journal won't publish you, there's a pretty good chance nobody will. In that case, the nepotism seems most significant and most appropriate to take account of in the ratings.

Am I wrong? I don't teach, so maybe I'm clueless about what really goes on here. But that's what it looks from the cheap seats.
2.27.2008 3:12am
LM (mail):

But that's what it looks like from the cheap seats.

And that's how poorly we proofread in the cheap seats.
2.27.2008 3:15am