[Andrew Morriss (guest-blogging), August 15, 2005 at 9:03pm] Trackbacks
Scholarly impact and rankings:

Ranking systems like U.S. News' require lots of data to make them work. This is problematic for things like the reputation survey, because that survey is likely generating mostly silly numbers. But there is some hope that a new source of data will become available via SSRN.

At the rankings conference last spring, UT Prof. Bernard Black and U. Cincinnati Prof. Paul Caron presented a work in progress on the SSRN rankings. Unfortunately it doesn't appear to be online yet, but it should be at some point, since it is in the conference issue of the law journal that will come out later. You can look at the tentative versions of SSRN's rankings data for schools and authors (the link requires logging in, I believe).

Assuming they fix it so I can't enhance my rankings by paying my kids to download my papers repeatedly, which I am sure they will, this is potentially a very interesting development. Having a reasonably objective measure of faculty productivity and impact will give a much better picture of schools' scholarship. Not perfect, of course, but better than what we have now.

UPDATE: The Black and Caron paper is available here.

Do you think there is a correlation between how many times a professor's law review article is cited and the quality of education a student receives from him/her?
8.16.2005 8:47am
Lawbot, I doubt it. Hence the heading "scholarly impact" and not "teaching quality."

For better or worse, there are a number of reasons other than educational quality that factor into prospective law students' decisions. And, if a student wanted to be an academic, going to a school full of professors who write well and choose their topics well might be a bonus if the student expected to have an opportunity to pick those professors' brains about their writing.

On the other hand, a student who picks law schools based on the quality of education he/she expects to receive may want to know about a school that has a disproportionate scholarly impact (i.e. the faculty is so productive that there's no way that they have time to do a competent job of teaching).
8.16.2005 9:58am
Meredith (mail):
Lawbot, I would imagine that someone who gets cited a lot knows how to get cited a lot, so she could teach that better than someone else...

Sure, there are a lot of professors who don't get published as much who are still very good, but the legal field is about writing skill, and the most cited are probably near the top of their game in other areas, so it's probably at least a reasonably reliable method.

I just wonder if WB gets any help from his previous link to his paper?

Just kidding, I just found the link placement amusing.
8.16.2005 10:27am
"Assuming they fix it so I can't enhance my rankings by paying my kids to download my papers repeatedly, which I am sure they will, this is potentially a very interesting development."

While it is certainly an interesting development, they would have to make a lot more changes to SSRN than solving the above problem to make the ratings particularly meaningful. Just to state a few problems with this rating method:

1. Economics professors, corporate scholars, and IP scholars use SSRN much more often than other legal professors. Their papers dominate the most downloaded papers lists. While these are interesting fields, they are extremely overrepresented in the ratings.

2. Blogs make a big difference: With no offense intended to the fine writers at Volokh, of course, who would no doubt be well received without their blog, a mention on one of the more popular blogs can have a very substantial impact on downloads. Take a look at the SSRN page for that "Perfect Crime" article in Georgetown that Volokh posted about a few months ago. It's a fun article, but should it really have far more downloads than anything else Georgetown is publishing this year?

3. Specialty articles make too big of a difference: Articles that rate law schools or law faculties, for example, get a huge number of downloads; typically more than 10-15 good articles at well-regarded journals combined. Do these articles have that much more academic impact?

4. A few big name scholars drastically skew things: There are a handful of scholars who get more reads than entire other schools. These scholars deserve recognition, but do they actually have more scholarly impact than entire faculties?

5. Older scholars are underrepresented: At least at my school, a number of older scholars are still quite prolific but don't use SSRN unless forced.

6. There's no standard for what to upload: Some professors only upload articles they are still working on and would like feedback on. Others upload articles they wrote 6 years ago, along with their various book chapters.

7. Downloads just aren't the same as actual citations: Setting aside whether even citations mean anything important about scholarship, a citation, at the very least, means that another scholar respected an article enough to reference it in his or her own work. Downloads don't mean the same thing. An abstract could sell a paper as offering an entirely new schema for analyzing an area of law, and the paper could turn out to be garbage; but SSRN would already reflect the paper as influential. Heck, the attached paper could be a blank PDF with a catchy title, and it would do well.

8. There needs to be some standardized means of presenting the data: Which is more impressive? A professor who wrote one article that got downloaded 350 times, or a professor who wrote two that got downloaded a total of 500 times? If you click on total downloads, SSRN shows the answer as the latter. If you click on average downloads per paper, SSRN shows the former. Which is more significant?
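Point 8 can be made concrete with a toy sketch. The professor names and download counts below are invented purely for illustration (they are not SSRN's actual data), but they show how the same two faculty members swap places depending on whether you sort by total downloads or by downloads per paper:

```python
# Hypothetical data: Prof. A wrote one article with 350 downloads;
# Prof. B wrote two articles with 500 downloads combined.
profs = {
    "Prof. A": [350],
    "Prof. B": [300, 200],
}

def total_downloads(papers):
    """Sum of downloads across all of a professor's papers."""
    return sum(papers)

def mean_downloads(papers):
    """Average downloads per paper."""
    return sum(papers) / len(papers)

by_total = sorted(profs, key=lambda p: total_downloads(profs[p]), reverse=True)
by_mean = sorted(profs, key=lambda p: mean_downloads(profs[p]), reverse=True)

print(by_total)  # ['Prof. B', 'Prof. A'] -- B leads on totals (500 > 350)
print(by_mean)   # ['Prof. A', 'Prof. B'] -- A leads per paper (350 > 250)
```

Neither ordering is wrong; they just answer different questions, which is exactly why some standardized presentation would be needed before the numbers could be compared across faculties.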

Those are a few concerns that come to mind.
8.16.2005 11:11am
Reader from Boston:
It seems that the legal field and the social sciences are far behind the hard sciences in this regard. Thomson's Web of Science pretty much sets the gold standard for ranking the impact of academic authors. Many countries in the Asia Pacific even use its citation rankings to make decisions about tenure, funding, etc. And the web page is always worth a few minutes (and it does include some social sciences as well).
8.16.2005 12:01pm
frankcross (mail):
I have to admit that I'm a little lost by the value of the download measure. At most, it indicates that the topic or the abstract is interesting and says nothing about the quality of the paper, because it hasn't been read at the time of downloading.

Otherwise-unpublished SSRN papers are increasingly cited, so citation to the manuscript seems the better measure.
8.17.2005 4:28pm