We found very strong evidence that there is a difference between the top tier of law schools and the rest of the market. This may seem obvious (of course Harvard is different from a 4th tier school) but it is important nonetheless.

Our basic regression model predicted change in median LSAT as a function of the law school's starting position (i.e. its quartile ranking in 1992), whether it was a public law school or not, average student loan debt, the change in Am Law 200 jobs in the metropolitan statistical area, and some strategic behavior variables (change in 1L class size, change in % of 1L class in part time program), and changes in academic and lawyer/judge reputation ranks. (We also tried some other variables, the full details are in the paper.)
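For readers who want to see the shape of such a model, here is a minimal sketch in Python. The variable names and the simulated numbers are hypothetical stand-ins chosen to echo the coefficients reported below, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # roughly the size of the quartile 2-4 sample

# Hypothetical stand-ins for the predictors described above
loan_debt = rng.normal(60, 15, n)      # avg. student loan debt ($000)
amlaw_chg = rng.normal(0.5, 2.0, n)    # chg in Am Law 200 lawyers in the MSA
ft_share_chg = rng.normal(0, 0.05, n)  # chg in share of 1Ls who are full time
class_chg = rng.normal(0, 0.10, n)     # chg in 1L class size

# Simulated outcome: change in median LSAT, built with a negative
# loan-debt effect like the one estimated for quartile 2-4 schools
y = 0.4 - 0.034 * loan_debt + 0.167 * amlaw_chg + rng.normal(0, 1, n)

# Ordinary least squares fit
X = np.column_stack([np.ones(n), loan_debt, amlaw_chg, ft_share_chg, class_chg])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[1] is the estimated loan-debt coefficient; with this simulation
# it should come out negative, close to the -0.034 built into y
```

Again, this is only a sketch of the functional form; the paper's actual specification and data are described in Regression Model 2.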

When we ran the regressions separately for the top quartile (the top quartile is roughly "Tier 1" in current *U.S. News* terminology -- the paper explains in more detail why we used quartiles rather than tiers) and quartiles 2-4, we found that the coefficients were quite different for the two groups. Here's one set of our results so you can see this:

| Variable | Quartile 2-4 | Quartile 1 |
| --- | --- | --- |
| Constant | .396 (.730) | -1.296 (.943) |
| US News Quartile 2 | 1.704** (.380) | |
| Top 16 | .845 (.537) | |
| Avg. Loan Debt ($000) | -.034** (.011) | .031* (.015) |
| Chg in Am Law 200 Lawyers (1000) in MSA, 93-03 | .167* (.065) | -.014 (.094) |
| % chg in proportion of 1L FT to 1L FT & PT, 92-04 | -5.405* (2.219) | |
| % chg in 1L FTE, 92-04 | -3.697 (2.086) | |
| N | 120 | 44 |
| Adj. R² | .269 | .211 |

My apologies for the ugly table - I haven't quite mastered the powerblogs table function. See Regression Model 2 in the paper for a clearer version (you can download the paper here.)

* indicates significance at the 5% level, ** at the 1% level. Standard errors are in ().

Take a close look at average student loan debt. We used this as a proxy for the real cost of a legal education, although it is far from perfect for these purposes. One problem is that this is not a change variable (we just had data for 2004 graduates), while most of our other variables (including our dependent variable) were change variables. We wish we had had better data, but sometimes in empirical work you have to use what you have. The interesting result here is that the **sign changes**: larger average student debt meant a larger (more positive) change in median LSAT for schools in the top quartile but had the opposite impact for schools in quartiles 2-4.

We don't think schools in the top quartile can move up just by raising their tuition. There are some excellent schools in the top quartile that don't charge an arm and a leg (although tuition everywhere seems to be higher than the $4/credit hour I remember paying to attend the University of Texas in the 1980s.) What we do think is happening here is that the top quartile schools are selling something that students think is worth paying top dollar for, and as a result, the schools are able to charge for it. This also doesn't mean that quartile 2-4 schools are cheap. But prospective students are engaging in some shopping among the schools where they receive offers -- if it means moving up into a top quartile school (or up within the top quartile), they will pay more. But among the quartile 2-4 schools, students are more price sensitive.

As I'll discuss later in the week, Bill and I think that one of the main things the top schools are selling is access to their on-campus interview programs that include many more top legal employers (= large firms) than lower ranked schools.

This relative price insensitivity among the top quartile schools gives them several advantages over the rest of the schools. First, it means the top schools have more money. Money matters -- it buys lots of things that make a school desirable, from top faculty to library books to improved facilities. It means the top schools have more money to buy students with high LSATs. Most importantly, it means that there are two quite different markets for law schools (or law students). Different strategies are needed to survive/rise in the top quartile than in the rest of the pack; different considerations are at work for students in choosing among schools in the two segments. More on this later in the week.

I went to an "elite" law school and used to think that everyone should go to the academically best law school he/she can, regardless of money, geography, "fit," etc. However, now that I've been on the (firm) hiring side for a few years, my opinion has changed. Most employers (other than the Cravaths and Wachtells of the world) will look at people from the top half to two thirds of a class from Yale, Harvard, Stanford, and a few others. But once you get away from those top schools, it seems that class rank, not school reputation, is what matters. Somebody from the top of her class at Suffolk has a better chance of getting a good firm job than somebody from the middle of her class at G.W., for example.

Finally, re your conjecture that "one of the main things the top schools are selling is access to their on-campus interview programs[,]" I would go further than that. Paul Fussell says somewhere in *Class* (IIRC) that Americans' version of a hereditary nobility is Ivy League degrees. A degree from an elite school gives you something that you can almost never entirely lose: folks don't have to know what your grades were or your class rank or whether you were on law review; simply the fact that you went there entitles you to a (rebuttable) presumption of competence, intelligence, what have you. That's worth a premium.

You guys made the same mistake as the last guy to post social science research on this blog: you're doing statistical inference when there's absolutely no basis for it in your dataset. Tell me, how do you interpret your p-values and standard errors?

"You're doing statistical inference when there's absolutely no basis for it in your dataset." Want to expound on this? Why is there no basis for this in their dataset?

"Want to expound on this? Why is there no basis for this in their dataset?" Let me turn the question around and ask you: What is the basis for doing statistical inference?

For example, the data aren't a random sample from a population, and none of the respondents are randomly assigned to control groups.

So what's the basis for statistical inference? To phrase it more explicitly, what's the scenario by which you can assume the usual null hypothesis for calculating p-values?

There is none; these are meaningless calculations.

class hour. (Yes, it was an Ivy, but still.) I understand that my school is now up to about $35K/year; I don't know how I would do that -- $22K/year nearly knocked me out of the game. (My parents were not in a position to pay for that kind of education for me, so I had to do it myself.) Ten years later, I'm starting to think law school isn't a bad idea, and I'm again wondering how I'll pay for it.

"You don't need a random sample when you have every case in a whole population to analyze." So why would you need inferential statistics?

=0=, you pay for law school with your vastly increased income. Of course, for your income to increase vastly, you have to be making very little now, and you need to make a lot when you get out. (I was able to predict that my income, e.g., would very likely increase by at least a factor of 2, and quite possibly 3. That made the cost/benefit analysis pretty easy.)

Yeah, part of the problem. I run a business now. My income, if I changed careers to become an attorney, would likely go up (but I certainly wouldn't think by a factor of 2-3; I don't do poorly now, and I don't think I can count on being *that* good). The problem is that I'd have to sell (or, more likely, simply close) the business, killing my income. Classic catch-22, indicating that I should have done it earlier. I'm just a little bored doing what I'm doing now (bespoke software development). Anyway, this is pretty far off topic, so I'll stop now.

Stress on the word, "rebuttable."

I'm starting to see the hiring process in my public defender office. Applicants from "elite" schools almost always make the first cut when there's an opening. I'm told we had hundreds of resumes for our last opening, so that's a big leg up.

"For example, the data aren't a random sample from a population, and none of the respondents are randomly assigned to control groups. So what's the basis for statistical inference? To phrase it more explicitly, what's the scenario by which you can assume the usual null hypothesis for calculating p-values?"

Random sampling from a population is necessary to ensure proper external validity -- i.e., to make sure that the conclusions you draw from the sample are generalizable to the population. In this case, they do have data from all law schools, but it's data only comparing the change between 1993 and 2002. If they want their data to be generalizable to changes in any other years, then doing inferential statistics is appropriate. In fact, a more appropriate design may have been to look at data from *all* of the years between 1993 and 2002 and use time-series analyses or Hierarchical Linear Modeling to look at the overall trends. This would have greatly increased the statistical power.

Random assignment to condition (experimental vs. control group) is a different issue -- that of internal validity, or how confident they can be in talking about *causality* when they interpret their results. Since there was no random assignment, they should avoid inferring things like "large student loan debt causes differences in LSAT change between lower tier and higher tier schools."

There is absolutely nothing wrong with doing these analyses--you can still examine the statistical relations between these different variables, with a few caveats about interpretation of causality.

The bigger issue is that the analysis doesn't test what Morriss and his colleague think it tests. They did two separate regressions and compared coefficients (esp. the student loan debt). To be able to say statistically that these coefficients differ between the two types of law schools, it would have been better to do one regression, including two extra variables: (1) a contrast-coded (-1 or 1) variable to say whether a school was top tier or not and (2) a variable that was the interaction of (literally, the product of) the top tier variable and the student loan debt variable. If the coefficient of that interaction variable was shown to be significant, then that would mean that top tier and lower tier schools had statistically significant differences in how student loan debt was related to changes in LSAT scores.
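The pooled interaction regression described above can be sketched with simulated data. Everything here is invented for illustration -- the only things borrowed from the post are the two sample sizes and the two loan-debt coefficients, used to build in the sign flip:

```python
import numpy as np

rng = np.random.default_rng(1)
n_top, n_rest = 44, 120            # sample sizes from the table above
n = n_top + n_rest

tier = np.where(np.arange(n) < n_top, 1, -1)  # contrast code: 1 = top quartile
debt = rng.normal(60, 15, n)                  # hypothetical loan debt ($000)

# Simulate the reported sign flip: +0.031 slope for the top quartile,
# -0.034 for quartiles 2-4 (coefficients from the table, data invented)
slope = np.where(tier == 1, 0.031, -0.034)
y = slope * debt + rng.normal(0, 1, n)

# One pooled regression: intercept, debt, tier code, tier x debt interaction
X = np.column_stack([np.ones(n), debt, tier, tier * debt])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# With the -1/1 coding, beta[3] estimates half the difference between the
# two groups' debt slopes; testing beta[3] against zero is the formal test
# that the slopes differ.
```

The design point is that comparing two separately estimated coefficients by eye gives no significance test; the interaction coefficient in the pooled model does.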

"If they want their data to be generalizable to changes in any other years, then doing inferential statistics is appropriate." What they are doing is not even remotely appropriate then, because the years 1993-2002 are nothing like a random sample from the "population" of all years. Moreover, the units of analysis are *schools*, not years. Look at the N's -- they are 120 and 44, not 9.

"In fact, a more appropriate design may have been to look at data from all of the years between 1993 and 2002 and use time-series analyses or Hierarchical Linear Modeling to look at the overall trends." How would this possibly have created a basis for doing statistical inference on the parameters they're looking at? Please, be explicit: instead of tossing out buzzwords, tell us what your model would be, and how you would justify it.

"Random assignment to condition (experimental vs. control group) is a different issue -- that of internal validity, or how confident they can be in talking about *causality* when they interpret their results." Causality is another issue, but it doesn't negate the inference problem. I don't dispute what you say about interpreting causality, but my point was that had they randomly assigned their subjects, that would have been a reason for doing statistical inference on their parameter estimates (in addition to providing more information about causality). In other words, calculating p-values could have told them whether the differences they observed were real, as opposed to a random result of how they assigned their subjects. That's a separate issue from causality.

"There is absolutely nothing wrong with doing these analyses--you can still examine the statistical relations between these different variables, with a few caveats about interpretation of causality." You can do the regressions, but to calculate p-values and standard errors is meaningless. If you disagree, I would ask you again to specify exactly what the model is, and where the randomness is coming from.
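One way to make the "where is the randomness coming from" question concrete is a randomization (permutation) test, the inference procedure that random assignment by itself justifies. This is a generic illustration with fabricated data, not an analysis of the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated experiment: 20 units, 10 randomly assigned to treatment
outcome = rng.normal(0.0, 1.0, 20)
treated = np.zeros(20, dtype=bool)
treated[rng.choice(20, size=10, replace=False)] = True

observed = outcome[treated].mean() - outcome[~treated].mean()

# Under the sharp null that treatment does nothing, every re-assignment
# was equally likely -- the physical act of randomization is the source
# of randomness the p-value refers to.
perms = 10_000
extreme = 0
for _ in range(perms):
    fake = rng.permutation(treated)
    diff = outcome[fake].mean() - outcome[~fake].mean()
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / perms
```

Without random sampling or random assignment, there is no analogous re-randomization scenario to refer the observed coefficients to, which is the objection being pressed in this thread.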

Real world now, what counts most (for law school applicants or anybody else pondering grad school, which seems mandatory) is GPA, then class rank, then anything like GREs/LSATs/MCATs, seemingly.

It's a hamster wheel, one nobody tells you about when you have to consider a college education in high school.

Something I see nobody considering when it comes to education is that problem. In high school, you're told basically that college is it -- that, to get a decent job (at least to start with), you don't need to worry beyond the 4 years or so to a BA/BS.

It's never said til you get there that, no, these days, even entry-level stuff seems to require a Master's degree.

With the way college costs are, I'm surprised there's not a backlash.