A last (?) note about Easter and calendars

As background, see my posts about Easter and Christian calendars (first, second, third, and fourth).

Recall that Easter is defined as the Sunday after the full moon after the spring equinox. The astronomical spring equinox can fall on March 19, 20, or 21, but for Easter-calculation purposes, we define the spring equinox as March 21. Why do we do this? These days, we can figure out exactly which day the equinox falls on, and everyone will come up with the same answer wherever they are on Earth. But I’ll grant that, in past centuries, this would have been less precise and could have led (oh horror!) to wrongly placing the equinox too early, so that if a full moon and a Sunday followed immediately, Easter could have been celebrated before spring started. Moreover, it’s not just a matter of observing the sun in the current year, but also of predicting when the equinox will fall next year and the year after that, so you can plan Easter years in advance. So perhaps defining the equinox as March 21 was justified as a way of building in a conservative assumption about the start of spring, and of producing uniformity from year to year.
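For concreteness, here’s a minimal sketch of how the Western churches’ rule works out in practice, using the well-known anonymous Gregorian algorithm (often called the Meeus/Jones/Butcher algorithm); the function name gregorian_easter is just my label for it. Notice that the intermediate value h counts days from the conventional March 21 equinox to the tabular full moon, so the convention discussed above is baked directly into the arithmetic.

```python
def gregorian_easter(year):
    """Western (Gregorian) Easter date, via the anonymous Gregorian
    ("Meeus/Jones/Butcher") algorithm. Returns (month, day)."""
    a = year % 19                      # position in the 19-year lunar (Metonic) cycle
    b, c = divmod(year, 100)           # century and year-within-century
    d, e = divmod(b, 4)                # Gregorian century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    # h = days from the conventional March 21 equinox to the
    # ecclesiastical (tabular) full moon
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    # l = offset used to reach the following Sunday
    # (when m == 0, Easter falls l + 1 days after the tabular full moon)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451   # rare correction for late full moons
    month, day0 = divmod(h + l - 7 * m + 114, 31)
    return month, day0 + 1

# Recent examples: 2024 -> (3, 31), i.e. March 31; 2025 -> (4, 20), i.e. April 20.
for y in (2024, 2025):
    print(y, gregorian_easter(y))
```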

But by artificially producing uniformity from year to year, the method produces disuniformity from place to place. If we went just by the equinox, people around the world would basically agree on when it happened; at most, they would be off by a day or two. The same goes for the full moon. So most people would agree on Easter if you just observed the heavens rather than defining the equinox as March 21 (or using tables for full moons). But once you define the equinox as March 21, you have to ask: March 21 on which calendar? The Western churches reckon it on the Gregorian calendar, the Eastern churches on the Julian calendar, which currently runs 13 days behind; so you enshrine the 13-day (and growing) difference between the Western and Eastern church calendars. So, ironically, what you adopted to produce uniformity is actually a major cause of the disuniformity in Easter observance that we see today.

This puts me in mind of a story that circulates among economists. It’s about an economist (I forget who, but probably someone who does contract theory). Contract theorists think contractual incompleteness is a big problem, and this economist knew that people building their houses often get into litigation with their building contractors over something or other that wasn’t clearly specified in the contract. He determined to avoid this and to write a very detailed contract, one that specified a lot more contingencies than the standard form contract. Surprise: he ended up in litigation with the contractor over some contractual incompleteness anyway.

If that were all, it would just be a story about our inability to make everything complete. It doesn’t mean we can’t or shouldn’t try; conceivably he could have lowered his chance of litigation significantly, even if it didn’t help him this particular time. But actually it’s (apparently, from what I hear) more interesting than that. Most contracts are less incomplete than we think, because a lot of the incompletenesses are plugged by the common law of contracts. There’s a whole developed body of case law on what happens in recurring situations where the apparent contract falls short. I say “apparent contract” because the paper you sign isn’t the whole contract; the whole contract is the paper plus the whole body of law, defaults and the like, that fills the gaps.

It turns out that much of this common law of contracts was specifically designed around a particular standard-form contract. When the economist junked the standard-form contract and wrote a whole new one, he also (perhaps inadvertently) junked the common law that went with it. The result was that the gaps became a lot larger, and litigation more probable. The very act that was meant to reduce contractual incompleteness ended up increasing it.

There’s an analogy in here about Whack-a-Mole (in other words, about spontaneous vs. planned orders), but that’s left as an exercise for the reader.
