Ellsberg paradox, take 3:

Back to the Ellsberg paradox (so-called, 'cause it's not really a paradox). Based on a bunch of previous comments, let me summarize where we're at, with a simplified version of the paradox.

There are three balls. One is red. For each of the other balls, someone flipped a fair coin to determine whether it would be white or black.

You can imagine a number of lotteries based on a draw from these balls. For example, consider the following four lotteries:
Lottery A: Win $100 if we draw a red ball.
Lottery B: Win $100 if we draw a white ball.
Lottery C: Win $100 if we draw a ball that isn't red.
Lottery D: Win $100 if we draw a ball that isn't white.

Do you prefer Lottery A or Lottery B? Do you prefer Lottery C or Lottery D?

(This is different than the previous example in the following ways: First, I've given a specific set of probabilities for white vs. black. Second, I've made it clear that I'm not offering any lotteries, just eliciting your opinion. Third, I've made the prize $100, just to be more specific.)

It turns out that most people prefer A to B, and prefer C to D. This is inconsistent with expected utility theory, which says your preferences over lotteries should only depend on what the ultimate probabilities are and the utility of the item. More below the fold, including the answer to the question: "Who cares?"
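
For readers who want to check the arithmetic, here is a short Python sketch (purely illustrative; it just encodes the urn exactly as described above) that enumerates the four equally likely compositions of the urn and computes each lottery's overall chance of paying out:

    from fractions import Fraction

    # The two non-red balls are each white or black with probability 1/2,
    # so these four compositions are equally likely.
    compositions = [("red", "white", "white"),
                    ("red", "white", "black"),
                    ("red", "black", "white"),
                    ("red", "black", "black")]

    def win_probability(wins):
        # Average, over compositions, of the chance that one uniform draw satisfies `wins`.
        total = Fraction(0)
        for balls in compositions:
            total += Fraction(sum(wins(b) for b in balls), len(balls))
        return total / len(compositions)

    print("A (red):      ", win_probability(lambda b: b == "red"))    # 1/3
    print("B (white):    ", win_probability(lambda b: b == "white"))  # 1/3
    print("C (not red):  ", win_probability(lambda b: b != "red"))    # 2/3
    print("D (not white):", win_probability(lambda b: b != "white"))  # 2/3

So A and B have the same overall chance of winning, as do C and D, which is what makes the typical A-over-B and C-over-D pattern interesting.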

Joel B. (mail):
Sasha, I think you're making an unwarranted leap. I'd tend to prefer A to B and prefer C to D, but if I have to pay extra to have A over B or C over D, I won't. I don't care that much. I'd rather have one less thing to worry about (the random coin flip), but in this case its value is very low. For many, probably not even monetary. It's just easier. Whatever value A has over B and C over D exists (that'd be my choice preference), but it's not worth paying for.
10.11.2006 4:26pm
Sasha Volokh (mail) (www):
Fine, make it $100,000 instead of $100.
10.11.2006 4:28pm
Sasha Volokh (mail) (www):
Alternatively, if you're not willing to pay any extra money to get A over B, you should explain to me in what way you "prefer" A.
10.11.2006 4:29pm
Joel B. (mail):
I prefer it, in the sense that if someone asked "Which lottery do you want me to run?" I'd choose A and C. But if I had a ticket to B, I wouldn't pay to move over to A; I'd expect it to be an even trade. I suspect, however, that most would not want to make the trade, because they also similarly prefer A to B in that they don't want to worry about the underlying unknown distribution. I don't like worrying, I don't know many people who do, and the unknown is something you "worry" about. Do I pay to alleviate a derivative worry in a lottery? I wouldn't; probably some would, but I wouldn't. Even at $100,000, the benefit I receive from changing lotteries isn't worth anything monetarily. But if I get to choose, I'd choose A over B and C over D. Not every benefit is something I'd pay for, even if I prefer it, or enjoy having it.
10.11.2006 4:35pm
JRL:
This comment:

I don't like worrying, I don't know many people who do, and the unknown is something you "worry" about.
supports my position that this comment is wrong:

It turns out that most people prefer A to B, and prefer C to D. This is inconsistent with expected utility theory

Sasha's analysis is based on expected cash, not expected utility. And as Joel has pointed out, you get utility (or don't lose utility) from not having to worry. Thus, under an expected utility theory you would prefer A and C.
10.11.2006 4:44pm
Gary McGath (www):
Is there some subtle issue I'm missing? If you work out the probabilities, A and B each give a 1/3 chance of winning, and C and D give a 2/3 chance of winning. So A and B have equal average payoffs, as do C and D. The coins are flipped even if the black and white balls don't affect the payoff, so there's no difference in setup effort. Is there some other sense of "preference" that's relevant here?
10.11.2006 4:45pm
FFalcon:
Sasha, What you call a 'so-called paradox' is actually called a 'paradox'.
http://dictionary.reference.com/search?q=paradox&x=0&y=0
10.11.2006 4:51pm
Sasha Volokh (mail) (www):
FFalcon: Meaning 3, "any person, thing, or situation exhibiting an apparently contradictory nature," doesn't seem to apply, because it's not contradictory to itself, just contradictory to some other idea. Meaning 4, "an opinion or statement contrary to commonly accepted opinion," doesn't seem to apply, because if people really do commonly choose this way, then it's expected utility theory, not the Ellsberg "paradox," that's contrary to commonly accepted opinion!
10.11.2006 4:58pm
roy (mail) (www):
I'm with Joel B on this one. My "preference" (for B and D) was merely a tie breaker for options with equal utility. If I had to pay you extra to trade for a "preferred" ticket, it would no longer tie with the ticket I already have, and my "preference" would not apply. I suspect that's how it works for most people, at least those who can do the math.

And, to keep up the tradition of completely missing your point, I'll mention that I have to consider the probability of you actually paying on your "promise" as opposed to the probability of the lottery paying the winners, which seems to be a given.
10.11.2006 5:01pm
Sasha Volokh (mail) (www):
JRL: Your comment:

Sasha's analysis is based on expected cash, not expected utility.

is incorrect. Risk aversion is when you want to pay less than $50 for a coin flip to win $100. Say you want to pay $40. The "expected cash" is $50. But if you have a risk averse utility function, the expected utility to you is $40.

That's all fully consistent with expected utility theory. Why were you only willing to pay $40? Because the worry of not knowing what's going to happen is a real cost, and in particular, it makes you sad to the tune of $10. Under expected utility theory, the costs of worrying about outcomes are real costs.

In fact, when we're talking about the probability of a loss, people often pay money to avoid this "worrying cost." We call that insurance markets. An insurance company finds people facing this coin flip (e.g., a fire hazard that could destroy their house, which is similar to losing $100 on a coin flip) and offers to reimburse the loss (i.e., pay you $100 if the coin flip comes out wrong) in exchange for a fixed payment (e.g., $60).

This sort of "ambiguity aversion" is not accounted for in expected utility theory. But that's not because expected utility theory is incapable of talking about non-cash. On the contrary, the costs of worrying are at its core. Ambiguity aversion is just a form of non-cash cost that's not in the model.
10.11.2006 5:05pm
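
To put a number on the kind of risk aversion Sasha describes, here is a toy Python sketch. The square-root utility function is an arbitrary choice for illustration (Sasha's $40 figure would correspond to a less sharply curved utility); the point is only that a concave utility makes the certainty equivalent of the coin flip come out below the $50 expected cash.

    import math

    def certainty_equivalent(p, prize, utility=math.sqrt, inverse=lambda u: u ** 2):
        # The sure amount whose utility equals the gamble's expected utility.
        expected_utility = p * utility(prize) + (1 - p) * utility(0)
        return inverse(expected_utility)

    # A 50/50 shot at $100 under square-root utility:
    print(certainty_equivalent(0.5, 100))  # 25.0, i.e. this agent would pay at most $25

The gap between the expected cash and the certainty equivalent is the most such an agent would hand over as an insurance-style premium to make the uncertainty go away.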
Sasha Volokh (mail) (www):
Roy and Joel B: O.K., but I'll just put you down as people who are indifferent and use a tiebreaker. You guys don't satisfy the paradox.
10.11.2006 5:06pm
Daniel K:
Another question:
In this scenario, are you just drawing one ball total, and resolving all of the lotteries? Or is it a separate draw from among the three balls for each lottery?
10.11.2006 5:17pm
Dan Hamilton:
A & C. Why? You KNOW the Red Ball exists and that there are 3 balls. You don't KNOW if there are White Balls or if there are Black Balls. Why take a chance? No matter what is done with the other balls, you KNOW that overall 1/3 of the balls picked will be Red.

The others by math may come up to the same odds. But we are NOT talking over 100 plays, we are talking 1 play. It does make a difference, even if that difference is that you don't want to bet on a ball that might not even BE in the Bag.
10.11.2006 5:17pm
jvarisco (www):
People are stupid and irrational. This is news? Why do people play lotteries at all?
10.11.2006 5:20pm
Z Jones (mail):
Sasha writes:
Roy and Joel B: O.K., but I'll just put you down as people who are indifferent and use a tiebreaker. You guys don't satisfy the paradox.

I respectfully disagree. . . anyone who feels compelled to choose a "tiebreaker" is satisfying the paradox. Moreover, I would say that there is no rational basis for such a tiebreaker, and no reason why anyone should "worry" about the unknowns in B and D. For example, an irrational person could just as easily be more "worried" by choosing A over B. Perhaps they are more worried that, by choosing A over B, they had missed out on the "opportunity" to receive 2/3 odds instead of 1/3. Such worry, although it may be very real to that person, is not rational, and thus illustrates the paradox perfectly.
10.11.2006 5:20pm
Dave B (mail):
I may have just read this all too quickly and incompletely but I think in transposing there may have been a mixup.


"Or, if we make the coin flip slightly biased toward black, you should strictly prefer A over B, and you should strictly prefer C over D."


Your C example doesn't have its odds/utility changed by biasing toward finding a black ball. C defines a winner as any non-red ball, and changing the ratio of black to white balls doesn't change the 2/3 utility received from gambling on non-red. Whereas D (pick a non-white ball) actually becomes more attractive the lower the probability of finding any white balls.
10.11.2006 5:31pm
Lou Wainwright (mail):
Lump me with Roy and Joel as well. I'm indifferent, but that doesn't mean I wouldn't choose (although I'd choose A and D). I'm still at a loss why ambiguity aversion is paradoxical. Isn't it simply that expected utility theory is an incomplete theory to explain human behavior? You postulate above that people 'should' act according to EVU, but they don't; that's not a paradox.

Further people do not always avoid ambiguity. Take someone typical who chooses A and C, and then give them this option:

Lottery E: Pay $1, flip a fair coin, collect $2 on heads.
Lottery F: Pay $1, flip 30 fair coins, collect $2 Billion if they are all heads.

If you set the EV of E equal to the EV of F, you will find many of the people who chose A & C will now gladly accept the ambiguity of F. Further, in the real world, most people will accept a massive (30-50%) discount on EV to have the chance to play F.

To be fair, I've done a sleight of hand, swapping value for utility. Since utility is non-linear with dollars across 9 orders of magnitude, it may be that the actual EU of lottery F makes it the smart play. However, real world gambling shows that for a wide variety of payout plans (including those where the EU of F is clearly equal to or less than the EU of E) people will still demonstrate an attraction to ambiguity. So again, isn't the 'paradox' just another way of saying expected utility theory is incomplete?
10.11.2006 5:32pm
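
Taking Lou's numbers at face value (this is just arithmetic on his stated payouts, sketched in Python), the two expected values are not quite equal as written, and it is easy to see what jackpot would equalize them:

    # Lottery E: pay $1, flip one fair coin, collect $2 on heads.
    ev_E = -1 + 0.5 * 2                 # 0.0

    # Lottery F: pay $1, flip 30 fair coins, collect $2 billion only if all come up heads.
    p_all_heads = 0.5 ** 30             # roughly 9.3e-10
    ev_F = -1 + p_all_heads * 2e9       # roughly +0.86

    # Jackpot that would make EV(F) equal EV(E):
    equal_jackpot = 2 ** 30 * 1.0       # $1,073,741,824
    print(ev_E, ev_F, equal_jackpot)

None of which changes Lou's point: even with the jackpot trimmed so the expected values match, many people will still take F.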
Sasha Volokh (mail) (www):
Dave B: Thanks for pointing out the error.
10.11.2006 5:34pm
Sasha Volokh (mail) (www):
Z Jones: Whenever you're indifferent, you use some tiebreaker. There's no problem with its being something utility-irrelevant, like "the one with the prettier colored ticket" or "the one on the left." It's true that these people do give a rationalization of their tiebreaker that appears to illustrate the paradox. But since they're not willing to pay for the difference, I treat their "worry" as so weak as to conclude that they don't give it any real weight.

I worry about risks all the time, and that's why I buy insurance; if I say I'm worrying about a risk but I don't buy insurance, even if you present me with an easy to understand and actuarially fairly priced insurance policy, I think you'd be justified in discounting my claim to be really worrying.
10.11.2006 5:38pm
Sasha Volokh (mail) (www):
Lou: Yes, that's exactly what I'm saying -- expected utility theory is incomplete. I don't think this is a paradox, and I don't think people should follow expected utility theory in any normative sense. I don't believe I've said anything to the contrary -- in fact, I've agreed with you above.
10.11.2006 5:40pm
Dave B (mail):
Sasha, no problem. I mean, all these people are arguing in very analogical terms for something that, in your example, seems to me to be quite evident.

Now, I'm not an expert, nor am I a statistician. This is just an analysis of where I see this from my naive, but hopefully logical, perspective.

In your example, Lottery A offers a guaranteed 1/3, no more no less probability. Lottery B offers a minimum of 0, maximum of 2/3rds, depending on the coin tosses (both balls black gives 0, both white gives 2/3). While in Lottery C, your odds are a guaranteed 2/3. Finally, Lottery D has a chance of achieving 100% success, with a minimum of 1/3.

For the range of percentages given, I don't think the attractiveness of the gamble scales linearly. Both B and D offer a range of probabilities, but if you tell someone they have a bounded range of 0-66% versus 33-100%, then even though the ranges are mathematically the same width, the latter will almost always seem more attractive, because the potential for 100% success exists somewhere in the otherwise unknown tree of possible outcomes. There exist situations that give the appearance of 100%.

To me, this reminds me of the silly roulette problem: people who bet on red because black came up 5 times in a row. While it is illogical, because each spin has the same chance as the last 5 spins, people readily and excitedly believe otherwise.
10.11.2006 5:48pm
Gary McGath (www):
A lot of gambling choices are based not on ambiguity-preference or ambiguity-averseness but simply on lack of understanding of probability theory. How many people know how to calculate whether the one-flip or the 30-flip choice gives a greater expected return? Many will just say, "A billion dollars? Wow!"
10.11.2006 5:48pm
Marcus1 (mail) (www):

Then I'll offer to take C off your hands in exchange for B plus cash.


I must have missed the point. Why would a C-holder pay for B when C is a 2/3 chance of winning and B is only a 1/3 chance of winning?

I prefer lots of things that I probably wouldn't pay for though. There are always transaction costs, like having to make sure I'm not getting screwed, or having to reach for my wallet.
10.11.2006 6:11pm
PDXLawyer (mail):
Seems to me that ambiguity aversion is rational if: (a) you are risk-averse, and (b) you are not 100% confident of your ability to correctly analyze the ultimate result of the more "ambiguous" (ie, more analytically complex) situation.

In the real world, we see this all the time. Most people (even most rather successful business people) are not very good at doing the calculations that those on this blog take for granted. Moreover, in the real world, situations are rarely so sharply defined, so the risk of misunderstanding the lottery is significant (note that Sasha had problems with this even for this exceedingly simple example). Also, as a business lawyer I have observed that a good many scams are based on the hope that those transacting with the scammer will make a calculation error. One way of signaling an honest business intention is to offer transactions which are as unambiguous and easy to analyze as possible.

If you look at calculation error as a risk in itself, ambiguity aversion is perfectly rational.
10.11.2006 6:12pm
Duffy Pratt (mail):
People often have preferences for different tokens when playing Monopoly. I tend to like the shoe. How many people would pay to have the token of their choice? Even in Monopoly money at the beginning of the game?

Maybe one of the problems with utility theory is the idea that all human whim can be translated to dollars and cents?
10.11.2006 6:12pm
Jake (Guest):
Again, it seems to me that "ambiguity aversion" is simply giving a new name to the same underlying phenomenon we see in "risk aversion".

A risk-averse person avoids ambiguities in his potential payouts. An ambiguity-averse person avoids the risk of getting involved in a losing game.

Somebody who is ambiguity averse is really just risk-averse, it's just that he doesn't like the risk of playing a game where there's no chance of winning.

Ambiguity aversion looks less rational because you can set up a tighter arbitrage loop (although an informed player can make profits off of risk averse people too), but if put in a non-arbitrageable context it won't look so silly. In real life situations it doesn't seem crazy to prefer a fair game where you have a 1/3 chance to win over a game with a 2/3 chance of being rigged against you and a 1/3 chance of being rigged in your favor.
10.11.2006 6:15pm
Sasha Volokh (mail) (www):
Marcus1: Sorry, that was my second example, unrelated to the lottery with the three balls. It was an example where transitivity was violated, i.e., you had A > B > C > A. My bad for calling those options A, B, and C, rather than X, Y, and Z.
10.11.2006 6:18pm
Sasha Volokh (mail) (www):
PDXLawyer: Yup, nothing "wrong" with ambiguity aversion. Just contrary to expected utility theory, is all I'm saying.
10.11.2006 6:20pm
Sasha Volokh (mail) (www):
Jake (guest): You can call it risk aversion if you like, but it's not risk aversion. Risk aversion is a term of art that only refers to risk over outcomes. However you cut it, these lotteries don't differ in their level of risk over outcomes. Now it's true that both phenomena reflect a desire for certainty in some form, but they're still distinct phenomena.
10.11.2006 6:24pm
ruidh (www):
People are notoriously poor at calculating actual probabilities. When given the choice between a simple outcome they understand (or think they understand) and a more complex one that happens to have the same expected outcome, people will pick the simpler one. It could be they suspect fraud in the complex example or they suspect that they are being fooled into picking a disadvantageous game because it is more complex.

There's a lot more going on than just expected outcomes.
10.11.2006 6:25pm
AEW:
"Sasha's analysis is based on expected cash, not expected utility. And as Joel has pointed out, you get utility (or don't lose utility) from not having to worry. Thus, under an expected utility theory you would prefer A and C."

Expected utility theory assumes that you "worry" about risk in a specific way. It's true that your attitude toward risk determines your preference between

A: 10% chance of winning $100 and
B: $10 for certain.

Here, the distinction between expected utility and expected cash is relevant. Expected utility theory assumes, though, that you're indifferent between

A: 10% chance of winning $100 and
B: 20% chance of getting a 50% chance of winning $100

This is the assumption that's violated by people's preferences in the Ellsberg Paradox.
10.11.2006 6:25pm
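
The reduction-of-compound-lotteries assumption AEW is describing is nothing more than multiplying the probabilities through; a two-line check:

    from fractions import Fraction

    p_simple = Fraction(10, 100)                        # 10% chance of winning $100
    p_compound = Fraction(20, 100) * Fraction(50, 100)  # 20% chance of a 50% chance of $100
    print(p_simple == p_compound)                       # True: the theory treats them as one lottery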
plunge (mail):
I don't see where the idea that A/C is more certain than B/D comes from. Individually, B and D may be uncertain, but taken together, they make your odds very certain. The choices are the exact inverse of each other. You might have worse odds in B than you did on A, but since the balls are the same in C vs. D, you are paid back for it by matching good odds in D. And vice versa.
10.11.2006 6:26pm
Jeff R.:
Can't we just state that there's a significant amount of disutility arising from extra anxiety during the period between purchasing the ticket (or even starting while deciding whether to buy or not) and the resolution of the lottery?

(Also wondering if there's a way to make economists' collective heads explode with intransitive dice after reading the statement linking transitivity to (economic) rationality...)
10.11.2006 6:28pm
lucia (mail) (www):
I'm with Roy and Joel also! In the earlier problem, I thought that according to the game I was required to pick. I thought we weren't allowed to say "Give me whatever."

Under the current rules which specify that black and white balls will be represented in equal probabilities, I still prefer A /C over other choices. Yet, I would absolutely never pay even a vanishingly small amount of money for my choice. I wouldn't even hurry to the front of the line to get my choice.

I have a basis for picking A/C over others and I actually think it is related to utility-- but obscurely. My rules are:

1) First examine the utility and risk according to the rules as stated. Pick the highest utility or, as in portfolio theory, the highest utility adjusted for risk. Based on this, A/C, A/D, B/C and B/D are all equal.

2) If rule 1 results in a tie, gamble in a way that takes away any advantage of "simple minded" cheating on the part of the person operating the game. Do this even if cheating is claimed to be impossible. Based on this, I chose A/C. (Although, it's possible that if I did an MS-thesis amount of work on game theory I would pick something else.)

Still, as you see, I have no incentive to pay to change my choice of A/C because the moment the choice of A/C costs money, rule 1 kicks in. Applying rule 1, I select the cheapest of A/C, B/C, A/D, and B/D!

I think I'm using utility and risk in an obscure way and it may explain the desire to eliminate ambiguity based on the idea of utility and risk. When gambling, my acting in a way that eliminates the ambiguity in the probabilities generally (possibly always) reduces or eliminates the ability of an "opponent" to decrease my utility by violating the rules.
10.11.2006 6:30pm
plunge (mail):
"I still prefer A /C over other choices. Yet, I would absolutely never pay even a vanishingly small amount of money for my choice. I wouldn't even hurry to the front of the line to get my choice."

That doesn't make any sense. What can "prefer" mean if you wouldn't pay ANYTHING to get it?

Why couldn't the person cheat by not putting in a red ball at all, if you are so worried about impossible cheating? If you attach ANY likelihood to cheating being an option, then you SHOULD be willing to pay to do something you think can avoid it.

However, as I noted, the two lotteries being inverses of each other eliminates any need to worry about cheating, just as it eliminates any extra uncertainty.
10.11.2006 6:36pm
Jake (Guest):
Sasha: you may say a word means exactly what you say it means and no more, but that does not cabin the range of ideas to which we may apply the term.

Consider bungee jumping. Given the choice, many people pay to go bungee jumping, while others would pay to avoid bungee jumping (as seen on various reality TV shows where cash consequences are attached to a failure to do so).

Would you consider it more useful to call the people who would pay to avoid bungee jumping "risk averse", or to call them "equipment-failure averse"?

I'm just not clear on why we have to come up with a new term when we discover that people avoid the risk of playing a bad game, when we have a perfectly good term that we use to describe people who avoid the risk of a bad outcome.
10.11.2006 6:48pm
Duffy Pratt (mail):
"That doesn't make any sense. What can "prefer" mean if you wouldn't pay ANYTHING to get it?"

Preferring a thing X over Y means that, given a choice between X and Y, with everything else being equal, you would take X.

The mistake is to think that in all cases there exists a sum of money Z, so that a person who prefers X over Y, would then prefer (Y plus Z) to X. In this case, people are also saying that with all things being equal, they prefer X to Y, but then, if the choice was given between X and (Y plus Z), where Z is any sum of money whatsoever, that they would always prefer Y plus Z. Both of these things are not only possible, but fairly common.

Not understanding them means buying into the same basic mistake of economics, which is that everything can be translated to monetary value. It just isn't so.
10.11.2006 6:54pm
lucia (mail) (www):
Plunge,
That doesn't make any sense. What can "prefer" mean if you wouldn't pay ANYTHING to get it?


Easy. If I get to the front of a line, and someone says, "Would you like A or B?", I prefer A enough to intentionally utter the words "I'll take A". My alternatives are to utter the words "I'll take B", or "Whatever is easiest for you." None of the utterances "cost" me any more than the others.

You may not consider this a "preference" but I do.

On your next question:
In most gambling games, an opponent who doesn't put in any red balls at all would be easily caught. The person who cheats by fixing which are white or black would have plausible deniability and could say they didn't cheat. I'm assuming not putting in red balls is "difficult cheating". (Yes, hard and easy cheating are ambiguous and depend on details Sasha didn't specify.)

On to your claim that the problems are exact inverses: they aren't. If you actually run the numbers from the point of view of the opponent cheating and selecting white vs. black balls, you will see that if I flip a coin to pick A vs. B and C vs. D, the opponent can gain an advantage by selecting the colors of the white and black balls.

If I systematically pick A/C I can prevent him from gaining an advantage by fixing the black and white balls.
10.11.2006 7:00pm
plunge (mail):
"The mistake is to think that in all cases there exists a sum of money Z, so that a person who prefers X over Y, would then prefer (Y plus Z) to X. In this case, people are also saying that with all things being equal, they prefer X to Y, but then, if the choice was given between X and (Y plus Z), where Z is any sum of money whatsover, that they would always prefer Y plus Z. Both of these things are not only possible, but fairly common."

Only if you simply fail to understand what you are talking about. This is a description of perfect indifference, not preference.

I think what people here are doing is trying to find some way to rationalize behavior they've now realized looks somewhat odd or irrational.

"Not understanding them means buying into the same basic mistake of economics, which is that everything can be translated to monetary value. It just isn't so."

The fact that you can do so is a basic outcome of there being any logical coherence to people's preferences.
10.11.2006 7:03pm
deweber (mail):
There is one important assumption that is made in any analysis of probability: the experiment will be repeated "many" times. An experiment done only once is not amenable to probability analysis. This is not irrelevant in this case. If a subject sees this as a one-shot test, then the uncertainty in the probabilities will affect choices. If it is a one-of-many test, then the same problems may arise, but the subject who does not use probability is headed to the poor house (see Vegas).
10.11.2006 7:04pm
Fitzwilliam_Darcy99:
At one level the paradox demonstrates a violation of the hypothesis that people will act to maximize expected utility, but it doesn't strike me as surprising. The hypothetical is highly artificial and not consonant with any situation that an ordinary human being can expect to meet. It seems to me to be a very understandable instinct to disbelieve in the assumptions that we are told to make in this problem. A survival oriented animal, faced with a bizarre situation, will suspect a trap and act on the supposition that all is not what it appears to be. The instinct to play for certainty in this bizarre and artificial game seems to me a chip off the same block. I'm rather pleased that my fellow sapients feel that way: suspicion of a bizarre game we are asked to play is a sounder survival strategy than meekly playing along.
10.11.2006 7:07pm
uh clem (mail):
The apparent paradox is that most people can't compute probability. Which is not really a paradox in my mind.

I don't prefer A to B or B to A, since by my calculation the probability of winning either is the same. Likewise C and D.

Most people aren't like me. Yeah, I'm used to that by now.
10.11.2006 7:11pm
roy (mail) (www):
Plunge,

It's like I'm giving you the choice between the $100 in my left hand and the $100 in my right hand. You're going to pick one of them. That doesn't imply that you'd give up some of the money in order to get the money from that preferred hand.
10.11.2006 7:12pm
PDXLawyer (mail):
Sasha:

Maybe "ambiguity aversion" is just an example of truncated rationality, which relates to transaction costs. The reason it appears a paradox is that the example given seems to assume that transaction costs are negligible (as does the line of argument - "I prefer it, but I'd not be willing to pay any money").

I am the only practicing business lawyer I know who can even *calculate* expected values and apply this sort of theory to real life situations (eg, should we settle before summary judgment? - an inherently inexact, but nevertheless illuminating exercise in most cases). From the standpoint of my clients and colleagues it is a real risk whether I got all of this egghead mumbo-jumbo right. So, I still don't think it is *contrary* to expected value theory, just an example of the problems which can arise when economists neglect transaction costs (similar to when engineers ignore friction, or legal theorists ignore legal fees).
10.11.2006 7:16pm
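
For what it's worth, the back-of-the-envelope calculation PDXLawyer is describing looks something like this; the numbers below are made up purely for illustration, not anything he gave:

    # Hypothetical inputs (not PDXLawyer's figures).
    p_win_at_trial = 0.6
    award_if_win = 500_000
    remaining_fees = 80_000
    settlement_offer = 220_000

    expected_trial_value = p_win_at_trial * award_if_win - remaining_fees  # 220,000
    print(expected_trial_value, "vs. settling for", settlement_offer)
    # On expected value alone it's a wash; risk aversion, or doubt about the
    # inputs themselves, is what tips a real client toward settling.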
plunge (mail):
"You're going to pick one of them."

So? To say that I prefer the left hand means that I will pick it for some REASON, and that this preference is actually worth something to me.

"That doesn't imply that you'd give up some of the money in order to get the money from that preferred hand."

Think about it another way: which hand you get will be selected randomly. Would you pay any amount of money to be able to influence the choice? No. If not, you don't HAVE a preference for one or the other, and it's misleading to say that you do.

Also .9999bar = 1 :)
10.11.2006 7:18pm
liberty (mail) (www):
I'm just waiting for you to get rich off this. You are quite correct, so what's your business model, precisely? Just as many take advantage of differential prices to make a buck, you can take advantage of the "ambiguity aversion" because for you there is nothing ambiguous about it. I want stock.
10.11.2006 7:20pm
KeithK (mail):

Not understanding them means buying into the same basic mistake of economics, which is that everything can be translated to monetary value. It just isn't so.

I don't really consider it a fundamental mistake of economics. It's simply a consequence of trying to mathematically model a complex phenomenon. The only way to do it is to make certain assumptions like "everything has a monetary value". While this assumption may not be perfect, it can be effectively used to model a wide range of human behavior. That makes it useful.


The mistake is to think that in all cases there exists a sum of money Z, so that a person who prefers X over Y, would then prefer (Y plus Z) to X. In this case, people are also saying that with all things being equal, they prefer X to Y, but then, if the choice was given between X and (Y plus Z), where Z is any sum of money whatsoever, that they would always prefer Y plus Z. Both of these things are not only possible, but fairly common.

You're saying that someone could prefer X over Y, but would prefer Y + $0.01 over X. From an economics perspective that means that the preference is simply very small, but the overall assumption is still reasonable. I fully accept that there are cases where the monetary assumption fails, but your example doesn't seem to be one.
10.11.2006 7:20pm
Joel B. (mail):
I think what people here are doing is trying to find some way to rationalize behavior they've now realized looks somewhat odd or irrational.

For the record, I am not perfectly rational, and I am completely okay with that. I would not want to be Homo Economicus, but that's just me. Are there often good reasons for why I do what I do? Sure, not always, but the vast majority of the time.

In this case, I just prefer not having to think about the pre-drawing randomness. After all, if the pre-drawing randomness is .5(2) + .5(0), why should I care, when I can just take the post-drawing amount of 1(1)? It seems silly not to; it's just another layer of gambling. Why bother?

It's not, however, a perfect indifference. If given a truly uninhibited choice I'd choose A; to say that that's a "perfect indifference" because I then wouldn't pay (any amount) to exchange my B ticket for an A ticket seems like redefining preference to an accountant's definition: well, it's an immaterial preference (okay, I can accept that, but immaterial =/= 0). Part of it may be that there is a transaction cost involved: if I say I want ticket A, then am given ticket B, and Sasha wants to sell me an A ticket for B plus $.01, I'm still going to feel like I'm getting screwed by Sasha. Instead of desiring to "take the transaction," I'm just going to hold on and forget it; my mental state changes and I become fixed in my preference. Now if Sasha says, "Wanna trade a B for an A?", well, I'd probably take it, I mean why not, but I suppose it depends on my trust towards Sasha (what does he know that I don't?). This is based on my general assumption that most other people do not tend to engage in transactions with me unless it is to their benefit; often it is to our mutual benefit, but not always, and that's where I'm most suspicious.
10.11.2006 7:21pm
liberty (mail) (www):
PDXLawyer,

But what transaction costs? There are not always transaction costs, and when there are, they are not always as high as the differential preferences or prices, as many an importing company knows.
10.11.2006 7:22pm
Ilya:
This might come across as snarky, or even downright rude, but "ambiguity aversion" seems like an academic way of saying people are lazy and/or stupid.

Seems to me that people would prefer choices A and C because they are simpler to understand. Choices B and D may be the same in expected payoff but suffer from having to pay a complexity tax.

Since I believe that most people ARE lazy and/or stupid, it all makes perfect sense to me.
10.11.2006 7:25pm
Joel B. (mail):
Ilya -

I want to know in what way it demonstrates laziness or stupidity to decline to do extra work for no reason. Or, that is, why pay a complexity tax when you don't have to?

"snark on" Since I beleive most academics are unnecessarily complex and/or wasteful, it all makes sense to me."snark off"

(Just trying to be a little humorous.)
10.11.2006 7:31pm
PDXLawyer (mail):
Ilya:

You have correctly identified the basis of the information economy.
10.11.2006 7:35pm
plunge (mail):
Joel, that's a lot of words that don't manage to get anywhere. You still haven't described why it makes any sense to say you prefer A over B when you can't explain what it's worth to you.

As in my example, if you wouldn't be willing to pay ANY amount of money to take a choice over a random selection of something, then I don't see how you can say you prefer it in any sense.

You can say you are averse to extra thinking or ambiguity, but what that MEANS is that on some level, you would be willing to pay money not to have to think. Admitting to that would be perfectly rational. Thinking is time consuming and sometimes stressful (though it could equally be said to be rewarding) and that has an opportunity cost. However, that's not really how the question is framed: if the payoff is raised to 5 billion, then I think you would probably be motivated to think about the problem at that point. Posing the problem implies that you care.

Making up all sorts of extraneous extra details isn't part of the question, in the same way that inventing non-supplied motives and tricks in the Monty Hall dilemma isn't really answering the question as given.
10.11.2006 7:36pm
Dave B (mail):
Meh, I'd take choice D easily.
Ball possibilities given equivalent distribution:
R = Red W = White B= Black
Lottery D is finding a non-white ball:
R W B
R W W
R B B
R B W

RWB and RBW can be combined because they are equivalent in a randomly distributed ball container.

2/4 of the time I have a 2/3 probability of winning.

1/4 of the time I have a 3/3 probability of winning.

1/4 of the time I have a 1/3 probability of winning.

Who wouldn't pay for the quote from Anchorman "50% of the time it works all the time" (in regard to the cologne made from real panther!) In this situation "25% of the time your bet will win all the time." Given infinite bags and infinite trials :-P Right?
10.11.2006 7:43pm
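
Dave B's case counts check out; folding them back together (a quick Python sketch using his three cases) shows the unconditional chance for Lottery D is still 2/3, the same as Lottery C:

    from fractions import Fraction

    # Equally weighted coin flips give these composition probabilities (red is always present).
    cases = {"RWW": Fraction(1, 4), "RWB": Fraction(1, 2), "RBB": Fraction(1, 4)}

    # Chance of drawing a non-white ball (Lottery D) in each case.
    p_non_white = {"RWW": Fraction(1, 3), "RWB": Fraction(2, 3), "RBB": Fraction(3, 3)}

    print(sum(cases[c] * p_non_white[c] for c in cases))  # 2/3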
lucia (mail) (www):
Plunge:
Also .9999bar = 1 :)


I'm not even touching the 0.99999bar issue, but it leads me to ask this question:

Are you under the impression the common idiomatic usage of "preference" is somehow rendered obsolete when economists decide to use the word in a specialized mathematical way?

If, presented with a choice to "push button A" or "push button B", I consistently chose to place my finger on button A and push it, I'm pretty sure most non-economists would say I "prefer A".

The fact that I would switch to B for a vanishingly small, but nonzero, amount of money, doesn't make A not a preference in the idiomatic sense.
10.11.2006 7:49pm
Pendulum (mail):
A=B and C=D in pure mathematical value.

You should choose between A and B depending on whether you want to increase or decrease variance. There is no 'should' about which is preferred; it depends on factors external to the problem.

Enough said, no?
10.11.2006 7:56pm
plunge (mail):
"Are you under the impression the common idiomatic usage of "preference" is somehow rendered obsolete when economists decide to use the word in a specialized mathematical way?"

No, I'm thinking that it must have some meaning, as opposed to having no meaning at all, which is basically how you are defining it.

There is no such amount of money as a "vanishingly small but non-zero" amount. Amounts aren't vanishing anywhere: they are specific amounts. Interestingly, that's a key point in why .9bar = 1 too. Specific numbers are "going" anywhere. :)

I think you just don't LIKE the idea that you can convert any statement of preference into some measure of value.
10.11.2006 8:08pm
lucia (mail) (www):
Pendulum: The variances can be calculated based on information provided in the problem as posed. Your principle sounds correct to me. However, it doesn't help because VarA = VarB and VarC=VarD. (At least that's what I got.)

I suspect that's why Sasha and others find the problem interesting instead of trivial.
10.11.2006 8:08pm
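
Lucia's variance claim can be checked directly. Here is a sketch (again, just illustrative) that computes the unconditional mean and variance of the $100 payoff for each lottery; A and B come out identical, as do C and D:

    from fractions import Fraction
    from itertools import product

    PRIZE = 100

    def payoff_distribution(wins):
        # Unconditional distribution of the prize over the coin flips and the draw.
        dist = {0: Fraction(0), PRIZE: Fraction(0)}
        for coin_balls in product(["white", "black"], repeat=2):   # the two flipped balls
            for drawn in ("red",) + coin_balls:                    # one uniform draw of three
                dist[PRIZE if wins(drawn) else 0] += Fraction(1, 4) * Fraction(1, 3)
        return dist

    def mean_and_variance(dist):
        mean = sum(x * p for x, p in dist.items())
        return mean, sum(p * (x - mean) ** 2 for x, p in dist.items())

    rules = [("A", lambda b: b == "red"), ("B", lambda b: b == "white"),
             ("C", lambda b: b != "red"), ("D", lambda b: b != "white")]
    for name, rule in rules:
        print(name, mean_and_variance(payoff_distribution(rule)))
    # A and B: mean 100/3, variance 20000/9.  C and D: mean 200/3, variance 20000/9.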
plunge (mail):
Sorry, that should be "Specific numbers AREN'T going anywhere."
10.11.2006 8:21pm
liberty (mail) (www):
Actually, I think that there are already a lot of people making money off this basic recognition of irrational choice. The State is probably the biggest, with Indian reservations close behind. Lotteries and gambling take advantage of irrational probabilistic reasoning. Rationally, one would never choose to purchase a lottery ticket given the probabilities multiplied by the reward - for a probability of 1/10E8 * winnings of 10E5 dollars is not a good choice compared to a probability of 1 of a dollar amount of 1. One is worth $1 while the other is worth 0.001 dollars.

Yet it is not ambiguity that influences the choice - or rather, ambiguity is desired in this case; it's called hope and it's worth a lot of money to some people.
10.11.2006 8:42pm
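
Plugging liberty's round numbers in (taking the odds and the jackpot exactly as given):

    p_win = 1 / 1e8         # one in a hundred million
    jackpot = 1e5           # $100,000
    print(p_win * jackpot)  # 0.001 -- about a tenth of a cent of expected value per ticket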
lucia (mail) (www):
Plunge.
I think you just don't LIKE the idea that you can convert any statement of preference into some measure of value.


Don't like it? I think you are flat out wrong. People can have preferences that cannot be transformed into economic value. Plenty of people have tried to explain it to you. You keep trying to prove you are correct about reality using proofs based on your mathematical model. That's backwards. In science, models are validated by reality.

That said, even though I think there are preferences with zero economic value, I explained what I think is a tiny economic value of preferring A/C to A/D in my earlier long-winded comment. I simply believe that the economic value of A/C over A/D is so very small it can only be used as a tie-breaker when two choices otherwise appear to be equal in utility and risk. Rationally, one would pick (A/D + 1 cent) over A/C. (This assumes they net the 1 cent; netting zero puts us back to picking A/C.)

So my general idea is there are some economic values so small that they are exceeded by any vanishingly small positive definite value to be used in a tie breaker. You may think that can't possibly be. However, I think you may be suggesting that what I suggest can't happen, because one vanishingly small thing can't be smaller than another vanishingly small thing.

If that is what you meant, it's simply incorrect.

After all, engineers deal with limiting cases that require us to solve things that appear to say y = 0/0 all the time. [Simple example from high school calculus: Find y=sin(x)/(1-cos(x)) as x->0.] Near zero, one of the vanishingly small things is smaller than the other vanishingly small thing! Happens all the time in physics; why not in economics?
10.11.2006 8:48pm
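
Her calculus example does behave the way she says: sin(x) shrinks like x while 1 - cos(x) shrinks like x^2/2, so the ratio grows without bound. A three-line numerical check:

    import math

    for x in [0.1, 0.01, 0.001]:
        print(x, math.sin(x) / (1 - math.cos(x)))  # roughly 2/x: one "vanishingly small"
                                                   # term shrinks much faster than the other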
plunge (mail):
"People can have preferences that cannot be transformed into economic value. Plenty of people have tried to explain it to you."

Merely by torturing the English language. It doesn't mean anything, in any sense of the word "prefer," if you place NO VALUE AT ALL on choice A but then turn around and say you prefer it. This is no different than insisting that you are a space alien when you are not. It isn't an explanation for anything, it's just asserting that you can contradict yourself. OK, so you can contradict yourself.

"You keep trying to prove you are correct about reality using proofs based on your mathematical model. That's backwards. In science, models are validated by reality."

I'm not trying to use any particular mathematical model: YOU are trying to come up with some meaning of the word "prefer" that is compatible with "it doesn't matter to me in any tangible way whatsoever." You can't do it. It doesn't make any sense.

"So my general idea is there are some economic values so small that they are exceeded by any vanishingly small positive definite value to be used in a tie breaker. You may think that can't possibly be. However, I think you may be suggesting what I suggest can happen because one vanishingly small thing can't be smaller than another vanishingly small thing."

No, simply that there is no such thing as a "smallest non-zero thing." You cannot express a value for A that is non-zero but also smaller than any other amount I could later propose. 1 minus .9bar is not "a number infinitely close to 0 but not quite 0." It IS zero.

"After all engineers deal with limiting cases that require us to solve things that appear to say y = 0/0 all the time."

Good grief. LIMITS ARE NOT NUMBERS. Figuring out a limit is not the same thing as specifying a value.

"[Simple example from highschool calculus: Find y=sin(x)/(1-cos(x)) as x->0 ] Near zero of the vanishingly small things is smaller than the other vanishingly small things! Happens all the time in physics, why not in economics?"

That's not what's happening in any case. You are asserting that there is a number that has both the property of zero and also the property of a non-zero number. That's, simply put, nonsense.

If you assert that A is somehow preferable to B, then there must be SOME VALUE that you place on A that B doesn't have. If there isn't, then what can you POSSIBLY mean by you preferring A over B? And if so, then there must be some value that is slightly more and some value that is slightly less than that value, all non-zero.
10.11.2006 9:08pm
Jeff R.:
If you assert that A is somehow preferable to B, then there must be SOME VALUE that you place on A that B doesn't have. If there isn't, then what can you POSSIBLY mean by you preferring A over B? And if so, then there must be some value that is slightly more and some value that is slightly less than that value, all non-zero.

The problem here is that money is quantized while value is not. There is no amount of money smaller than a penny outside of a gas pump, and no amount smaller than a mil (0.1 penny) even there. So you can have things (like a preference for A over B) that do exist in the value world but can't be expressed in the money world because there are no increments small enough...
10.11.2006 9:16pm
Ken Arromdee:
Lottery A offers a guaranteed 1/3, no more no less probability. Lottery B offers a minimum of 0, maximum of 2/3rds

You're confusing scenarios:
1) In choice A, you get exactly 1/3, no more, no less. In choice B, you may get anywhere from 0 to 2/3 but with an expected value of 1/3.
2) In choice A, you win with a probability of 1/3. In choice B, you have a probability of winning that ranges from 0 to 2/3 with an expected value of 1/3.

In the first scenario, the two choices have the same expected value, but different risks, so you might prefer A to B.

In the second scenario, the two choices have the same expected value *and the same risk*. They are, in fact, identical choices. They're only presented differently. Any probability 1/3 win can be split up into several different outcomes, some with 0 probabilities of winning and some with 2/3 probabilities of winning.

The paradox comes when people treat the second scenario like the first. They know that they prefer exactly 1/3 over a random choice that averages out to 1/3. They falsely conclude that that is similar to the scenario "prefer a probability of 1/3 over a random choice that averages out to a probability of 1/3".

It's like another poster's example: do you want a 10% chance of winning, or do you want a 20% chance of a 50% chance of winning? There is no rational basis for distinguishing between the two even if you are risk averse, because they have exactly the same risk and are the same scenario with two different presentations.
10.11.2006 9:51pm
plunge (mail):
"The problem here is that money is quantized while value is not. There is no amount of money smaller than a penny outside of a gas pump, and no amount smaller than a mil (0.1 penny) even there. So you can have things (like a preference for A over B) that do exist in the value world but can't be expressed in the money world because there are no increments small enough..."

Simply put, that's irrelevant in economics, in logic, and in practice. If you are a stickler for only allowing strictly denominated currency in the discussion (which is a silly restriction in any case, but I'll humor it), all we have to do is arrange, say, chances on another lottery with the same odds, slightly tweaked, as an incentive.
10.11.2006 9:52pm
Duffy Pratt (mail):
Plunge, how much money would it take for you to stop believing your silly idea that every preference has an economic value?

Or let's put it this way: which movie do you prefer, The Godfather or Caddyshack? If you prefer the Godfather, how much would it cost to make you prefer Caddyshack? Is this use of the word "prefer" without meaning?

Let's go back to my Monopoly example. I know lots of people who reliably will choose the same token at the start of a game if given a choice. The same goes for choice of colors in backgammon. Some like playing black better, and some like playing white more. At the same time, they know that their preference is not worth anything, so they wouldn't give up anything in order to get their way. This is neither random, nor is it indifference; it's just a preference which carries no economic value.
10.11.2006 10:29pm
liberty (mail) (www):
"If you prefer the Godfather, how much would it cost to make you prefer Caddyshack? "

Give me $100 and I'll watch Caddyshack even though I prefer Godfather. Money doesn't change your initial preference but I prefer (watching Caddyshack + $100) to (watching Godfather + $0) for a given night's entertainment. Because then I can take my $100 and go do something to make me forget the horrible Caddyshack movie.

As for Monopoly, there is a certain pleasure in using the piece you chose. What is it worth? Eh... for me not very much. Give me $20 and I'll forget about it.
10.11.2006 10:42pm
plunge (mail):
"Plunge, how much money would it take for you to stop believing your silly idea that every preference has an economic value?"

What, you're giving up defending one silly idea to start suggesting a different misunderstanding of "prefer"?

"Or lets put it this way: which movie do you prefer, The Godfather or Caddyshack. If you prefer the Godfather, how much would it cost to make you prefer Caddyshack? Is this use of the word "prefer" without meaning?"

It's somewhat nonsensical, yes. It would make sense if what you said was how much it would cost to make me prefer watching Caddyshack GIVEN that I prefer the Godfather. Who said that preferences themselves for things would change when bribed? That's not what anyone is saying.

"Let's go back to my Monopoly example. I know lots of people who reliably will choose the same token at the start of a game if given a choice. The same goes for choice of colors in backgammon. Some like playing black better, and some like playing white more. At the same time, they know that their preference is not worth anything, so they wouldn't give up anything in order to get their way. This is neither random, nor is it indifference, its just a preference which carries no economic value."

Nonsense. If they care enough to pick that piece, then they value it, even if just a little. That's what it MEANS to like picking that piece, after all. There has to be SOME value beneath which you'd say "you know what, I could care less about that miniscule value: I'm still picking the shoe because I like it and your bribe is totally meaningless to me."
10.11.2006 10:46pm
Duffy Pratt (mail):
"There has to be SOME value beneath which you'd say "you know what, I could care less about that miniscule value: I'm still picking the shoe because I like it and your bribe is totally meaningless to me."

Why does there have to be? Why can't there be all sorts of things that are not measurable in terms of economic value? So far, all you say is that otherwise the idea of a preference is meaningless. But it is perfectly easy to test a preference. Ask a person, or test them by giving them a choice in several circumstances, and see if they act consistently. You don't need economics to understand preferences.

On the other side, there are preferences that are so strong that no economic value can be put on them. Some things are not bribable for some people. If it's possible for people to have preferences that are so strong that they can't be measured in economic terms (and not only is it possible, but it occurs all the time), then why is it not possible for people to have preferences that are so weak that they cannot be measured by economics? Why can't a preference exist which economics simply does not model?
10.11.2006 11:11pm
lucia (mail) (www):
Plunge:
YOU are trying to come up with some meaning of the word "prefer" that is compatible with "it doesn't matter to me in any tangible way whatsoever."

I don't believe I've said that. I am saying preferences can't always be translated into cash value. The word "tangible" doesn't mean "can be translated into a cash value".

Plunge:
You cannot express a value for A that is non-zero but also smaller than any other amount I could later propose.

Let's shorten the negotiations: You propose any value that causes me to net a positive definite sum of money. I'll accept it. That's what I claimed I was willing to do in the first place; that's what I'm still willing to do.

Plunge:
Good grief. LIMITS ARE NOT NUMBERS. Figuring out a limit is not the same thing as specifying a value.

Who said limits are numbers? Who said specifying a limit is specifying a number?

Finally: Plunge, I think you are simply wrong about saying there must be a value difference translatable into money for one to have a preference. If you are correct, you need to prove it--or at least support the claim.

You seem to be making a circular argument--or at least I think you are. This is what it sounds like:

Your claim: There can be no preferences that can't be translated into cash values.

Several people have posted things they call preferences that can't be translated into cash values. They a) explain why the thing is commonly called "a preference" and b) show the preference has no cash value. If true these falsify your claim.

You reply: " Those aren't preferences because they have no cash values. This proves there are no preferences no cash values."

Unfortunately, that's a circular argument. The first statement relies on the truth of the second statement. The second is supposedly proven by the first.

Maybe I'm misunderstanding you. If you do have a different proof, could you repeat it?
10.12.2006 12:36am
lucia (mail) (www):
I meant:

You reply: " Those aren't preferences because they have no cash values. This proves there are no preferences with no cash values."


Someday I will learn to proofread comments before clicking post. (That day is probably in the far distant future.)
10.12.2006 12:43am
Dave B (mail):
Ken Arromdee,

The lotteries C and D are negations of the first two; they cannot have the same probability of winning as their inverses in the exact same situation. If the probability of finding a red ball is 1/3, then the probability of drawing a non-red ball is 2/3.
10.12.2006 1:56am
DJR:
Plunge:

Imagine a gumball machine with many colors of gumballs, all of which taste exactly the same. The machine costs one penny for a gumball. Next to that machine is another, that has all red gumballs, which taste exactly the same as the ones in the multicolored gumball machine, but the cost is two cents.

I happen to prefer red gumballs (I really do), but that preference is not strong enough to pay two cents versus one cent for a gumball. So there.
10.12.2006 10:12am
DJR:
Thinking about my own example a little further, here is a paradox for you: in some cases, money itself has no monetary value, or at least its value is outweighed by its burden. I was stumped to come up with examples other than a gumball because in many cases I would be willing to pay a penny for a color preference, but then I realized that I value a penny so little that it is meaningless in many transactions. If, for instance, someone approached me on the street and told me they would give me a penny for taking my next breath, I would refuse the offer. I don't have any particular preference for not breathing, but I do have a preference for not carrying pennies around.
10.12.2006 10:36am
BBB (mail) (www):
A Math problem:

Q. Suppose that night is day and day is night. Does the sun shine during the night and is it dark during the day?

A. No, it is still dark at night and the sun shines during the day no matter what you and I suppose.

The same applies here. The House will get an advantage over you by knowing how the game goes even if the rules say the House will not.

Therefore you should choose A and C because if the lotteries offered are A and B, then there will be very few white balls. If the lotteries offered are C and D there will be lots of white balls.
10.12.2006 12:26pm
Duffy Pratt (mail):
BBB, what if the lotteries offered are A, B, C, or D? Sort of like craps: you can choose either side of any bet.

The way the house gets its money in casinos is by paying 5:2 on a bet on A or B, not usually by rigging the number of balls.

And with House lotteries (known as slot machines) they have some other methods to fix the odds in their favor, most notably the timing of payouts. That's how some Casinos can offer slot machines that pay back over 100%. (If the big payout comes only once every couple of years, they can invest the income between payouts and still make a profit while paying back over 100%)
10.12.2006 1:24pm
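
Duffy's timing point can be made concrete with toy numbers (they are illustrative, not his): a machine that takes in $10,000 a month and pays a single $252,000 jackpot every 24 months advertises a 105% payback on the $240,000 it collected, yet with the float invested at 0.5% a month the house still comes out ahead:

    monthly_take, months, jackpot, monthly_rate = 10_000, 24, 252_000, 0.005

    # Value of the accumulated take, with each month's intake invested until the jackpot hits.
    value_at_payout = sum(monthly_take * (1 + monthly_rate) ** (months - m)
                          for m in range(1, months + 1))
    print(round(value_at_payout))            # roughly 254,000
    print(round(value_at_payout - jackpot))  # the house nets a couple thousand dollars
                                             # despite the >100% advertised payback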
plunge (mail):
"Why does there have to be? Why can't there be all sorts of things that are not measurable in terms of economic value?"

See, this is why I think you are just afraid of economics rather than having a good argument as to why it's not applicable to everything. Economics isn't really about money at all. It's about comparison, preference, and how people make choices between lots of different options.

"So far, all you say is that otherwise the idea of a preference is meaningless. But it perfectly easy to test a preference. Ask a person, or test them by giving them them a choice in several circumstances, and see if they act consistently. You don't need economics to understand preferences."

Right, but that consistency is based on some concept of valuing some things over others. That's exactly what economics studies.

"On the other side, there are preferences that are so strong that no economic value can be put on them."

Again, I would argue that this is simply naive. We DO put a finite economic value on human life. We do this every single day when we make choices that shorten or put at risk our own lives and the lives of others, and do so at a particular rate.

"Some things are not bribable for some people."

Like what? Again, you can claim that you value human life so much that you wouldn't accept anything to take a human life. But then you will jump into an automobile and drive away, which basically proves my point for me. That's the fun of economics.

"If its possible for people to have preferences that are so strong that they can't be measured in economic terms (and not only is it possible, but it occurs all the time), then why is it not possible for people to have preferences that are so weak that they cannot be measured by economics? Why can't a preference exist which economics simply does not model?"

These are interesting questions, but how do they in any way refute the answers I've already given for them?

Are you really saying that you'd change your preference for the shoe if I attempted to bribe you with a fingernail clipping?
10.12.2006 3:44pm
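For what it's worth, the "particular rate" in the comment above is how economists back out a value of a statistical life from everyday risk choices; a minimal sketch, with invented numbers (the wage premium and risk increment are assumptions chosen only to illustrate the division):

# Hypothetical: workers accept $500/year extra pay for a job carrying an
# additional 1-in-10,000 annual risk of death. The implied value of a
# statistical life is the compensation divided by the risk increment.
wage_premium = 500.0          # assumed extra pay per year
extra_risk = 1.0 / 10_000     # assumed added annual fatality risk
implied_value = wage_premium / extra_risk
print(f"implied value of a statistical life: ${implied_value:,.0f}")
# prints $5,000,000 under these made-up numbers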
plunge (mail):
DJR: "Imagine a gumball machine with many colors of gumballs, all of which taste exactly the same. The machine costs one penny for a gumball. Next to that machine is another, that has all red gumballs, which taste exactly the same as the ones in the multicolored gumball machine, but the cost is two cents.

I happen to prefer red gumballs (I really do), but that preference is not strong enough to pay two cents versus one cent for a gumball. So there."

So there what? Again, economics is not about currency. Currency is just one tool applied in the world that you can think about economically. I don't have to limit my offers to the present level of currency. As another poster noted, carrying around and dealing with currency has a cost itself.

But what if I simply increased the distance you had to walk to get a red gumball for 1 cent vs. the multicolor gamble? I'm pretty certain that I could find a distance at which you'd walk to get red instead of the multicolor machine, and then increase the distance to the point where you'd be unwilling to do so. That's economics right there.

In fact, that value of distance is basically the whole business model of Starbucks: being everywhere, even to a ridiculous degree, is something their business relies on, precisely because most people value their coffee only marginally more than other coffee, if at all, and being super-available is the only way to avoid losing countless customers.

For instance, did you know that if you convert everything you own into dollar bills and then burn them all, the world is no poorer for it (other than, I guess, the worth of the paper and ink destroyed)?
10.12.2006 3:51pm
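The distance experiment in the comment above amounts to hunting for an indifference point, and that hunt can be written down as a bisection. A minimal sketch, in which the person's real answers are replaced by an invented 120-meter threshold (purely an assumption so the example runs; in practice each step would be a question put to the subject):

# Bisect on the extra walking distance until the red-gumball preference flips.
WALK_THRESHOLD_M = 120.0          # assumed stand-in for the real person's limit

def would_walk_for_red(extra_meters):
    # In a real elicitation this would be a question to the subject.
    return extra_meters <= WALK_THRESHOLD_M

low, high = 0.0, 1000.0           # assumed bracket around the flip point
for _ in range(20):               # 20 halvings narrow it to about a millimeter
    mid = (low + high) / 2
    if would_walk_for_red(mid):
        low = mid                 # still willing: the flip point is farther out
    else:
        high = mid                # unwilling: the flip point is closer in
print(f"indifference distance: roughly {low:.1f} meters")

The number the search converges on is the preference's price measured in meters rather than cents, which is the sense in which the argument is not about currency.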
plunge (mail):
lucia "I don't believe I've said that. I am saying preferences can't always be translated into cash value. The word "tangible" doesn't mean "can be translated into a cash value"."

You need to make up your mind: are you talking about economics, or are you talking about US currency? I'm perfectly willing to concede that there are many things that cannot be directly and accurately measured in US currency. But I'm not sure what the point of that is. They can still be measured and modeled in economics. Currency is a particular, specific tool used in economic exchanges. But economics is not fundamentally about currency. It's about how people deal with tradeoffs.

"Let's shorten the negotiations: You propose any value that causes me to net a positive definite sum of money. I'll accept it. That's what I claimed I was willing to do in the first place; that's what I'm still willing to do."

I have no idea what you are talking about here, sorry.

"Who said limits are numbers? Who said specifying a limit is specifying a number?"

You did, by implication. You tried to raise limits as an example in this discussion, which is about specific amounts, not limits.

"Finally: Plunge, I think you are simply wrong about saying there must be a value difference translatable into money for one to have a preference. If you are correct, you need to prove it--or at least support the claim."

I never said that this was about money, or if I did, I only meant money in the sense of anything of measurable value.

"Your claim: There are can be no preferences that can't be translated into cash values."

No, again: you can't have a preference if it doesn't mean that there is some reason you pick A over B that makes A more worth it than B. Otherwise you are indifferent.

"Several people have posted things they call preferences that can't be translated into cash values. They a) explain why the thing is commonly called "a preference" and b) show the preference has no cash value. If true these falsify your claim."

Except all those explanations have failed, as I've shown. And it seems like very few people have bothered to answer the questions I put to them.

"Maybe I'm misunderstanding you. If you do have a different proof, could you repeat it?"

I've already asked a number of questions that you haven't answered: try those first. And stop misrepresenting the argument. The argument about insisting on a CASH value is just setting yourself up for weaseling. What if tomorrow we destroyed every piece of currency except $1,000 bills? Would you start saying, the day that happened, that individual apples cannot have measurable values, since no one would buy an apple for $1,000?
10.12.2006 4:03pm
Duffy Pratt (mail):
"Right, but that consistency is based on some concept of valuing some things over others. That's exactly what economics studies."

The consistency doesn't have to be based on any other concept at all. A person can have a preference between two things, and the preference can both be arbitrary and consistent.

Also, you say economics studies these things, but instead of trying to explain human behavior, it sounds like you are trying to force human behavior to fit into some pre-existing theory.

As for stuff that is not bribable: What would it take for you to turn cannibal? Murder your parents? Tear an eight-month-old fetus out of your wife's womb, killing them both, and then drink their blood? Do you think everyone has a price, in money or something else, that would get them to do any of this stuff? If economics tells you that getting into a car and driving off is the same as any of the above, then maybe you need to be studying something else.

"Are you really saying that you'd change your preference for the shoe if I attempted to bribe you with a fingernail clipping?"

Your question reminds me of the WC Fields joke where there is a contest. The first prize is a week in Philadelphia. Second prize is two weeks in Philadelphia. I'm saying I can figure out that there is a preference by asking people and seeing how they behave: by looking at the choice that they make. I don't have to engage in hypothetical bribes to see whether people really prefer what they prefer.

Another example: I prefer green M&Ms to the other colors. Given a choice from a new bag, when sharing with my friends, I will always pick green. I know they all taste the same, so I wouldn't give up anything I care about to get the greens. But I prefer them all the same. I know other people who feel the same way (but not necessarily about the greens). In other words, I have a preference. I also know it is an irrational preference, but I keep it nonetheless. But I would give it up in a heartbeat if keeping it forced me to give up something else I cared about.
10.12.2006 4:14pm
lucia (mail) (www):
Plunge,

If you are going to claim that simply introducing the concept of functions (which are not numbers) implies the claim that functions are numbers, that's just silly!

Plunge:
No, again: you can't have a preference if it doesn't mean that there is some reason you pick A over B that makes A more worth it than B. Otherwise you are indifferent.


I'm fine with only needing a reason for my preference. I always have been. I just wouldn't pay to get my preference. See my first quite long comment on this subject:
http://volokh.com/posts/1160593925.shtml#149493

I may have misunderstood your objection, but I think your claim was that I could not have a preference unless I was willing to pay.
(http://volokh.com/posts/1160593925.shtml#149500)

I'd say more about your repeated use of both "pay" and "pay money", but the comments are all still there. The few people who care can scroll back and watch your evolution from "pay" to "get something tangible" to "have a reason".

Of course, if "have a reason" reverts back into "must be willing to pay money", we will be back to disagreeing. (And frankly, I'll be happy to simply agree to disagree.)

As to the rest, you have posted over and over that there is only "preference" and "indifference". Interestingly, economists use three terms: "strict preference", "weak preference", and "indifference". I know this much for sure: two things is not the same as three things!
10.12.2006 6:41pm
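A minimal sketch of the textbook definitions behind the three terms in the comment above, with a toy weak-preference table as the only input (the table itself is an invented example): strict preference and indifference are both derived from weak preference, so the three terms really are distinct.

# Weak preference: (x, y) in the set means "x is at least as good as y".
weakly_prefers = {                                   # invented toy data
    ("A", "A"), ("B", "B"), ("A", "B"),              # A beats B one way only
    ("C", "C"), ("D", "D"), ("C", "D"), ("D", "C"),  # C and D tie
}

def strictly_prefers(x, y):
    # Strict preference: x is at least as good as y, but not the reverse.
    return (x, y) in weakly_prefers and (y, x) not in weakly_prefers

def indifferent(x, y):
    # Indifference: each is at least as good as the other.
    return (x, y) in weakly_prefers and (y, x) in weakly_prefers

print(strictly_prefers("A", "B"))   # True
print(indifferent("C", "D"))        # True
print(strictly_prefers("C", "D"))   # False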
plunge (mail):
"The consistency doesn't have to be based on any other concept at all. A person can have a preference between two things, and the preference can both be arbitrary and consistent."

You're not making any sense. Either you prefer X over B, or you don't. If there is nothing that actually makes you value X over B, and there is no value you place on X over B, how in any sense is that a preference at all?

"Also, you say economics studies these things, but instead of trying to explain human behavior, it sounds like you are trying to force human behavior to fit into some pre-existing theory."

I don't see how so. It's basically just trying to apply logic to the question of preferences.

"As for stuff that is not bribable: What would it take for you to turn cannibal?"

The threat of the murder of my parents.

"Murder your parents?"

The threat of the murder of my entire extended family.

"Tear an eighth month old fetus out of your wife's womb, killing them both, and then drink their blood?"

If it would save the lives of a country.

"Do you think everyone has a price, in money or something else, that would get them to do any of this stuff?"

Some things people might not be psychologically capable of doing, sure. But that doesn't mean that we can't talk or explore how much they value certain things.

"If economics tells you that getting into a car and driving off is the same as any of the above, then maybe you need to be studying something else."

No, I don't think so. I think that you seriously just aren't interested in subjecting your decisions to any sort of deeper thought or scrutiny. You like poetic ideas, like the notion that life is infinitely important, but that doesn't mean anything.

What does mean something is that while you'll claim that, say, your sister's life is worth infinitely much to you, you'll take risks with her life at rates that suggest otherwise. You might also value moral principles more than her life if, say, you had to choose.

"Your question reminds me of the WC Fields joke where there is a contest. The first prize is a week in Philadelphia. Second prize is two weeks in Philadelphia. I'm saying I can figure out that there is a preference by asking people and seeing how they behave: by looking at the choice that they make. I don't have to engage in hypothetical bribes to see whether people really prefer what they prefer."

Um, you completely avoided the question. Nice try, but again: are you saying that there is really nothing, however worthless, that I could offer you that would get you to change your shoe preference?

"Another example: I prefer green M&Ms to the other colors. Given a choice from a new bag, when sharing with my friends, I will always pick green. I know they all taste the same, so I wouldn't give up anything I care about to get the greens."

You just already admitted that you did: you gave up the time and effort it takes to pick them out. :)
10.13.2006 2:19am
Duffy Pratt (mail):
Let's back up a bit. You seem to be saying that there must be some fixed idea of value before a preference has any meaning. I don't think that's true. I think the two are separate from each other. They may be related in some cases, but not necessarily in all.

You seem to be saying that, if you have two packages, A and B, and some person prefers A to B, then there will always exist two other packages, (B+P) and (B+Q), such that the person will prefer A to (B+P) and will prefer (B+Q) to A.

There are two mathematical consequences to this idea. One is that the number of packages is always necessarily infinite. The second is that the set of packages has the same cardinality as the real numbers.

But I think the idea that there will always be such packages is either false or unprovable.

But either way, it seems clear to me on this understanding that value is a derived notion, and preference is a primary notion. Thus, you don't need to have any concept of value at all for preferences to have meaning.

Now, suppose I concede that you are willing to barter in murder. For your idea to be right, you have to show that everyone is willing to do so. There are people who refuse to make such choices.

Also, all of your tradeoffs have a moral dimension, for lack of a better word. It looks like with the murder questions we are dealing with two different kinds of value, and it's not clear that the one will translate to the other. Is there any amount of money that would get you to murder your parents?

Back to the Monopoly shoe. I'm not saying that there is nothing so worthless that I would not turn it down. What I'm saying is that if you offered me anything that I would otherwise want, then I would give up the shoe to get it. I don't want fingernail clippings, and would rather not have them. So I'm not gonna give up anything to get them.
10.13.2006 9:15am
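One way to write down the claim Duffy Pratt attributes to plunge, as I read it (both the reading and the notation are assumptions on my part), using the strict-preference symbol:

\[
A \succ B \;\Longrightarrow\; \exists\, P,\, Q \ \text{such that} \ A \succ B + P \quad \text{and} \quad B + Q \succ A .
\]

Repeating the argument on ever smaller sweeteners is what would force the supply of distinct packages to be infinite; the further jump to the cardinality of the real numbers needs an extra assumption about how finely the sweeteners divide, since a dense but countable ladder of packages would also satisfy the displayed condition.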