[Sasha Volokh, October 11, 2006 at 9:50am] Ellsberg paradox, take 2:

Looks like my Ellsberg paradox post below was pretty popular — about two dozen comments just in the first hour, between 11 p.m. and midnight (Eastern)! I'll repeat the problem below, then give my explanation. If you haven't done so before, you may want to think about what you would choose before reading the explanation.

There are three balls. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries. Lottery A: You win a prize if we draw a red ball. Lottery B: You win a prize if we draw a white ball. Which lottery do you choose? (Mini-update: I allow you to be indifferent, if you want.)

Now I give you another choice between two lotteries. Lottery C: You win a prize if we draw a ball that's not red. Lottery D: You win a prize if we draw a ball that's not white. Now which lottery do you choose?

UPDATE: Just in case you're confused about this — and apparently some people were — we're talking about the SAME THREE BALLS each time. I haven't changed the balls. Nor have I drawn any balls. We haven't conducted any lotteries in the time it took you to read this post. All there is is a single box of balls, and me asking you your preferences over lotteries. (END OF UPDATE)

UPDATE 2: You ask one of these questions, and you find out all sorts of aspects that you weren't expecting people to find important. This will affect how I phrase the problem next time, but for now, let me just clear up one extraneous aspect. I'm not running the lottery. I don't own the balls. I'm not offering a prize. Someone else, who isn't connected with me, is doing all that. I'm just asking questions about which lotteries you prefer. Also, as I mentioned in the first update, we don't draw any balls between your first choice and your second choice. In fact, we're never going to draw any balls. Why? I'm not running the lottery! I'm just asking questions!
If you want to draw balls, take it up with the guy actually running the lottery, who is not me.

There are two points here, one theoretical and another practical. I'll give you the theoretical point now, and save the practical one for a later post. If you know about expected utility theory, you can skip this paragraph and the next four.

Expected utility theory assumes that (to simplify) when you're faced with lotteries over, say, amounts of money, and each amount has some probability attached to it, and you have a utility-of-money function U, you choose which lottery you prefer based on the lottery's "expected utility," which is a kind of weighted average of the utilities of the different possible outcomes. So if I offer you \$1 if a fair coin comes up heads, then the expected utility is 0.5 U(\$1) + 0.5 U(0). (When I say U(0), that means the utility of however much money you already have; when I say U(\$1), that means the utility of that amount of money plus \$1.)

Usually we assume people are risk averse, meaning they prefer the certainty of 50 cents to the coin flip. That would be U(\$0.5). So you would express risk aversion by saying that U(0.5) > 0.5 U(1) + 0.5 U(0). A risk-neutral person doesn't care, as long as the lotteries have equal expected value, so he's got a different function U such that U(0.5) = 0.5 U(1) + 0.5 U(0).

But whether you've got risk aversion, risk neutrality, or something else, expected utility theory always assumes that only two things matter: (1) the utilities of the outcomes and (2) the probabilities. No matter how complicated a set of lotteries I give you, you always reduce it to the ultimate probabilities over the outcomes. For instance, consider the set of nested lotteries: Lottery A = [Heads you lose, Tails you get to participate in Lottery B]; Lottery B = [Heads you lose, Tails you win \$100].
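The flattening that expected utility theory performs on nested lotteries like these can be sketched in a few lines. This is my own illustration, not part of the post; the function name is arbitrary:

```python
# Flattening the nested lotteries from the example above.
# Lottery B: heads -> nothing, tails -> win $100.
# Lottery A: heads -> nothing, tails -> you get to play Lottery B.

def flatten(p_enter, p_win_inner):
    """Overall probability of the prize once the stages are multiplied out."""
    return p_enter * p_win_inner

p_win = flatten(0.5, 0.5)  # a fair coin at each stage
print(p_win)               # 0.25
```

Only that final number matters to the theory; the two-stage structure drops out entirely.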
Expected utility theory says you crunch the numbers and figure out that this is identical to a single lottery where you win \$100 with probability 0.25. Everything else is irrelevant.

Now consider the choice of Lottery A vs. Lottery B. Lottery A is the prize with probability 1/3. Lottery B is the prize with a probability that could be 0, 1/3, or 2/3. Whatever the true probability is (you can make assumptions where the ultimate probability is 1/3, for instance if each ball is black or white with a 50-50 probability — but it doesn't need to be that), ultimately you'll make some choice. Suppose it's A. Under expected utility theory, that can only be because you think red has a higher probability. If you think the probabilities are equal, then under expected utility theory, you must be indifferent between the two lotteries. If you choose B, under expected utility theory that can only be because you think white has a higher probability.

Now go on to Lottery C vs. Lottery D. If you chose A the first time around, that means you think P(R) > P(W). But then you have to have P(not R) < P(not W). That's just mathematically true because P(not R) = 1 - P(R). So you can't prefer C if you preferred A.

Nonetheless, most people chose both A and C. Mostly, they did so because the probability of R is a known 1/3, and the probability of not-R is a known 2/3, while the probabilities of W and not-W are kind of unknown. Note: This is not risk aversion, because the probabilities we're talking about aren't the probabilities of the ultimate prize. Rather, we're talking about the probabilities of what the probabilities are. This is called ambiguity aversion. Ambiguity aversion plays no role in expected utility theory, where only the ultimate probabilities (and the utility of the outcomes, which I've held constant here) count.

Therefore, in this setup, most people make choices inconsistent with expected utility theory. Is this good? Bad? Irrelevant?
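The bookkeeping behind that inconsistency is small enough to check mechanically. This sketch (mine, not the post's) runs through every probability you could coherently assign to drawing white:

```python
# 1 red ball plus 2 balls that are each white or black, so the number of
# white balls is 0, 1, or 2, and P(white) is 0/3, 1/3, or 2/3.
P_RED = 1 / 3

for whites in (0, 1, 2):
    p_white = whites / 3
    prefers_A = P_RED > p_white              # strictly prefer betting on red
    prefers_C = (1 - P_RED) > (1 - p_white)  # strictly prefer betting on not-red
    # Since P(not red) = 1 - P(red), preferring A rules out preferring C.
    assert not (prefers_A and prefers_C)
```

Whatever value P(white) takes, no single assignment makes both strict preferences rational at once.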
Does it illustrate the crooked timber of humanity? The uselessness of expected utility theory? Stay tuned.

Guest44 (mail): I think the problem set-up is a little weak and contributes to the A+C paradox. If you had used the wiki entry with 30+60 balls in each urn, and made it clear that the same urn is used in both lottery pairs, it would have been more clear. Also, you didn't give an "indifferent" option. 10.11.2006 10:59am

plunge (mail): I think I've convinced myself that the "bah, always picking A and C is rational because I'm indifferent to the question of how likely white is over black" response is, in fact, mistaken. Explanation in the old thread. 10.11.2006 10:59am

Sasha Volokh (mail) (www): I did originally read about the problem with 30 + 60 balls, but I simplified it to 1 + 2 because really it makes no difference. Some people find it easier with 30 times as many balls? Plus, I figured "indifferent" is always an option when I ask "which do you choose (and why)?", but maybe people have a tendency to read more into these things than I ask. Finally, I didn't redescribe the problem between the lottery choices. Of course the balls are the same! Nonetheless, if people are confused, people are confused. Who am I to judge. I'll make it clearer next time. 10.11.2006 11:03am

plunge (mail): I should note that in the Wikipedia version, the colors of the balls are different, and options C and D are the reverse of your telling. 10.11.2006 11:07am

plunge (mail): In case people comparing them were getting confused by that as well when they went to "look up" the answer. 10.11.2006 11:08am

AF: Apparently the "paradox" consists in the empirical fact that people tend to prefer choices A and C. As I was indifferent in both cases, I saw no paradox.
By the way, although 50/50 is not one of the given assumptions, isn't it the only reasonable assumption in the absence of additional information? If I tell you a coin can come up either heads or tails and ask you to choose one, shouldn't you be indifferent even if I don't tell you the odds are 50/50? 10.11.2006 11:24am

FantasiaWHT: I vote for illustrating the uselessness of utility theory :P As I said before, aside from the things it ignores that are listed above, it also ignores the fact that not everyone can calculate that the odds are the same, heh. 10.11.2006 11:32am

Tennessean (mail): Plunge: I wanted to follow up on the relationship between probability and information from the prior thread. Let's consider a situation where you are picking one ball out of a bag with a number of balls, all either red or black. Let's assume you know that there are the same number of red balls as there are black balls. Am I right in assuming that you would say that in that case, there is a 50% chance of getting a red ball when you pick one ball out?

Of course, in fact, you will pick one particular ball, and it will have one fixed color, so in truth, there is either a 0% chance you will pick a red ball or a 100% chance you will pick a red ball. Moreover, whether it is 0% or 100% is likely already fixed (depending upon your views of physics, etc.) even if unknowable well before you stick your arm in the bag. While we may be considering the important data here to be the proportion of red balls in the bag, the actual result is fixed not by that limited data, but instead by the state of the world.

I wonder whether we are confusing the "probability" tied to one, particular, actual draw with the expected measure of repeated instances of drawing? The example I set forth suggests to me that where we talk of what we know, we are talking statistics, and where we are talking about what we do not know, we are talking probability.
When we talk probability, words like "at random" mean "everything else being equal," and lionizing certain characteristics (e.g., the proportion of red balls) over others (e.g., the physical state of the world such that the draw of a particular black ball is truly certain) is anathema to probability. 10.11.2006 11:33am

Andrew Edwards (mail): Apparently the "paradox" consists in the empirical fact that people tend to prefer choices A and C. As I was indifferent in both cases, I saw no paradox.

The interesting thing about this sort of stuff is that even though people 'know the answer' and can work out that they should be indifferent, they STILL have preferences. I bet if you took a bunch of people who did the math and decided they should be indifferent, and then FORCED them to choose between options, ideally with actual money on the line, the vast majority would pick A and C. Our reason is quite strong, but our instincts are pretty powerful too. When we face brainteasers we have a trained response to 'reason it out' but when we face real-life situations we often don't. So 'ambiguity aversion' often really will describe the actual real-life behaviour of real people.

My boss in an old job had a PhD in psych, and part of his thesis work was proving something like this. Human beings have underlying heuristics we use that are still there even when we know the 'right' answer. He surveyed people about the gender of children. Essentially he gave them a couple who'd had a boy, then a girl, then a boy. He asked people to say whether the couple's next child would be a boy or a girl. Everyone knows that it's random. But when he forced people to pick one or the other, like 90% of them chose "girl", because of the heuristic of pattern repetition. 10.11.2006 11:33am

CB: Regardless of whether you go with the simplified three balls or the Wikipedia 60 balls, it really is very important that you make clear that the balls are being drawn from the same urn.
"Now I give you another choice between two lotteries" sounds like you're talking about a different set of balls, urns, etc. (I know that a "lottery" can mean the rules of the game, but I think colloquially most of us would think of those as two "bets," not two "lotteries.") When I read your problem, I had no idea that the two lotteries (err, I guess you'd say "two drawings" in your terminology?) were linked. By leaving that out, you basically made it impossible for readers to figure out the point of the post unless they were familiar with Ellsberg ex ante or went to the Wikipedia page. I would strongly suggest a clarifying edit, unless the point of the post is to leave readers confused until they read your explanation. 10.11.2006 11:33am

Bill Sommerfeld (www): I haven't seen an analysis of the cost of the thought processes required to develop certainty that a novel lottery B *really is* indistinguishable from a novel lottery A. Perhaps one way to model this is to have a second utility-of-uncertainty function, which is a function of the amount of unknown information in the situation which will affect the outcome. In lottery "A", the colors of the non-red balls do not matter; there are three possible outcomes to evaluate (red, non-red, non-red). In lottery "B", the colors of the non-red balls *do* matter, and there are four times as many possible outcomes to evaluate; this makes it a (computationally) more expensive bet. The greater the amount of uncertainty in a situation, the more likely it is that we will make a mistake evaluating it, and thus we have a bias towards simpler circumstances. 10.11.2006 11:37am

Sarah (mail) (www): As the first person to comment in the previous thread, let me indulge in a small bit of triumphalism for having openly admitted that my reasoning was, basically, that "not knowing for sure whether there weren't any white balls would drive me nuts." Yay me!
The only part of this that's interesting to me is that it would drive me nuts despite the fact that I can calculate (or in this case, mess up but have someone correct me on) the actual odds, and realize that they're still the same anyway. But considering I make all kinds of allowances for "things that would drive me crazy," including obviously less than useful things, like getting to class so early I have to wait for the other class to get out before I can get a seat (the fear of being late being great enough to make me be irrationally early, wasting study time and so forth), I'm forced to conclude that a lot of irrational things threaten to make me nuts, and I'm basically okay with altering my behavior to accommodate that. (The question is, are economists okay with a behavior theory that doesn't really model basic decision making in this situation, for which it seems on its face to be ideally suited...) 10.11.2006 11:37am

Sasha Volokh (mail) (www): I've posted an update. 10.11.2006 11:39am

plunge (mail): Tennessean, the issue in this case is that there is some process for determining the number of what sort of balls are there, but we don't know it. There are mechanisms in the world which can give us "random-enough" choices to determine the balls, but we don't know for sure if they were used. Hence, we cannot say anything about the probability that a given ball pulled from the sack will be black.

What I think most people confuse this with is guessing at random. If we don't know the probability of the ball in the bag being black, and it could be either black or white, we can still get to a "50% chance of getting it right" by flipping a coin to pick white or black. That coin flip will be correct 50% of the time no matter what the probability of the ball being black is.
That result is, of course, completely trivial, and that's why it's important to distinguish "I have a 50/50 shot of blindly guessing the right answer" from "there is a 50/50 chance of it being this particular answer." You often run into this issue with creationists making arguments about physical events in which they are unsure of the probability. They work this by creating binary options and then deciding that the probability of either one happening is 50/50. This, of course, is complete nonsense, especially in cases where the probability of certain events having happened is already known to be 1. :) 10.11.2006 11:42am

Oren Elrad (mail): This is an excellent application of Laplace's principle of insufficient reason. Without any more information (implicit or explicit) we ought to assume that all the possibilities (WW, WB, BW and BB) are equally likely. That gives results:

UA = 1/3
UB = (1/4)(2/3) + (1/2)(1/3) + (1/4)(0) = 1/3
UC = 2/3
UD = (1/4)(1/3) + (1/2)(2/3) + (1/4)(2/3) = 7/12

Of course, if you apply Laplace's principle of insufficient reason differently and assume that WW, BW and BB are equally likely with 1/3 apiece, then you come up with:

UD = (1/3)(1/3) + (1/3)(2/3) + (1/3)(1) = 2/3

As my statistics prof once remarked, Laplace's principle of insufficient reason is insufficient. 10.11.2006 11:49am

plunge (mail): "FantasiaWHT: I vote for illustrating the uselessness of utility theory :P As I said before, aside from the things it ignores that are listed above, it also ignores the fact that not everyone can calculate that the odds are the same, heh."

Again, I think a really important insight here is that we can't know anything about the probability of black vs. white. However, the key is that that doesn't mean we can't know anything about the probable outcome of betting on black and then betting on white!
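[As an editorial aside: Oren's Laplace-prior figures can be reproduced exactly with the sketch below (mine, not a commenter's); it uses the corrected arithmetic for UD that Sasha supplies later in the thread, and exact fractions to avoid rounding.]

```python
from fractions import Fraction as F

# Uniform (Laplace) prior over the four equally likely ways the two
# non-red balls can come out: WW, WB, BW, BB.
whites_per_case = [2, 1, 1, 0]     # number of white balls in each case
prior = F(1, 4)

U_A = F(1, 3)                                            # bet on red: fixed
U_B = sum(prior * F(w, 3) for w in whites_per_case)      # bet on white
U_C = F(2, 3)                                            # bet on not-red: fixed
U_D = sum(prior * F(3 - w, 3) for w in whites_per_case)  # bet on not-white

print(U_A, U_B, U_C, U_D)   # 1/3 1/3 2/3 2/3
```

Under the uniform prior the ambiguous bets B and D have exactly the same expected value as the unambiguous bets A and C.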
We may not have enough information to judge the expected outcomes of the B and D options of the individual trials, but we DO have enough information to judge the expected outcomes of the patterns of trials when they are linked! 10.11.2006 11:50am

AnonPerson (mail): Ambiguity aversion seems to be just another case of risk aversion. Obviously, if the probability distribution is unknown, using the exact same mathematical model is not possible. But I think that when you look at what is going on inside people's heads, it's the same thinking: Choice A has a certain outcome. Choice B has an uncertain outcome. In ambiguity aversion, it's just really, really uncertain. :-) You could, if you wish, simply add another layer to your model, and turn it back into risk aversion. Simply assume that people form some space of possible probabilities in their heads, and then implicitly create some probability function on that space. 10.11.2006 11:50am

Tennessean (mail): Plunge: I'm not sure I understand. Wouldn't it follow also that "the issue in this case is that there is some process for determining [exactly which color ball will be drawn on this particular draw], but we don't know it"? Regardless of the setup, I am going to draw a certain, fixed color ball on any one particular draw. Hence, as I wrote earlier, the "probability" in the most informed sense is either 0% or 100% of drawing a red ball, and in no way is the informed probability 50%. We can get to 50% only by restraining probability judgments to the world of incomplete information and acknowledging that, when talking probability, everything unspoken is considered irrelevant, no? (This last sentence is the key point, it seems to me.) So, if we really have no information on which to judge the Creationist's argument other than the listing of possibilities, I think the Creationist is right to say that the one is as likely as any other.
However, if we can start discussing the various possibilities in terms of "evidence," those probabilities change. 10.11.2006 11:55am

Sasha Volokh (mail) (www): Oren: I believe your math is off:

UD = (1/4)(1/3) + (1/2)(2/3) + (1/4)(2/3) = 7/12

should be

UD = (1/4)(1/3) + (1/2)(2/3) + (1/4)(3/3) = 2/3

10.11.2006 12:01pm

lucia (mail) (www): After answering on the last post, and saying I'd pick based on lower standard deviations, I did what I should have done: I bothered to actually calculate the standard deviations. If I didn't make a mistake, the standard deviation for A equals that for B. The SD for C equals the SD for D! And the equality holds for all higher-order moments too. (Wow!)

So, assuming I didn't screw up the calculation, I can't pick based on expected value, standard deviation, or higher-order moments. Given this, I look at something else: the possibility of cheating by my "opponent." If I pick A over B and C over D, the house can't cheat or favor. In real gambling, if you let one side cheat, those that do cheat are likely to cheat in their favor, not mine. That's why you have one person shuffle and another person cut the cards when playing poker. (This might all change for TV game shows if I suspect Monty opens the first door and is biased in some way -- possibly because concealing the best prize is good for ratings.) 10.11.2006 12:12pm

plunge (mail): "Wouldn't it follow also that 'the issue in this case is that there is some process for determining [exactly which color ball will be drawn on this particular draw], but we don't know it'?"

Indeed. That process could be completely random. Or it could be completely determined by something. We don't know.

"Regardless of the setup, I am going to draw a certain, fixed color ball on any one particular draw. Hence, as I wrote earlier, the "probability" in the most informed sense is either 0% or 100% of drawing a red ball, and in no way is the informed probability 50%.
We can get to 50% only by restraining probability judgments to the world of incomplete information and acknowledging that, when talking probability, everything unspoken is considered irrelevant, no?"

Right, but don't confuse the drawing of the ball with the selection of the balls! They are two different events which we can generally describe in two different ways. In the case of picking the balls, we might be saying that the picking is strictly and deliberately chosen (every time we get the chance to pick, they pick black), while the situation with drawing out a ball might be random in the sense that our procedure ends up with statistically random choices of balls over time.

"So, if we really have no information on which to judge the Creationist's argument other than the listing of possibilities, I think the Creationist is right to say that the one is as likely as any other. However, if we can start discussing the various possibilities in terms of "evidence", those probabilities change."

No, again, not in the way they are being used. In the creationist sense, the scheme is to break some sequence of events down into lots of binary branches with unknown probabilities and then keep multiplying together probabilities as if they were all independent. The end result is some astronomically unlikely value for some particular physical event. The problem is that, first of all, if we don't know that the probabilities are independent of each other, then we can't treat them as such: doing the math properly requires finding that out. Second of all, just because there are only two possibilities (one thing happens, or it doesn't) doesn't mean that the only options for the probability of the event are 0% and 100%, and in the absence of evidence we split the difference. When we speak of the probability of a binary event, we generally mean statistically, not fully deterministically (if everything is deterministic then the probability of EVERYTHING is either 1 or 0, never 50%).
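[Editorial aside: lucia's standard-deviation observation a few comments up can be checked directly. This sketch is mine; it treats each bet's payoff as 1 for a win and 0 for a loss, under the uniform prior over ball compositions used earlier in the thread.]

```python
from fractions import Fraction as F

PRIOR = F(1, 4)          # uniform over compositions WW, WB, BW, BB
WHITES = (2, 1, 1, 0)    # white-ball count in each composition

def p_win(bet):
    """Overall probability that the single drawn ball wins the given bet."""
    total = F(0)
    for w in WHITES:
        p = {"A": F(1, 3), "B": F(w, 3), "C": F(2, 3), "D": F(3 - w, 3)}[bet]
        total += PRIOR * p
    return total

# A 0/1 payoff's entire distribution is pinned down by its win probability,
# so equal win probabilities mean equal SDs and equal higher moments too.
assert p_win("A") == p_win("B") == F(1, 3)
assert p_win("C") == p_win("D") == F(2, 3)
```

Since each payoff is Bernoulli, matching win probabilities force every moment to match, which is exactly the "equality holds for all higher-order moments" result.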
It's either going to rain today or it isn't, but we have a 33% chance of rain. So we could be ANY percentage value of likelihood on a scale. Why have your default assumption be set to the midway point as opposed to anywhere else on the scale? Why have it set anywhere in the absence of information? Where does the math come from? The answer is that we don't know what sort of process we're dealing with, and we haven't averaged it out by testing it: so there is no math to do, neither statistically nor deterministically. And trying to pretend we have information, let alone a specific value that we are going to use in a straight-out probability equation to calculate a larger probability, is especially deceptive! We're taking a number we just made up with an arbitrary value and plugging it into an equation as if it were a real calculated probability. Garbage in, garbage out. 10.11.2006 12:18pm

MartinM: "Of course, if you apply Laplace's principle of insufficient reason differently and assume that WW, BW and BB are equally likely with 1/3 apiece..."

Then you'd be applying it incorrectly.

"then come up with UD = (1/3)(1/3) + (1/3)(2/3) + (1/3)(1) = 2/3"

In which case you'd be applying your incorrect reading of Laplace incorrectly, too. Were WW, BW and BB equally likely, they'd have probability 1/3 * 2/3 = 2/9 apiece, not 1/3. There's still one red ball in there. 10.11.2006 12:21pm

plunge (mail): Again, while the "what is the probability: unknown or 50/50" discussion is interesting, I keep having to insist that it's irrelevant to this problem. It makes no difference at all what probability you believe the undetermined factor to be. The only piece of information relevant is that the same balls are being used in both choices. If that one element is true, then strongly favoring the pattern of A/C is indeed a paradox from the perspective of expected value. Lots of people still seem to be trying to calculate expected value for each one of the four choices individually.
I maintain that that is simply impossible to do, but even if I'm wrong, that's irrelevant, because that's the wrong calculation in any case. 10.11.2006 12:22pm

MartinM: "The possibility of cheating by my 'opponent'"

Surely that's handled by the statement that the same set of balls is to be used in all lotteries? Sure, the house could lie about that, but that applies equally to A and C. 10.11.2006 12:24pm

byomtov (mail): This is correct with regard to risk aversion over final outcomes, but I believe there are other sorts of "aversion" that have been suggested as explanations for this. One is "regret aversion." You don't want to pick B, say, and then discover that there was no white ball and you never had a chance. It is much more acceptable to pick A, and then shrug off a loss with the idea that you only had a 1/3 chance anyway. Even if it turns out that both other balls were white, you probably would not feel too bad, because you still had a 1/3 chance. 10.11.2006 12:27pm

plunge (mail): Tennessean, put it another way: let's say I devise a very long and involved process of mixed probabilities that will determine whether or not a green light turns on. It does turn on. Can you judge how probable this occurrence was? How would you go about doing that, without examining the process itself?

Say then that we could wager over whether or not it has a probability of 50%. If you complained that you had no information and flipped a coin to determine what side you'd bet, you'd have a 50% chance of winning the bet. But... we could also do a similar bet over whether or not it's 33% likely. Again, if you flipped a coin, you'd get a 50% chance of being correct. All the 50% is telling you is that you have a 50% chance of randomly guessing whether some percentage is the correct probability, which is just a trivial way of saying that there are two options.
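[Editorial aside: this coin-flip point can be illustrated with a quick simulation. The sketch is mine; the hidden probability is an arbitrary value the guesser never sees.]

```python
import random

random.seed(0)

# Hidden process: the ball is black with some probability we never learn.
HIDDEN_P_BLACK = 0.9   # arbitrary assumption; the guesser doesn't know this

wins = 0
trials = 100_000
for _ in range(trials):
    ball = "black" if random.random() < HIDDEN_P_BLACK else "white"
    guess = random.choice(["black", "white"])   # blind coin-flip guess
    wins += (guess == ball)

# The blind guess is right about half the time no matter what
# HIDDEN_P_BLACK is, which tells us nothing about that hidden value.
print(wins / trials)
```

Changing `HIDDEN_P_BLACK` to any other value leaves the hit rate near 50%, which is the triviality being pointed out.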
It isn't telling us anything about the probability of them happening, just that randomly guessing ANY given figure for the probability will be right half of the time. 10.11.2006 12:38pm

plunge (mail): byomtov: "One is 'regret aversion.' You don't want to pick B, say, and then discover that there was no white ball and you never had a chance."

This is a perfectly sound rationale that holds up if and only if there is only one round. But in the question asked, there are two rounds, and if you are averse to regret then you have the perfect solution to your problem! 10.11.2006 12:40pm

guest: There is a paradox here: why have I read, and now posted, about something that I am very indifferent about, but also consider to be sort of a stupid pseudo-paradox? Isn't there some Tom Brady question that the VC can post about? 10.11.2006 12:40pm

plunge (mail): Sorry, I didn't phrase this correctly: "It isn't telling us anything about the probability of them happening, just that randomly guessing ANY given figure for the probability will be right half of the time." That should read: "It isn't telling us anything about the probability of them happening, just that if there is a bet on the correctness of ANY given figure for the possible probability, you have a 50% chance of winning the bet if you pick randomly." 10.11.2006 12:43pm

KevinM: We've omitted another potentially dispositive issue: What is your favorite color? (And if not red, do you nevertheless prefer red to uncertainty?) 10.11.2006 12:45pm

Siona Sthrunch (mail): As I patiently explained on the first thread about this, there is no paradox here because it is not paradoxical for the offeror's act of proposing the second lottery to change the chooser's degree of belief as to the actual colors of the balls.
That the balls are the same in the two lotteries is irrelevant: the information available to the chooser in the second lottery is different, substantively, from that available to the chooser in the first lottery. The chooser in the first lottery does not know that the offeror will offer the second lottery; the chooser in the second lottery does. This new information can change the prior probabilities from the chooser's perspective. In conclusion: no paradox.

Here is a very similar example. Suppose offeror shows chooser a box with either a gold bar or a lump of coal concealed inside. Offeror offers to sell the box to chooser for \$50,000. Chooser forms some probability distribution that the box has bar or coal, say P(bar)=p and P(lump)=q. Now offeror offers to sell the same box to the same chooser for a quarter. Based on this offer, chooser can, without paradox, update his belief in the probabilities that a bar or lump will be in the box. This updating might in turn lead to a different decision under utility theory, but that is not necessary for this particular example. In conclusion, you have failed to consider my analysis in the former thread. 10.11.2006 12:53pm

Rich B. (mail): "My daddy told me once, if a man comes up to you with a brand new unopened deck of cards and says 'I'll bet you \$20 I can make the Jack of spades jump out of this deck and spit water in your ear'... do not bet this man. For as soon as you do, you will be \$20 poorer and have water in your ear." --Sky Masterson, Guys and Dolls

Choices B and D give you a higher probability of having water in your ear. 10.11.2006 12:55pm

plunge (mail): "As I patiently explained on the first thread about this, there is no paradox here because it is not paradoxical for the offeror's act of proposing the second lottery to change the chooser's degree of belief as to the actual colors of the balls."

That's not part of the question and it's anyway irrelevant.
It doesn't matter what the probability of the balls turns out to be or what you suspect it to be based on whatever information you convince yourself you have. People try this same move with the Monty Hall dilemma: trying to pretend that the situation is presented as unfolding with uncertainty about what will happen. That's not the case. Monty ALWAYS opens a door with a goat on the other side, no matter what you pick. You ALWAYS get to express your preferences in both lotteries, and we don't even have to consider actually holding the lottery at all to see the paradox in most people's choices. 10.11.2006 1:00pm

Sasha Volokh (mail) (www): Guest: Who's Tom Brady? 10.11.2006 1:08pm

notamathematician: "Rather, we're talking about the probabilities of what the probabilities are. This is called ambiguity aversion. Ambiguity aversion plays no role in expected utility theory, where only the ultimate probabilities (and the utility of the outcomes, which I've held constant here) count."

Excellent!! I was bored until I read that. There was a point after all! 10.11.2006 1:22pm

guest: Tom Brady is the quarterback for the New England Patriots. Someone (Kopel or Lindgren?) posted a link to a fantasy question about Brady a few weeks back. 10.11.2006 1:25pm

Jake (Guest): Could we summarize this as follows?

Question 1: is X > Y?
Question 2: is not-X > not-Y?

If your answer to the two questions is the same, you have a problem. I'm not clear on how this is analytically distinct from risk aversion. People would rather buy a ticket whose expected value is known than a ticket whose expected value varies within some set range. 10.11.2006 1:27pm

JerryW (mail): Andrew Edwards wrote: He surveyed people about the gender of children. Essentially he gave them a couple who'd had a boy, then a girl, then a boy. He asked people to say whether the couple's next child would be a boy or a girl. Everyone knows that it's random.
But when he forced people to pick one or the other, something like 90% of them chose "girl", because of the heuristic of pattern repetition. As an aside, and of no relevance to the paradox, there are 108 male babies born for every 100 female babies. So that is the way to bet. 10.11.2006 1:33pm (link) Daniel K: There is something here that either I or others are missing. According to the setup as I read it, I have to choose between A and B before I even know that there will be a second lottery. In that case, A and C make perfect sense, if we DON'T assume that the probability of black vs. white is exactly 50%. If we assume that the distribution of black vs. white is determined in a completely unknown way, then there is at least some possibility that the lottery will be stacked against the player. Remember, when I make the A vs. B decision, I have no idea that there will be a second lottery. Of course, there is also some possibility that the lottery will be stacked in favor of the player, but I think assuming that the house will prefer not to give away money if given the choice is a good one. (Of course, it may not have the choice--we have NO idea how the balls were selected.) In this case, Option A makes more sense, since the house cannot cheat. Now I am offered the second choice, having already selected A for my first choice. (I am assuming that you are waiting till the end to actually run the lotteries--if I knew what I drew the first time, that would throw everything off.) Well, my reasoning remains as above. It is possible that the house has stacked things in its favor, and less likely that it has stacked things against it. Moreover, I selected A the first time, and I suspect that the house knows that that was likely. Then the most logical thing for me to do is select C. Why? Because the house, knowing that most people would (logically) pick A the first time, might stack the lottery with white balls.
While I didn't know there would be two lotteries, the house did, and I have to assume there is some probability that they acted accordingly. In that case, picking C means that I can't be cheated by the house. The situation changes entirely if I answer knowing that I will be participating in both lotteries. In that case, as has been pointed out, there are four possibilities: RWW, RWB, RBW, and RBB. In the first three options, picking A & C or B & D is equivalent: one lottery with a 1/3 chance of winning, and one with a 2/3 chance. However, in the fourth option, you are guaranteed to win one lottery and lose the other. For this reason, with full knowledge of both lottery setups, I would choose B & D. (I'll deal with A & D or B & C later.) But, you tell me, the expected values are the same in all cases. True, but by picking B and D, I increase the chance that I will win one lottery, at the expense of reducing the chance that I will win two lotteries. I like to win. Winning twice is better than winning once, but, for me, losing twice would be a let-down. Therefore, I've chosen the options that reduce my chances of losing twice. Note that this reasoning applies whatever the house does--they can't cheat me. At worst, they can avoid the RBB mix, and my probabilities become the same as if I had picked A and C. Choosing A and D or B and C gives significantly different probabilities depending on the distribution of balls. I consider choosing either of those combinations to be riskier, since there is the possibility that the house might out-think me. I've avoided them. 10.11.2006 1:37pm (link) Doug Sundseth (mail): Siona is correct. In the first lottery, we are offered two choices. There is no reason to believe that the offeror of the lottery is not a utility maximizer. It is a reasonable assumption that minimizing the amount of money paid out maximizes the utility of the offeror.
Thus, choosing A is rational, since this is a zero-sum game in which any utility increase for the offeror is a utility decrease for me. (Once the offer is made, the \$1 is a sunk cost.) (We will neglect that the first lottery would provide information about the contents of the bag at this point, because that changes the analysis to be fundamentally less interesting.) When the second lottery is offered, the information space changes. I can still reasonably assume that the offeror is attempting to maximize utility, but the question of how to do that has become more complex. I can further assume that the offeror is more experienced in understanding maneuvers in this complex information space, since he clearly has some motive for these strange offers. Since the offeror thus has a better understanding of the space, I should choose to reduce ambiguity, to reduce the advantage of understanding possessed by the offeror. Note that this is pretty directly analogous to suggestions that naive or uninvolved investors buy mutual funds rather than individual stocks. Such an investor is at an information disadvantage to the serious market player, so reducing ambiguity is entirely rational. 10.11.2006 1:44pm (link) SeaDrive (mail): Also, as a matter of no relevance to the paradox, an individual man may have a higher probability of begetting males than females, or vice versa. A Bayesian will expect another boy. 10.11.2006 1:45pm (link) Rue Des Quatre Vents (mail): I wonder how ambiguity aversion plays out in Rawls' veil of ignorance. Harsanyi and other utilitarians asserted that we have no reason to think that we have an equal probability of being any person, once the veil is lifted. Rawls, because his "difference principle" hung in the balance, claimed otherwise. 10.11.2006 2:20pm (link) Graham Simms: The Ellsberg Paradox is not a paradox if the expected utility function is inversely related to the standard deviation of the expected probability distribution.
Thus, the expected utility is higher the lower the standard deviation of the expected probability distribution. 10.11.2006 2:24pm (link) roy (mail) (www): Utility theory is a tricky tool anyway. It only works if the values of the prizes are calculated right. That's a darned hard restriction to meet, even with money, because marginal value changes. \$0.50 and \$1 are so close to zero that my point won't make sense, but consider \$500M and \$1B. Is \$1B really worth exactly twice \$500M? Not to me. Getting \$500M would mean I quit my job, move to paradise, and never worry about money again. I'll have a life of luxurious leisure, and only do hard work when I feel like it. Getting another \$500M would be cool, but not as big a transformation. I can't quit my job again, I can't move to paradise again, and I can't worry about money any less. I could get a bigger estate, but I can only be in one room at a time. I could get a faster private jet, but there are only so many hours in a day to save by going faster. So \$1B is worth less than twice as much as \$500M, not in financial terms (obviously) but in terms of how freaking awesome it would be. Choosing a guaranteed \$500M over a 50% chance at \$1B would be entirely rational, not some psychological quirk. 10.11.2006 2:29pm (link) lucia (mail) (www): MartinM: Surely that's handled by the statement that the same set of balls is to be used in all lotteries? Sure, the house could lie about that, but that applies equally to A and C. I don't think the statement that the same balls are used automatically prevents the house from having a preferred cheating strategy. Say the house "cheats" by being biased in their favor: Case 1: They pick RBB. My choices: A: I have a 33% chance of winning. (1 red / 3) B: I have a 0% chance of winning. (0 white / 3) C: I have a 67% chance of winning. (2 not red / 3) D: I have a 100% chance of winning. (3 not white / 3) Case 2: They pick RWW. A: I have a 33% chance of winning.
(1 red / 3) B: I have a 67% chance of winning. (2 white / 3) C: I have a 67% chance of winning. (2 not red / 3) D: I have a 0% chance of winning. (0 not white / 3) I'm not sure about the time sequence of this game and the sequence in which information is revealed--but I'm assuming I don't know what ball I drew in game A/B. Anyway, if I'm told I get to play both games and the balls will be the same in both, and I'm required to pick my strategy for both before drawing, I pick A/C over A/D. (Assuming this is real gambling.) Why do I pick this? Because, to first order, my opponent's strategy for cheating is to favor RWW over RBB. RWW is better for them if I flip a coin to pick. RWW is better if I systematically pick A/D rather than A/C. RWW is no worse for them if I pick A/C. The only way they'd be worse off always picking RWW is if my strategy is to always pick B/C. To pick that consistently, I'm gambling that they do pick RWW over RBB to "bias" things in their favor should people pick based on coin flips or on first-order reasoning. However, at this point, if I am certain they "cheat" and somehow know my reasoning, the whole game becomes more complicated, because now they know I pick B/C because they know I think they cheat. So, they counter-cheat! Once everyone is thinking of the biases in picking white vs. black, the whole game gets too complicated to analyze in the time frame of gambling, and I'm better off just picking A/C. (Of course, I'll likely have overlooked something. But I am going for A/C right now!) 10.11.2006 2:33pm (link) Sasha Volokh (mail) (www): Well, ask one of these questions and you find out people have all sorts of concerns you didn't want them to have. I'm not offering any lotteries. I don't have any prizes. I didn't set up the balls. All I'm asking is your preference over hypothetical lotteries. Even if I were offering the lotteries, it's not clear what my motivation is. I didn't say I was going to let you play the lottery of your choice for free.
Maybe this is just a set of exploratory questions to figure out how much to charge for the lottery tickets. Jake (guest): Here's how it differs from risk aversion. I didn't give any probability distribution for white vs. black, but let me add some facts and come out and tell you that each ball is white or black with 50-50 probability. (I'll ignore the whole debate about whether it's reasonable to assume 50-50 even without this fact.) If it's 50-50, then the probability of white is actually 1/3. A and B (same with C and D) actually have the same expected value, the same variance, the same everything. They're identical in all respects that are relevant to expected utility theory. (Expected utility theory, as I said, simply excludes everything that's not the probability and the utility of the outcome.) Even then -- even when I've explicitly made the white vs. black probability 50-50 -- most people will still choose A and C. Risk aversion can't account for that, because A and B are both "prize with 1/3 probability" lotteries, and C and D are both "prize with 2/3 probability" lotteries. The reason they do so is that I've expressed the red probability as exactly 1/3, while I've expressed the white probability as being 0 with probability 1/4, 1/3 with probability 1/2, and 2/3 with probability 1/4. This is just a sort of framing effect. 10.11.2006 2:35pm (link) notamathematician: Good point, Graham Simms! 10.11.2006 2:40pm (link) Daniel K: Well, then, let me ask explicitly: Should we assume that we know we will be offered the C vs. D choice after the A vs. B one? Or do we have to answer A vs. B with no idea whether there will be another lottery? 10.11.2006 2:50pm (link) Sasha Volokh (mail) (www): How about if I offer you everything simultaneously, and only ask your preference on A vs. B and on C vs. D? But provided I'm not the person running the show, and I'm only asking you questions -- and not letting you actually play or get anything -- I'm not sure how that matters.
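Sasha's 50-50 arithmetic can be checked by brute enumeration. A minimal sketch in Python (the composition weights 1/4, 1/2, 1/4 fall out of treating each non-red ball as an independent 50-50 coin):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Each of the two non-red balls is independently white or black (50-50).
# Enumerate the four resulting compositions of the urn and average the
# single-draw win probabilities for the red lottery (A) and white lottery (B).
p_red_win = Fraction(0)
p_white_win = Fraction(0)
for b1 in ("W", "B"):
    for b2 in ("W", "B"):
        balls = ["R", b1, b2]
        weight = half * half  # probability of this composition: 1/4
        p_red_win += weight * Fraction(balls.count("R"), 3)
        p_white_win += weight * Fraction(balls.count("W"), 3)

print(p_red_win, p_white_win)  # 1/3 1/3
```

The white probability is 0 with weight 1/4 (RBB), 1/3 with weight 1/2 (RWB, RBW), and 2/3 with weight 1/4 (RWW), exactly as stated above, and it averages to the same 1/3 as red.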
10.11.2006 2:57pm (link) Duffy Pratt (mail): Last I checked, a paradox is a statement or set of statements which are false if true, and true if false. I can't figure out why anyone would think this is a paradox. All it means is that people have preferences that are based on factors other than utility theory. Well, duh! A paradox is like the following. A lawyer agrees to train others to practice law, and will take his fee when the student wins his first case. A student goes through the program and then never practices. The lawyer is upset that he hasn't gotten his teaching fee, so he sues the student. He argues that if the student loses, he should get the fee. And that if the student wins, he will have won his first case and should still get the fee. The student argues that if he wins, he won't have to pay, and that if he loses, he still will not have won his first case, so he should not have to pay. What outcome? Anyone care to show how anything in the Ellsberg Paradox actually is a paradox? I'm afraid that this paradox is to paradoxes what the social sciences are to science. 10.11.2006 2:58pm (link) Sasha Volokh (mail) (www): Duffy: As I said in my original post, this isn't really a paradox at all. It's just an illustration of an interesting aspect of how people make choices. In this way it's similar to other expected utility "paradoxes" like the Allais Paradox or the St. Petersburg Paradox. It's only called a paradox colloquially. 10.11.2006 3:00pm (link) IB Bill (mail) (www): Sasha: Your second update reminded me of something Samuel Beckett once reportedly said. Asked "Who is Godot?" he said, "I don't know. He's not in the play." 10.11.2006 3:05pm (link) Sasha Volokh (mail) (www): Indeed. 10.11.2006 3:06pm (link) roy (mail) (www): The discussion might go smoother if we just renamed the "Ellsberg Paradox" the "Ellsberg Kookiness". 10.11.2006 3:13pm (link) Sasha Volokh (mail) (www): Zanity. 10.11.2006 3:17pm (link) Stephen C.
Carlson (www): They're identical in all respects that are relevant to expected utility theory. (Expected utility theory, as I said, simply excludes everything that's not the probability and the utility of the outcome.) It looks like expected utility theory is not very good at predicting how people decide when they reason that the probability and the utility of the outcome are the same. 10.11.2006 3:46pm (link) lucia (mail) (www): Sasha: No actual lottery?! Dang! As to the 50-50 probability issue: I thought we were supposed to consider the possibility that the probability of white vs. black wasn't 50-50, but we just didn't know how the probabilities might be set? But yes -- in the case where we state the probability of black vs. white is 50-50, which I'll call a "fair game", I still pick A/C. This is not based on utility theory, which says A and B are equally good and C and D are equally good. It's not based on risk aversion, because -- unless I'm mistaken -- in a "fair game" the standard deviation of the prizes and all higher moments are the same. So the risk is absolutely, positively the same. So the question becomes: why do I, and evidently many others, pick A/C even after we figure out it has no advantage based on utility or risk aversion? All I can think is that, deep down, I, and others, don't actually believe there can't be any "cheating" by the other side. Or maybe we like to behave in ways that let others know they will need to think a while before "cheating" us. In life that has utility, even if it means nothing in terms of portfolio theory! 10.11.2006 3:57pm (link) Attorney SF (mail): Sasha, Ben from IHS Warmonger -- I can't believe you were stumped by the Tom Brady question. I should have stuck around for Trivial Pursuit.
On a slightly more relevant note, I'm always surprised by how many people struggle to grasp the basic point being made by abstract questions and latch on to seemingly irrelevant issues (e.g., "who is running the lottery?"). Methinks this is somewhat related to the claims of "cultural bias" in standardized testing -- your thoughts? And feel free to respond somewhere on my blog: the-parallax-view.blogspot.com 10.11.2006 3:59pm (link) Daniel K: "How about if I offer you everything simultaneously, and only ask your preference on A vs. B and on C vs. D." In that case, for the reasoning I posted above, I definitely choose B and D. The reason is that in three of the cases, I can expect identical results: a 1/3 lottery and a 2/3 lottery; and in the fourth case, I can be assured of winning one and losing one. Choosing A and C gives me four identical options: a 1/3 lottery and a 2/3 lottery. By choosing B and D, I minimize the probability that I will win nothing, while not changing my average expected winnings. Choosing A and D or B and C would, I believe, also increase the possibility that I would win both or neither lottery, while not changing the expected winnings, assuming even probabilities on black and white balls. 10.11.2006 4:02pm (link) Hoya: With respect to the Rawls stuff posted earlier: it switches Harsanyi's and Rawls's takes on the matter. Separately, I'm puzzled. If I am told that a ball in a box could be white or black, am I supposed to assign a 50% probability to its being white and 50% to its being black? Otherwise, you can't calculate the probabilities in this case. But I don't see what the warrant is for assigning those probabilities. Without a warrant for assigning the probabilities there, I can see why one would go with what one does have knowledge of: that there is a 1/3 chance that the red ball will be chosen. The chance of getting a white ball is inscrutable, as is the chance of getting the black ball.
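Daniel K's argument for B and D a few comments up can also be checked by enumeration. The sketch below (Python) makes two assumptions that the thread leaves open: each non-red ball is independently 50-50, and each lottery gets its own independent draw from the same urn. Under that reading, A & C and B & D have the same expected number of wins, but B & D has a smaller chance of losing both:

```python
from fractions import Fraction
from itertools import product

F = Fraction

def win_prob(balls, lottery):
    # Probability that a single uniform draw from the urn wins this lottery.
    return F(sum(lottery(b) for b in balls), len(balls))

# The four equally likely urn compositions when each non-red ball is 50-50.
compositions = [("R", b1, b2) for b1, b2 in product("WB", repeat=2)]

def summarize(lot1, lot2):
    # Average expected wins and P(zero wins) over compositions,
    # with an independent draw for each of the two lotteries.
    exp_wins = F(0)
    p_no_win = F(0)
    for balls in compositions:
        p1, p2 = win_prob(balls, lot1), win_prob(balls, lot2)
        exp_wins += F(1, 4) * (p1 + p2)
        p_no_win += F(1, 4) * (1 - p1) * (1 - p2)
    return exp_wins, p_no_win

A = lambda b: b == "R"   # win on red
B = lambda b: b == "W"   # win on white
C = lambda b: b != "R"   # win on not-red
D = lambda b: b != "W"   # win on not-white

print(summarize(A, C))  # (Fraction(1, 1), Fraction(2, 9))
print(summarize(B, D))  # (Fraction(1, 1), Fraction(1, 6))
```

Both pairs average one win, but B & D loses both lotteries with probability 1/6 rather than 2/9, because in the RBB composition D always wins. If instead a single draw settles both lotteries, A & C and B & D each guarantee exactly one win, so the comparison collapses.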
10.11.2006 4:05pm (link) lucia (mail) (www): "I'm always surprised by how many people struggle to grasp the basic point being made by abstract questions and latch on to -- seemingly -- irrelevant issues (e.g., 'who is running the lottery?')" Out of curiosity, did anyone actually think anyone was running an actual lottery? I'm not convinced many failed to recognize this thought experiment was being used to discuss an abstract point! 10.11.2006 4:54pm (link) Duffy Pratt (mail): OK, so no one is claiming it's a paradox. It's just called a paradox. Now, explain to me why it's the least bit interesting. Let me give you another lottery choice. If heads comes up you win; if tails comes up you lose. In lottery A, you use a dime. In lottery B, you use a quarter. Which do you prefer? I'm gonna call this one the Duffy non-Paradox. I bet you some people have preferences about dimes or quarters when flipping a coin. Why they prefer the one over the other will have nothing to do with their chance of winning. It may even be that a majority of people prefer to flip quarters. But you have to say more than this before you could convince me that it's interesting. 10.11.2006 5:55pm (link) plunge (mail): "The chance of getting a white ball is inscrutable, as is the chance of getting the black ball." But, and this is important, while you can't know the probability of black and white, you DO know that the probabilities are direct inverses of each other. In short, you can be VERY certain what the combined average probable payout of the two different B and D trials will be. 10.11.2006 6:41pm (link) Jim Carlile (mail): Maybe "paradoxical" is the best description of the selection processes -- and explanations -- that attacked this question. I think that was the natural point of it all. And it was fun. 10.16.2006 7:48am