There are three balls. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries.

Lottery A: You win a prize if we draw a red ball.

Lottery B: You win a prize if we draw a white ball.

Which lottery do you choose?

Now I give you another choice between two lotteries.

Lottery C: You win a prize if we draw a ball that's not red.

Lottery D: You win a prize if we draw a ball that's not white.

Which lottery do you choose?

Post your answers, plus any reasoning, in the comments. If you're already familiar with the Ellsberg paradox, you can just watch. Explanations to come later.

UPDATE: Glad this is getting so many comments. Just a few comments of my own:

(1) Many people are assuming that each of the two balls is white or black with a 50-50 probability. Maybe, maybe not. Just keep in mind that it's not part of the assumptions.

(2) Just in case you reject the problem because you don't know the probabilities of white vs. black (though you shouldn't), you can answer the question assuming there's a 50-50 probability. Then, just for fun, answer the problem again where white has a 49% chance.

(3) Also, some people are wondering about the motivations of the "house," i.e., whether it wants you to win or lose. Think what you like about the motivations of the house, but keep in mind that the colors of the balls (however determined) are the same in Part 1 and Part 2.

(4) Some of you are wondering what's the "paradox." I'll explain soon (or you can just look it up on Wikipedia). It may not be right to call it a paradox; perhaps it's just an illustration of an interesting aspect of how people make choices.

I'd choose lottery C, since there are always two balls that aren't red, and thus a constant 2/3 chance. I might have a shot at 3/3 odds in Lottery D, if both non-red balls turn out to be black, but the possibility of dropping to 1/3 odds (if both turn out to be white) would drive me nuts, and on average D still seems to work out to the same 2/3 chance, given the white/black split on the other two balls.

(I'm sure I've missed something, probably something huge, but since it's given me an excuse to not do my Russian homework for a few minutes, I don't care.)

Are we assuming that only one ball is drawn?

I choose lottery A. A gives me a 1:3 chance of winning.

B could either give me a 0:3, 1:3, or 2:3 chance of winning, because those last two balls could be BB, BW/WB, or WW, respectively.

I suppose technically overall the odds are the same for each lottery, but I'd rather have a sure chance.

In the second I'd choose lottery D. D would either give me 1:3, 2:3 or 3:3, if the other two balls are WW, WB/BW, or BB, respectively.

C gives me a 2:3 chance of winning.

Second question: C. Again, the chances of a non-red ball are 2/3, whereas if the two other balls are white, there's only a 1/3 chance of drawing non-white. Obviously, those odds change with the colours of the other two balls, but we don't know from the hypo.

Silly, I suppose.

But... I'm sure we're missing something, or this wouldn't be up here :) Trying to think what it could be... what's the paradox?

Lottery B: 2/3 * 1/2 = 1/3

Lottery C: 2/3

Lottery D: 1/3 + (2/3 × 1/2) = 1/3 + 1/3 = 2/3

The assumption here, of course, is that white and black have a random, 50% chance of occurrence. Assuming that the lottery is run by someone who wants others to lose, always go with the red ball.
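The arithmetic above is easy to sanity-check, assuming (as this commenter does, and as the problem itself never grants) that each non-red ball is independently white or black with probability 1/2:

```python
from fractions import Fraction

# The four equally likely colourings of the two unknown balls under the
# assumed independent 50-50 rule (not part of the puzzle's statement).
compositions = ["WW", "WB", "BW", "BB"]

# P(win B) = P(draw white); P(win D) = P(draw non-white).
p_B = sum(Fraction(1, 4) * Fraction(c.count("W"), 3) for c in compositions)
p_D = sum(Fraction(1, 4) * Fraction(1 + c.count("B"), 3) for c in compositions)

print(p_B)  # 1/3
print(p_D)  # 2/3
```

This matches the comment's totals: under the 50-50 assumption, B comes to 1/3 (same as A) and D to 2/3 (same as C).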

Otherwise I think there is a strong reason to believe that given the choice between lotteries A and B, the person running the game doesn't actually have ANY white balls in the mix. What motivation would there be to include those white balls? So when you include the motivation of the game controller, the paradox doesn't seem like much of a paradox at all.

C on the second one, but again it doesn't seem to matter. D is a sure winner if the other balls are black and a 1/3 winner if the other balls are red. C is 2/3 both times. On average it'd seem there's a 2/3 chance of winning either lottery.

Of course I'm not particularly good at statistics so the above might be completely off.

I would pick Lottery A because there's a 1/3 chance that I'll get something, while with Lottery B it could be 2/3 - but it could also be 1/3 (no difference) or 0/3 (far worse). Lottery A gives me a consistent chance to get something.

I would choose Lottery C for effectively the same reason. I've got a 2/3 chance with Lottery C, and while I might get a 3/3 chance with Lottery D, I might also get 2/3 (no difference) or a 1/3 (worse).

Color me risk averse. :)

RWW

RWB

RBW

RBB

The probability of drawing any colour is 4/12. That means there is no difference between lotteries A and B, or between lotteries C and D.
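The slot-counting argument above can be checked mechanically. It relies on the assumption that the four colourings are equally likely (and each ball within a colouring equally likely to be drawn), which the puzzle doesn't actually promise:

```python
# The four assumed-equally-likely colourings, three ball-slots each.
compositions = ["RWW", "RWB", "RBW", "RBB"]
slots = "".join(compositions)  # 12 equally weighted ball-slots in total

# Each colour occupies 4 of the 12 slots, i.e. probability 1/3 each.
print(slots.count("R"), slots.count("W"), slots.count("B"))  # 4 4 4
```

So under that assumption A and B (and likewise C and D) are indeed indistinguishable in probability.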

How is it a paradox, though?

In the long run with random mixes (equal chances) of black and white balls, both A & B and also C & D should have equal returns.

What some of you seem to be making a mistake on, though (correct me if I'm wrong), is that it is somehow a "sure thing" or a "safe bet" to take Lottery A over Lottery B, and Lottery C over Lottery D. The odds for Lottery A are 1/3, and the odds of Lottery B are ALSO 1/3. The odds of Lottery C are 2/3, and the odds of Lottery D are ALSO 2/3. It does not matter in Lotteries B and D that you can determine three different scenarios of ball combinations: the odds of winning these lotteries are exactly the same as A and C.

But, since we don't know the distribution of black and white balls, we can't estimate the probability at all. So, I don't see any good reason to choose B or D when you can choose A and C and know what your chances are. So, I don't think it's inconsistent; you can't really have an estimate. You just have an unknown in each case. Why not avoid it when it doesn't cost you?

Perhaps if the prizes for A and C were smaller than those for B and D it would show something interesting.

I once took an exam with a two part question, both of which were affected by a binary fact that I didn't know. I answered them inconsistently (knowing I'd get one part wrong) to avoid getting both parts wrong. I think that was rational.

Of course, my friend Haym faced the same situation, guessed, and got them both right.

PA=1/3

0 <= PB <= 2/3

PC=2/3

1/3 <= PD <= 1

with <PA> = <PB> = 1/3, <PC> = <PD> = 2/3
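These bounds fall straight out of enumerating the number of white balls; a small sketch, with no prior over that number assumed:

```python
from fractions import Fraction

# w = number of white balls among the two unknown balls (0, 1, or 2).
# These are exact win probabilities as a function of w; no prior needed.
for w in range(3):
    p_A = Fraction(1, 3)      # red
    p_B = Fraction(w, 3)      # white
    p_C = Fraction(2, 3)      # not red
    p_D = 1 - p_B             # not white
    print(w, p_A, p_B, p_C, p_D)
```

Running this shows PB sweeping 0 to 2/3 and PD sweeping 1 down to 1/3, while PA and PC stay fixed, exactly the ranges listed above.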

I'll just point out that there are at least two pieces of information that would rationally affect my preference among scenarios with equal expected outcomes (which is the supposed paradox):

1) how do I perceive the prize, and 2) how do I perceive you (or whoever is running the lottery).

1) Is it a prize I really, really want? "Ambiguity aversion" describes it well: in the absence of further information, all expectation values are the same, but if I'm starving and the prize is a sandwich, I might prefer the less ambiguous course.

2) Are you fair? Unfair? Generous? Stingy? Perverse? Do you know what balls are in the jar, or care whether I win or lose?

He flipped a coin.

So I pick A, prize on a red ball, because red is pretty. And I pick D, prize on a non-white ball, because I'm afraid I'll get called a racist if I don't.

But, I would have a mild preference for A and a strong preference for C. I like the "sure thing" in A. And I don't like the possibility of 2 whites in D. Silly reasoning of course since it doesn't matter.

I'll assume that there's an equal chance of getting white balls and black balls in my lottery. Even if there is no guarantee that the white balls and black balls will be drawn randomly from a set containing an equal number of both, I figure there is still an even chance of white and black balls in my lottery so long as the relative percentages in the set from which they're drawn is left completely to chance.

I can't see a difference between lotteries A and B. If I take Lottery A, I have a 1/3 chance of winning. As for Lottery B, I have a 1/3 chance of winning if there is one of each, or a 2/3 chance of winning if there are two whites. Assuming the chances of two whites, two blacks, or one of each is equal, my chances of getting each combination are as follows: (a) I have a 1/3 chance of two whites, which means a 1/3 chance of a 2/3 chance of winning [1/3 chance of combination x 2/3 chance of winning = 2/9 chance of getting this combination and winning]; (b) a 1/3 chance of two blacks, which means a 1/3 chance of a 0/3 chance of winning [0/9]; and (c) a 1/3 chance of getting one of each, which combination yields a 1/3 chance of winning [1/9]. Combined [2/9 + 0/9 + 1/9], that's a 1/3 chance of winning Lottery B - same as Lottery A.

As for lotteries C and D, I again don't see a difference. With Lottery C, I clearly have a 2/3 chance of winning. In Lottery D, I have a 1/3 chance of two blacks [1/3 chance of a 3/3 chance = total 1/3 chance of that combination and winning], a 1/3 chance of two whites [1/3 chance of 1/3 chance = 1/9 chance], and a 1/3 chance of a white and a black [1/3 chance of a 2/3 chance = 2/9 chance of winning]. That's also a total 2/3 chance of winning for Lottery D.

SUMMARY

Lottery A: 1/3 chance

Lottery B: 1/3 chance

Lottery C: 2/3 chance

Lottery D: 2/3 chance

If they're all the same as I have concluded, I'd go with A and D for my own psychological comfort, because I know there is a red ball in there and definitely have a chance in both lotteries A and D regardless of the white/black combination.

I probably completely screwed this up, so I'll be interested to see the explanation.

I don't think we are supposed to assume that the game is run by a person with any motivation.

Not true. The odds of B are either 0, 1/3, or 2/3. In the absence of further information, you do not know which.

Now, it is true that ITAOFI the average of these odds is 1/3, but there could exist further info that would skew my perception, e.g.:

- If I know that the house is a stingy jerk, then I might reason that the odds of winning B are zero.

- If I know that he's fair and disinterested, then I might reason that the odds of winning B are 1/3.

- If I know that he's a generous pal, then I might reason that the odds of winning B are 2/3.

(1) When the first choice of lotteries is offered, we don't know yet that there will be a second one. So we might say that the "house" could have stacked the pot with black balls -- better to take lottery (A). But when the second choice of lotteries is offered, we realize that if the house had stacked the pot with black balls, we would have a sure win now. So if the house were stacking the pot for the least payoff for the two choices, it probably should stack the pot with *white* balls, and hope we take lottery (A) on the first option!

(2) I haven't seen anyone mention that on the first draw, you *get to see a ball*. If the ball you draw is black, then lottery (D) will have odds of either 2/3 or 3/3, which is guaranteed to be as good as or better than (C). If you draw a white ball on the first draw, then lottery (D) will have odds of 1/3 or 2/3 -- so (C) is sure to be at least as good, if not better. (Drawing a red ball on the first draw doesn't help you at all in deciding what to do on the next lottery.)
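This commenter's update-on-the-first-draw argument can be made concrete. The sketch below goes beyond anything stated in the problem: it assumes the two unknown balls are independently white or black with probability 1/2, that the shown ball is drawn uniformly at random, and that the lottery itself is settled by a fresh independent draw. Conditioning is done by brute-force enumeration:

```python
from fractions import Fraction

# Assumed-equally-likely colourings of the two unknown balls.
comps = ["WW", "WB", "BW", "BB"]

def cond_p_D(seen):
    """P(win lottery D | the shown ball has colour `seen`), by enumeration.
    Winning D means a fresh draw comes up non-white."""
    num = den = Fraction(0)
    for c in comps:
        balls = "R" + c
        for shown in balls:  # each of the three balls equally likely to be shown
            if shown != seen:
                continue
            weight = Fraction(1, 4) * Fraction(1, 3)  # P(colouring) * P(this ball shown)
            den += weight
            num += weight * Fraction(sum(b != "W" for b in balls), 3)
    return num / den

print(cond_p_D("B"))  # 5/6 -- better than C's constant 2/3, so prefer D
print(cond_p_D("W"))  # 1/2 -- worse than 2/3, so prefer C
print(cond_p_D("R"))  # 2/3 -- a red draw tells you nothing
```

Under those assumptions the comment's conclusion holds: seeing black makes D the better bet, seeing white makes C the better bet, and seeing red leaves you indifferent.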

I would choose B and D because I am a masochist, but I don't want to be accused of being a commie, so I pick B and C.

The rational person thus hears the question and -assumes- that the incomplete information is of the type that would screw him... i.e. the lotto operator can claim that he never SAID the white and black balls were present in equal proportions, or indeed that both would be present at all, so he was playing fair even though he misrepresented your chances of winning. Even if this isn't the case, if we weren't talking about a pure logic problem, it would be imperative to assume that some sort of misrepresentation was in fact occurring.

That's one reason that legal language is the way it is - if something is spelled out exactly, in terms both sides agree on, nobody comes back later and says "oh, we have all your money and we never actually said we'd do anything for you, now I'm going on vacation."

So thanks, I'll stick with my cynicism as such things are concerned. Safer!

You cannot derive knowledge from a lack of knowledge.

The actual odds of the balls are as irrelevant as the payoff to the question of paradox.

Proof:

A. P(R) = 1/3

B. P(W) = ?

C. P(~R) = 2/3

D. P(~W) = ?

Assume you think A gives you better odds than B.

Then you must think 1/3 > P(W).

Therefore P(~W) > 2/3.

Therefore P(~W) > P(~R).

Therefore D gives better odds than C.

Reverse the signs for the rest of the proof.
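The proof above is easy to verify numerically: whatever probability p you assign to drawing white, the complement rule P(~W) = 1 - p forces the matching preference in the second choice. A sketch over a grid of p values:

```python
from fractions import Fraction

# For any subjective p = P(white): preferring A over B means 1/3 > p,
# which forces 1 - p > 2/3, i.e. preferring D over C (and vice versa).
for k in range(101):
    p = Fraction(k, 100)
    if Fraction(1, 3) > p:
        assert 1 - p > Fraction(2, 3)   # A > B  implies  D > C
    if p > Fraction(1, 3):
        assert 1 - p < Fraction(2, 3)   # B > A  implies  C > D
print("consistent for every p on the grid")
```

So the A-and-C (or B-and-D) preference pattern cannot come from any single probability assignment, which is exactly the point of the proof.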

I don't think there's any logical paradox in the problem itself. The paradox (if you want to call it that) only emerges if there are people who choose inconsistently — e.g., choosing A and C. Even that's not really inconsistent if they assumed that P(W) = 1/3, which is neither supported nor contradicted by the instructions.

Instead, the paradox should be presented in some way that fixes the prior distributions. For example, lottery options 1 and 2 are each presented to the chooser, and a coin flip decides which option is actually offered.

Option A always gives you a 1:3 chance to win.

Option B could give you a 0:3 (no white), 1:3 (one white), or 2:3 (two white) chance of winning. Assuming, ex ante, that there is an equal opportunity for 0, 1 or 2 white balls, drawing one ball will give you a chance to win 3 of 9 times, or 1/3. I don't think there's a difference between A or B, so the coin flip is as good as any.

Option C always gives you a 2:3 chance to win.

Option D gives you a 1:3 chance if there are 2 white balls, a 2:3 chance if there is one white ball, and a 3:3 chance if there is no white ball present. Again, assuming ex ante there is an equal chance of all three outcomes, that's a 6:9 chance to win, which is equivalent to 2:3. You'd prefer the second set of lotteries to the first, on just a pure chance of winning basis, but I don't see a way to discern between the two.

I don't think the motivations of the house are relevant unless they're outright cheating (e.g. looking and drawing what they want to, or depositing balls after you make your selection). Anything short of that makes their motivations irrelevant because they can't change the probabilities once the balls are in the hat, as it were.

I'd guess people would be expected to choose A & C because of the concept of a sure thing (always a chance, and the same chance, to win), but I don't think it matters. Now I'll go read Wikipedia to see what they have to say.

But you can't assume anything. There well could be a difference between A or B: you just don't know if there is or not, which is NOT the same thing as knowing that there is no difference. At best, the information you have suggests that you have no basis to judge the probability at all, not that you can judge it to be as likely one thing as the other.

As to being unable to judge the probability, I think I already did that; you can agree or not as you like. As a matter of pure probability, your chance of winning with A is 1 out of 3, and your chance of winning with B is an equal probability of 0, 1 and 2 out of three, which comes up to... 1 out of 3 (again, excepting cheating, which is not part of the problem as presented).

For the second lottery I again base my bet on the red ball and take C; this gives me a 2 in 3 chance of winning. This exactly matches what would be expected of a 50-50 split between white and black and again minimizes my risk of failure.

50/50 chance per ball independently determined gives 4 equal possibilities for the three balls.

1) Red White White

2) Red White Black

3) Red Black White

4) Red Black Black

Probability of drawing a white ball, for a 50-50 independent determination, is thus (0.25)(2/3) + (0.25)(1/3) + (0.25)(1/3) + (0.25)(0) = 1/3.

So by focusing on the red ball I maintain the same odds as a 50-50 independent split, and take the house motives (which I always assume to be against me) out of play.

No, I'm saying you can't assume there isn't. That's not the same thing.

"Statistically, there isn't, unless the other side is cheating, which is not part of the problem. Arguing with the question doesn't get you anywhere."

I'm not arguing with the question. The question is as stated. And actually, the point of the question holds completely regardless of the probability of anything anyway.

"As to being unable to judge the probability, I think I already did that, you can agree or not as you like."

Given that I think you are wrong, I will disagree, ok? :)

"As a matter of pure probability, your chance of winning with a is 1 out of 3, and your chance of winning with be is an equal probability of 0, 1 and 2 out of three, which comes up to... 1 out of 3 (again, excepting cheating, which is not part of the problem as presented)."

I don't know why you think cheating would need to be involved. The only restriction on the people setting this up is that the vat you draw the balls from has the same constitution on the first choice as it does on the second. Other than that, they can pick any combination of balls they want within the rules.

And if it is a "matter of pure probability" then let's see some pure math to prove it. How do you get from "there are three possibilities, and we don't know the probability of any of them... therefore the probability is the average of them all"? How are you calculating something from nothing?

The answer is that you aren't. In fact, the colors of the balls aren't subject to probability at all: they are chosen beforehand. Their probability at that point is equal to 1 (certainty). The only probability comes from your chances of drawing a ball GIVEN that situation. That you don't fully know what the situation is does NOT transform the actual odds in any way. The fact that there are three possibilities likewise is not a calculation of odds at all.

What does "against you" mean though? Since the balls are the same in both choices, and the choices opposite, they can't be against you both times in the same way in both games. If there is some meaning to "against you" then you'd be able to use it to increase your odds vs. when they are not against you. It's not clear to me how they could be against you in a general sense.

Now, it is true that by selecting A and D I am selecting two options with opposing risk profiles, but that doesn't seem to be a paradox. It simply states that when the average chance of winning is small people want to avoid the risk of not being in the game at all. But when winning is likely people are willing to risk a lower (but still high) chance to win as a tradeoff for a sure thing. Now, maybe this paragraph is equivalent to a more rigorous treatment of Ellsberg's paradox than the Wikipedia writeup, but paradox still seems too strong a term.

The problem does not say that we get to "play" the first lottery before being offered the second, so I think we must assume that we do not. Nor does the problem say that we are starting over with three new balls before the second lottery, so I think it's fair to assume that we are using the same three balls. Since no lotteries have yet been played, I assume that the first time a ball is drawn it will determine the result of both lotteries.

Under those assumptions, not knowing the second lottery would be offered, I would choose A at the first offer, for the reasons stated early on. Sasha may not even have any white balls, so I would take the sure 1/3 chance.

When the second lottery is offered, I can choose C, which guarantees one win, because I will win A if red is chosen and C if red is not chosen, but I am guaranteed only one win - that is, I cannot win both A and C; the ball will be either red or not red. If I choose D, I could potentially win twice. If a red ball is chosen, I would win both A and D. If I was right about Sasha not having any white balls, then I would be guaranteed to win at least D and still have a 1/3 chance of also winning A. Of course, Sasha could have no black balls, in which case I would only have a 1/3 chance of winning, but my return would be double because I would win both A and D. Or it could be somewhere in-between if there is one black and one white ball. Under the stated terms of the problem, there is no way to tell what my odds would be.
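Under this commenter's assumption that a single draw settles both lotteries, the guaranteed-one-prize claim is easy to tabulate:

```python
# One draw settles both lotteries (the commenter's assumption).
# Count prizes won by each pair of choices for each possible drawn colour.
def wins(strategy, colour):
    payoffs = {
        "A": colour == "R",   # A pays on red
        "C": colour != "R",   # C pays on non-red
        "D": colour != "W",   # D pays on non-white
    }
    return sum(payoffs[s] for s in strategy)

for colour in "RWB":
    print(colour, wins("AC", colour), wins("AD", colour))
```

The table confirms the reasoning: A-and-C pays exactly one prize whatever is drawn, while A-and-D pays two on red, one on black, and nothing on white.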

Ultimately, I like prizes, and I don't like uncertainty, so I would be satisfied with one guaranteed prize rather than be greedy and take the chance of walking away empty handed.

Based on the information contained in the problem statement and available to me, I have no reason to expect that black balls predominate over white balls or vice versa. (So, given the condition of my ignorance, the probability that a particular non-red ball is black is 50%. This conditional probability is the one I, the decision maker, use. Players with "insider information" should use that information. It doesn't affect my decision making.)

As everyone has pointed out, if the probabilities of black and white are equal, the expected value of the payout for case A equals that for case B. B would have a larger standard deviation, and I prefer lower standard deviations. Similar reasoning applies to picking C over D.

Isn't disliking large standard deviations on the rate of return actually some sort of principle in portfolio theory?

1. This problem has nothing to do with the Ellsberg "paradox."

2. Ellsberg was an idiot. It makes complete sense that most people would choose a 1/3 chance over an unknown 0-2/3 chance, and that those same people would choose a certain 2/3 chance over an unknown 1/3-1 chance. He assumes that people will determine in advance what they believe the odds to be, rather than (as all of the comments above show) choosing based on the knowledge that one cannot know what the odds are.

No, it just isn't. You don't know the probability, end of story. Having two indistinguishable but opposing options tells you nothing useful about them other than if you know the probability of one, you can also know the probability of the other. You don't know the probability of either option. I think more of what you are getting at is that if you flip a coin to determine your guess of black or white, then you have a 50% chance of getting it correct. But that result is trivial: it doesn't constitute any new knowledge about the situation and so cannot help you make a choice.

I mostly agree with DJR as far as I can tell. The best devil's advocate case I can make against him is this: if you don't know what the odds are for the other balls, then why the strong preference for the red choices in both cases? If you have no solid reason to pick red over white other than that it's a less ambiguous unknown probability, why not pick white both times?

That is, if the balls are red, white, and black, then the white option is just as good as the red, and people should pick it as often as they pick red. If either the white or the black predominate, however, then whatever you stand to lose in one choice you stand to win back in the other by picking B and D. So why don't an equal number of people pick B and D, since if they are truly indifferent it doesn't really matter? The chances may be unknown, but you DO know that the balls are the same in both rounds, so you can play one choice against the other, and if you prefer the less ambiguous option, then this can't be explained by utility or risk aversion alone.

I'm not sure that convinces me though. I don't know if I followed my own logic, and I'm not sure the math on the payoffs adds up. :)

A = 1/3

B = somewhere between 0 and 2/3

C = 2/3

D = somewhere between 1/3 and 1

Can't further define B and D without knowing the white/black proportion.

I prefer A and C. I see from the wikipedia that this is different from most people.

Why? Because the expected payout is the same in both cases. No matter what the odds are, both A/C and B/D represent the situation in which the probabilities in the two cases are just the reverse of each other. This is easier to see if we expand the question from 3 balls to the 90 balls of wikipedia, with 30 red balls, and an unknown number of black and white in the remaining 60. If we choose the white options, then whatever the probability is in choice one is reversed in choice two.
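The reversal claim can be checked for the 90-ball Wikipedia version: whatever the unknown white count w is, the A/C probabilities and the B/D probabilities each sum to 1, so the two pairs mirror each other exactly. A sketch:

```python
from fractions import Fraction

# Wikipedia's version: 30 red balls; w white and (60 - w) black, w unknown.
for w in range(61):
    p_A = Fraction(30, 90)          # red
    p_B = Fraction(w, 90)           # white
    p_C = Fraction(60, 90)          # not red
    p_D = Fraction(90 - w, 90)      # not white
    assert p_A + p_C == 1 == p_B + p_D
print("the pairs are exact complements for every w from 0 to 60")
```

So whatever probability the white option has in choice one, it has the complementary probability in choice two, which is what makes the A/C and B/D pairs equivalent in expectation.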

I THINK I've convinced myself that that is correct now. :)

All: I've posted a follow-up post.

This is no different from Sasha telling you that the two balls are each either white or black. The probability of one or the other is simply unknown to you. The stranger may have been the world's foremost planetary physicist, or he could have been insane, lying, or stupid. You simply have no basis upon which to guess, and so even from your perspective, you cannot simply call the odds 50%.