Looks like my Ellsberg paradox post below was pretty popular — about two dozen comments just in the first hour, between 11 p.m. and midnight (Eastern)! I'll repeat the problem below, then give my explanation. If you haven't done so before, you may want to think about what you would choose before reading the explanation.

There are three balls. One is red. Each of the others is either white or black. Now I give you a choice between two lotteries. Lottery A: You win a prize if we draw a red ball. Lottery B: You win a prize if we draw a white ball.

Which lottery do you choose? (Mini-update: I allow you to be indifferent, if you want.)

Now I give you another choice between two lotteries. Lottery C: You win a prize if we draw a ball that's not red. Lottery D: You win a prize if we draw a ball that's not white.

Now which lottery do you choose?

UPDATE: Just in case you're confused about this — and apparently some people were — we're talking about the SAME THREE BALLS each time. I haven't changed the balls. Nor have I drawn any balls. We haven't conducted any lotteries in the time it took you to read this post. All there is is a single box of balls, and me asking you your preferences over lotteries. (END OF UPDATE)

UPDATE 2: You ask one of these questions, and you find out all sorts of aspects that you weren't expecting people to find important. This will affect how I phrase the problem next time, but for now, let me just clear up one extraneous aspect. I'm not running the lottery. I don't own the balls. I'm not offering a prize. Someone else, who isn't connected with me, is doing all that. I'm just asking questions about which lotteries you prefer. Also, as I mentioned in the first update, we don't draw any balls between your first choice and your second choice. In fact, we're never going to draw any balls. Why? I'm not running the lottery! I'm just asking questions! If you want to draw balls, take it up with the guy actually running the lottery, who is not me.

There are two points here, one theoretical and another practical. I'll give you the theoretical point now, and save the practical one for a later post.

If you know about expected utility theory, you can skip this paragraph and the next four. Expected utility theory assumes that (to simplify) when you're faced with lotteries over, say, amounts of money, and each amount has some probability attached to it, and you have a utility-of-money function U, you choose which lottery you prefer based on the lottery's "expected utility," which is a kind of weighted average of the utilities of the different possible outcomes.

So if I offer you $1 if a fair coin comes up heads, then the expected utility is 0.5 U($1) + 0.5 U(0). (When I say U(0), that means the utility of however much money you already have; when I say U($1), that means the utility of that amount of money plus $1.)

Usually we assume people are risk averse, meaning they prefer the certainty of 50 cents to that gamble. The sure thing is worth U($0.50), so you would express risk aversion by saying that U($0.50) > 0.5 U($1) + 0.5 U(0). A risk-neutral person doesn't care, as long as the lotteries have equal expected value, so he's got a different utility function U such that U($0.50) = 0.5 U($1) + 0.5 U(0).
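If it helps to see that inequality with actual numbers, here is a toy check. The square-root utility and the zero baseline wealth are my choices, purely for illustration; any concave function would do:

```python
import math

# A hypothetical concave (risk-averse) utility over wealth;
# baseline wealth of $0 is assumed just to keep the numbers simple.
U = math.sqrt

certain = U(0.50)                     # take 50 cents for sure
gamble = 0.5 * U(1.0) + 0.5 * U(0.0)  # fair-coin lottery over $1

print(certain > gamble)  # True: concavity implies risk aversion
```

With a linear U instead (risk neutrality), the two sides come out exactly equal.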

But whether you've got risk aversion, risk neutrality, or something else, expected utility theory always assumes that *only two things matter*: (1) The utilities of the outcomes and (2) the probabilities. No matter how complicated a set of lotteries I give you, you always reduce it to the ultimate probabilities over the outcomes.

For instance, consider the set of nested lotteries: Lottery A = [Heads you lose, Tails you get to participate in Lottery B]; Lottery B = [Heads you lose, Tails you win $100]. Expected utility theory says you crunch the numbers and figure out that this is identical to a single lottery where you win $100 with probability 0.25. Everything else is irrelevant.
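That number-crunching can be sketched in a few lines. The dictionary-of-outcomes representation of a lottery is my own choice, just for illustration:

```python
from fractions import Fraction

P_TAILS = Fraction(1, 2)  # fair coin, as in the example

# Lottery B: heads you lose (win $0), tails you win $100.
lottery_b = {0: 1 - P_TAILS, 100: P_TAILS}

# Lottery A: heads you lose, tails you get to play Lottery B.
# Reduce the compound lottery to a single distribution over final prizes.
lottery_a = {0: 1 - P_TAILS}
for prize, p in lottery_b.items():
    lottery_a[prize] = lottery_a.get(prize, Fraction(0)) + P_TAILS * p

print(lottery_a)  # {0: Fraction(3, 4), 100: Fraction(1, 4)}
```

The nesting disappears: all that's left is "win $100 with probability 1/4," which is the only thing expected utility theory looks at.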

Now consider the choice of Lottery A vs. Lottery B. Lottery A is the prize with probability 1/3. Lottery B is the prize with a probability that could be 0, 1/3, or 2/3. Whatever the true probability is (you can make assumptions where the ultimate probability is 1/3, for instance if each ball is black or white with a 50-50 probability — but it doesn't need to be that), ultimately you'll make some choice. Suppose it's A. Under expected utility theory, that can only be because you think red has a higher probability. If you think the probabilities are equal, then under expected utility theory, you *must* be indifferent between the two lotteries. If you choose B, under expected utility theory that can only be because you think white has a higher probability.

Now go on to Lottery C vs. Lottery D. If you chose A the first time around, that means you think P(R) > P(W). But then you have to have P(not R) < P(not W). That's just mathematically true because P(not R) = 1 - P(R). So you can't prefer C if you preferred A.

Nonetheless, most people chose both A and C. Mostly, they did so because the probability of R is a *known* 1/3, and the probability of not-R is a *known* 2/3, while the probability of W and not-W are kind of unknown. Note: This is *not* risk aversion, because the probabilities we're talking about aren't the probabilities of the ultimate prize. Rather, we're talking about the probabilities of what the probabilities are. This is called *ambiguity aversion*. Ambiguity aversion plays no role in expected utility theory, where only the ultimate probabilities (and the utility of the outcomes, which I've held constant here) count. Therefore, in this setup, *most people make choices inconsistent with expected utility theory*.

Is this good? Bad? Irrelevant? Does it illustrate the crooked timber of humanity? The uselessness of expected utility theory? Stay tuned.

Also, you didn't give an "indifferent" option.

Plus, I figured "indifferent" is always an option when I ask "which do you choose (and why)?", but maybe people have a tendency to read more into these things than I ask.

Finally, I didn't redescribe the problem between the lottery choices. Of course the balls are the same!

Nonetheless, if people are confused, people are confused. Who am I to judge. I'll make it clearer next time.

By the way, although 50/50 is not one of the given assumptions, isn't it the only reasonable assumption in the absence of additional information? If I tell you a coin can come up either heads or tails and ask you to choose one, shouldn't you be indifferent even if I don't tell you the odds are 50/50?

I wanted to follow up on the relationship between probability and information from the prior thread.

Let's consider a situation where you are picking one ball out of a bag with a number of balls, all either red or black. Let's assume you know that there are the same number of red balls as there are black balls. Am I right in assuming that you would say that in that case, there is a 50% chance of getting a red ball when you pick one ball out? Of course, in fact, you will pick one particular ball, and it will have one fixed color, so in truth, there is either a 0% chance you will pick a red ball or a 100% chance you will pick a red ball. Moreover, whether it is 0% or 100% is likely already fixed (depending upon your views of physics, etc.) even if unknowable well before you stick your arm in the bag. While we may be considering the important data here to be the proportion of red balls in the bag, the actual result is fixed not by that limited data, but instead by the state of the world.

I wonder whether we are confusing the "probability" tied to one particular, actual draw with the expected measure of repeated instances of drawing? The example I set forth suggests to me that where we talk of what we know, we are talking statistics, and where we talk of what we do not know, we are talking probability. When we talk probability, words like "at random" mean "everything else being equal", and lionizing certain characteristics (e.g., the proportion of red balls) over others (e.g., the physical state of the world such that the draw of a particular black ball is truly certain) is anathema to probability.

Apparently the "paradox" consists in the empirical fact that people tend to prefer choices A and C. As I was indifferent in both cases, I saw no paradox.

The interesting thing about this sort of stuff is that even though people 'know the answer' and can work out that they should be indifferent, they STILL have preferences.

I bet if you took a bunch of people who did the math and decided they should be indifferent, and then FORCED them to choose between options, ideally with actual money on the line, the vast majority would pick A and C. Our reason is quite strong, but our instincts are pretty powerful too. When we face brainteasers we have a trained response to 'reason it out' but when we face real-life situations we often don't. So 'ambiguity aversion' often really will describe the actual real-life behaviour of real people.

My boss in an old job had a PhD in psych, and part of his thesis work was proving something like this. Human beings have underlying heuristics we use that are still there even when we know the 'right' answer. He surveyed people about the gender of children. Essentially he gave them a couple who'd had a boy, then a girl, then a boy. He asked people to say whether the couple's next child would be a boy or a girl. Everyone knows that it's random. But when he forced people to pick one or the other, like 90% of them chose "girl", because of the heuristic of pattern repetition.

I would strongly suggest a clarifying edit, unless the point of the post is to leave readers confused until they read your explanation.

Perhaps one way to model this is to have a second utility-of-uncertainty function, which is a function of the amount of unknown information in the situation which will affect the outcome.

In lottery "A", the colors of the non-red balls do not matter; there are three possible outcomes to evaluate

(red, non-red, non-red)

In lottery "B", the colors of the non-red balls *do* matter, and there are four times as many possible outcomes to evaluate; this makes it a (computationally) more expensive bet.

The greater the amount of uncertainty in a situation, the more likely it is that we will make a mistake evaluating it, and thus we have a bias towards simpler circumstances.

The only part of this that's interesting to me is that it would drive me nuts despite the fact that I can calculate (or in this case, mess up but have someone correct me on) the actual odds, and realize that they're still the same anyway. But considering I make all kinds of allowances for "things that would drive me crazy," including obviously less-than-useful things, like getting to class so early I have to wait for the other class to get out before I can get a seat (the fear of being late being great enough to make me irrationally early, wasting study time and so forth), I'm forced to conclude that a lot of irrational things threaten to make me nuts, and I'm basically okay with altering my behavior to accommodate that.

(the question is, are economists okay with a behavior theory that doesn't really model basic decision making in this situation, for which it seems on its face to be ideally suited...)

What I think most people confuse this with is guessing at random. If we don't know the probability of the ball in the bag being black, and it could be either black or white, we can still get to a "50% chance of getting it right" by flipping a coin to pick white or black. That coin flip will be correct 50% of the time no matter what the probability of the ball being black is. That result is, of course, completely trivial, and that's why it's important to distinguish "I have a 50/50 shot of blindly guessing the right answer" from "there is a 50/50 chance of it being this particular answer."
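A quick simulation makes the triviality visible: the coin-flip guess is right about half the time no matter what the true probability is. The function and trial count here are mine, purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def coin_flip_guess_accuracy(p_black, trials=100_000):
    """Guess the ball's color by a fair coin flip and return the fraction
    of correct guesses. Note that p_black never enters the guess itself."""
    correct = 0
    for _ in range(trials):
        ball = "black" if random.random() < p_black else "white"
        guess = "black" if random.random() < 0.5 else "white"
        correct += (ball == guess)
    return correct / trials

# Accuracy hovers around 0.5 whether p_black is 0.0, 0.3, or 0.9.
for p in (0.0, 0.3, 0.9):
    print(p, round(coin_flip_guess_accuracy(p), 2))
```

The 50% belongs to the guessing procedure, not to the ball.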

You often run into this issue with creationists making arguments about physical events in which they are unsure of the probability. They work this by creating binary options and then deciding that the probability of either one happening is 50/50. This, of course, is complete nonsense, especially in cases where the probability of certain events having happened is already known to be 1. :)

UA = 1/3

UB = (1/4)*(2/3) + (1/2)(1/3)+(1/4)(0) = 1/3

UC = 2/3

UD = (1/4)(1/3) + 1/2 (2/3) + 1/4 (2/3) = 7/12

Of course, if you apply Laplace's principle of insufficient reason differently and assume that WW, BW, and BB are equally likely, with 1/3 apiece, then you come up with

UD = 1/3(1/3) + 1/3(2/3) + 1/3(1) = 2/3

As my statistics prof once remarked, Laplace's principle of insufficient reason is insufficient.
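For what it's worth, both readings of Laplace can be sketched in a few lines. Under either prior (the per-ball 1/4-1/2-1/4 prior, or 1/3 per composition), UD comes out to 2/3, matching the correction posted further down the thread:

```python
from fractions import Fraction as F

# Prior over the number of white balls among the two unknown ones.
# Per-ball 50-50 reading: P(WW)=1/4, P(WB)=1/2, P(BB)=1/4.
per_ball = {2: F(1, 4), 1: F(1, 2), 0: F(1, 4)}
# Alternative reading: the three compositions equally likely.
per_comp = {2: F(1, 3), 1: F(1, 3), 0: F(1, 3)}

def p_win(prior, p_given_w):
    """Average the conditional win probability over the prior on w."""
    return sum(p * p_given_w(w) for w, p in prior.items())

UA = F(1, 3)                                 # red: known 1/3
UB = p_win(per_ball, lambda w: F(w, 3))      # white
UC = F(2, 3)                                 # not red: known 2/3
UD = p_win(per_ball, lambda w: F(3 - w, 3))  # not white

print(UA, UB, UC, UD)                         # 1/3 1/3 2/3 2/3
print(p_win(per_comp, lambda w: F(3 - w, 3))) # 2/3 under the other prior too
```

So on either application of the principle, A and B tie at 1/3 and C and D tie at 2/3.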

I vote for illustrating the uselessness of utility theory :P As I said before, aside from the things it ignores that are listed above, it also ignores the fact that not everyone can calculate that the odds are the same, heh.

Again, I think a really important insight here is that we can't know anything about the probability of black vs. white. However, the key is that that doesn't mean we can't know anything about the probable outcome of betting on black and then betting on white! We may not have enough information to judge the expected outcomes of the B and D options of the individual trials, but we DO have enough information to judge the expected outcomes of the patterns of trials when they are linked!

But I think that when you look at what is going on inside people's heads, it's the same thinking: Choice A has a certain outcome. Choice B has an uncertain outcome. In ambiguity aversion, it's just really, really uncertain. :-)

You could, if you wish, simply add another layer to your model, and turn it back into risk aversion. Simply assume that people form some space of possible probabilities in their heads, and then implicitly create some probability function on that space.

I'm not sure I understand. Wouldn't it follow also that "the issue in this case is that there is some process for determining [exactly which color ball will be drawn on this particular draw], but we don't know it"?

Regardless of the set up, I am going to draw a certain, fixed color ball on any one particular draw. Hence, as I wrote earlier, the "probability" in the most informed sense is either 0% or 100% of drawing a red ball, and in no way is the informed probability 50%. We can get to 50% only by restraining probability judgments to the world of incomplete information and acknowledging that, when talking probability, everything unspoken is considered irrelevant, no? (This last sentence is the key point, it seems to me.)

So, if we really have no information on which to judge the Creationist's argument other than the listing of possibilities, I think the Creationist is right to say that the one is as likely as any other. However, if we can start discussing the various possibilities in terms of "evidence", those probabilities change.

UD = (1/4)(1/3) + 1/2 (2/3) + 1/4 (2/3) = 7/12

should be

UD = (1/4)(1/3) + 1/2 (2/3) + 1/4 (3/3) = 2/3

If I didn't make a mistake, the standard deviation for A equals that for B, and the SD for C equals the SD for D! And the equality holds for all higher-order moments too. (Wow!)

So, assuming I didn't screw up the calculation, I can't pick based on expected value, standard deviation or higher order moments.
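That moment equality is easy to verify. The prize indicator is 0-or-1, so every raw moment is just the win probability, and (assuming the 1/4-1/2-1/4 per-ball prior, which is one natural reading of the setup) the unconditional win probabilities already agree:

```python
from fractions import Fraction as F

# Assumed prior over the number of white balls among the two unknowns.
prior = {2: F(1, 4), 1: F(1, 2), 0: F(1, 4)}

# Unconditional win probabilities: A pays on red, B pays on white.
pA = F(1, 3)
pB = sum(p * F(w, 3) for w, p in prior.items())

def moment(p_win, k):
    """k-th raw moment of a 0/1 win indicator: E[X^k] = P(win) for k >= 1."""
    return p_win * 1**k + (1 - p_win) * 0**k

print(pA == pB)                                                   # True
print(all(moment(pA, k) == moment(pB, k) for k in range(1, 20)))  # True
```

Equal win probabilities therefore force equal means, standard deviations, and all higher moments, which is exactly the "Wow" above.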

Given this, I look at something else: The possibility of cheating by my "opponent". If I pick A over B and C over D, the house can't cheat or favor. In real gambling, if you let one side cheat, those that do cheat are likely to cheat in their favor not mine.

That's why you have one person shuffle and another person cut the cards when playing poker.

(This might all change for tv game shows if I suspect Monte opens the first door and is biased in some way -- possibly because concealing the best prize is good for ratings.)

Indeed. That process could be completely random. Or it could be completely determined by something. We don't know.

"Regardless of the set up, I am going to draw a certain, fixed color ball on any one particular draw. Hence, as I wrote earlier, the "probability" in the most informed sense is either 0% or 100% of drawing a red ball and in no way is the informed probability 50%. We can get to 50% only by restraining probability judgments to the world of incomplete information and acknowledging that, when talking probability, everything unspoken is considered irrelevant, no?"

Right, but don't confuse the drawing of the ball with the selection of the balls! They are two different events which we can generally describe in two different ways. In the case of picking the balls, we might be saying that the picking is strict and deliberate (every time they get the chance to pick, they pick black), while the situation with drawing out a ball might be random in the sense that our procedure ends up with statistically random choices of balls over time.

"So, if we really have no information on which to judge the Creationist's argument other than the listing of possibilities, I think the Creationist is right to say that the one is as likely as any other. However, if we can start discussing the various possibilities in terms of "evidence", those probabilities change."

No, again, not in the way they are being used. In the creationist sense, the scheme is to break some sequence of events down into lots of binary branches with unknown probabilities and then keep multiplying the probabilities together as if they were all independent. The end result is some astronomically unlikely value for some particular physical event.

The problem is that, first of all, if we don't know that the probabilities are independent of each other, then we can't treat them as such: doing the math properly requires finding that out. Second of all, just because there are only two possibilities (one thing happens, or it doesn't) doesn't mean that the only options for the probability of the event are 0% and 100%, and that in the absence of evidence we split the difference. When we speak of the probability of a binary event, we generally mean statistically, not fully deterministically (if everything is deterministic then the probability of EVERYTHING is either 1 or 0, never 50%). It's either going to rain today or it isn't, but we have a 33% chance of rain. So the likelihood could be ANY percentage value on the scale. Why have your default assumption be set to the midway point as opposed to anywhere else on the scale? Why have it set anywhere in the absence of information? Where does the math come from?

The answer is that we don't know what sort of process we're dealing with, and we haven't averaged it out by testing it: so there is no math to do, neither statistically nor deterministically. And trying to pretend we have information, let alone a specific value that we are going to use in a straight-out probability equation to calculate a larger probability, is especially deceptive! We're taking a number we just made up, with an arbitrary value, and plugging it into an equation as if it were a real calculated probability. Garbage in, garbage out.

Then you'd be applying it incorrectly.

In which case you'd be applying your incorrect reading of Laplace incorrectly, too. Were WW, BW and BB equally likely, they'd have probability 1/3*2/3 = 2/9 apiece, not 1/3. There's still one red ball in there.

Lots of people still seem to be trying to calculate expected value for each one of the four choices individually. I maintain that that is simply impossible to do, but even if I'm wrong, that's irrelevant, because that's the wrong calculation in any case.

Surely that's handled by the statement that the same set of balls is to be used in all lotteries? Sure, the house could lie about that, but that applies equally to A and C.

One is "regret aversion." You don't want to pick B, say, and then discover that there was no white ball and you never had a chance. It is much more acceptable to pick A, and then shrug off a loss with the idea that you only had a 1/3 chance anyway. Even if it turns out that both other balls were white, you probably would not feel too bad, because you still had a 1/3 chance.

Say then that we could wager over whether or not it has a probability of 50%. If you complained that you had no information and flipped a coin to determine which side you'd bet, you'd have a 50% chance of winning the bet. But... we could also do a similar bet over whether or not it's 33% likely. Again, if you flipped a coin, you'd get a 50% chance of being correct. All the 50% is telling you is that you have a 50% chance of randomly guessing whether some percentage is the correct probability, which is just a trivial way of saying that there are two options. It isn't telling us anything about the probability of them happening, just that randomly guessing ANY given figure for the probability will be right half of the time.

This is a perfectly sound rationale that holds up if and only if there is only one round. But in the question asked, there are two rounds, and if you are averse to regret then you have the perfect solution to your problem!

"It isn't telling us anything about the probability of them happening, just that randomly guessing ANY given figure for the probability will be right half of the time."

That should read:

"It isn't telling us anything about the probability of them happening, just that if there is a bet on the correctness ANY given figure for the possible probability, you have a 50% chance of winning the bet if you pick randomly."

That the balls are the same in the two lotteries is irrelevant: the information available to the chooser in the second lottery is substantively different from that available to the chooser in the first lottery. The chooser in the first lottery does not know that the offeror will offer the second lottery; the chooser in the second lottery does. This new information can change the prior probabilities from the chooser's perspective.

In conclusion: no paradox.

Here is a very similar example. Suppose the offeror shows the chooser a box with either a gold bar or a lump of coal concealed inside. The offeror offers to sell the box to the chooser for $50,000. The chooser forms some probability distribution over the contents, say P(bar)=p and P(lump)=q.

Now offeror offers to sell the same box to the same chooser for a quarter. Based on this offer, chooser can, without paradox, update his belief in the probabilities that a bar or lump will be in the box. This updating might in turn lead to a different decision under utility theory, but that is not necessary for this particular example.

In conclusion, you have failed to consider my analysis in the former thread.

--Sky Masterson, Guys and Dolls

Choices B and D give you a higher probability of having water in your ear.

That's not part of the question, and it's in any case irrelevant. It doesn't matter what the probability of the balls turns out to be or what you suspect it to be based on whatever information you convince yourself you have.

People try this same move with the Monty Hall dilemma: trying to pretend that the situation is presented as unfolding with uncertainty about what will happen. That's not the case. Monty ALWAYS opens a door with a goat behind it, no matter what you pick. You ALWAYS get to express your preferences in both lotteries, and we don't even have to consider actually holding the lottery at all to see the paradox in most people's choices.

Excellent!! I was bored until I read that. There was a point after all!

Question 1: is X > Y

Question 2: is not-X > not-Y

If your answer to the two questions is the same, you have a problem.

I'm not clear on how this is analytically distinct from risk aversion. People would rather buy a ticket whose expected value is known than a ticket whose expected value varies within some set range.

Andrew Edwards wrote:

He surveyed people about the gender of children. Essentially he gave them a couple who'd had a boy, then a girl, then a boy. He asked people to say whether the couple's next child would be a boy or a girl. Everyone knows that it's random. But when he forced people to pick one or the other, like 90% of them chose "girl", because of the heuristic of pattern repetition.

As an aside, and of no relevance to the paradox: there are 108 male babies born for every 100 female babies. So that is the way to bet.

Now I am offered the second choice, having already selected A for my first choice. (I am assuming that you are waiting till the end to actually run the lotteries--if I know what I drew the first time, that throws everything off.) Well, my reasoning remains as above. It is possible that the house has stacked things in its favor, and less likely that it has stacked things against it. Moreover, I selected A the first time, and I suspect that the house knows that that was likely. Then the most logical thing for me to do is select C. Why? Because the house, knowing that most people would (logically) pick A the first time, might stack the lottery with white balls. While I didn't know there would be two lotteries, the house did, and I have to assume there is some probability that they acted accordingly. In that case, picking C means that I can't be cheated by the house.

The situation changes entirely if I answer it knowing that I will be participating in both lotteries. In that case, as has been pointed out, there are four possibilities: RWW, RWB, RBW, and RBB. In the first three options, picking A & C or B & D are equivalent: one lottery with a 1/3 chance of winning, and one with a 2/3 chance. However, in the fourth option (RBB), picking B & D guarantees that you win one lottery and lose the other. For this reason, with full knowledge of both lottery setups, I would choose B & D. (I'll deal with A & B or C & D later.) But, you tell me, the expected values are the same in all cases. True, but by picking B and D, I increase the chance that I will win one lottery, at the expense of reducing the chance that I will win two lotteries. I like to win. Winning twice is better than winning once, but, for me, losing twice would be a let-down. Therefore, I've chosen the options that reduce my chances of losing twice. Note that this reasoning applies whatever the house does--they can't cheat me. At worst, they can avoid the RBB mix, and my probabilities become the same as if I had picked A and C.
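This reasoning checks out numerically, under two assumptions of mine that the comment leaves open: the two draws are independent (with replacement) from the same three balls, and the prior on the number of white balls w is P(2)=1/4, P(1)=1/2, P(0)=1/4:

```python
from fractions import Fraction as F

# Assumed prior over the number of white balls among the two unknowns.
prior = {2: F(1, 4), 1: F(1, 2), 0: F(1, 4)}

# A & C: win chances (1/3 and 2/3) don't depend on the composition.
lose_both_AC = F(2, 3) * F(1, 3)
win_both_AC = F(1, 3) * F(2, 3)

# B & D: given w whites, B wins with probability w/3 and D with (3-w)/3.
lose_both_BD = sum(p * F(3 - w, 3) * F(w, 3) for w, p in prior.items())
win_both_BD = sum(p * F(w, 3) * F(3 - w, 3) for w, p in prior.items())

print(lose_both_AC, lose_both_BD)  # 2/9 1/6: B & D loses both less often
print(win_both_AC, win_both_BD)    # 2/9 1/6: ...and also wins both less often
```

So B & D trades probability mass away from both extremes (winning twice and losing twice) toward winning exactly once, exactly as the comment claims.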

Choosing A and B or C and D gives significantly different probabilities depending on the distribution of balls. I consider choosing either of those combinations to be riskier, since there is the possibility that the house might out-think me. I've avoided them.

It is a reasonable assumption that minimizing the amount of money paid out maximizes the utility of the offeror. Thus, choosing A is rational, since this is a zero-sum game in which any utility increase for the offeror is a utility decrease for me. (Once the offer is made, the $1 is a sunk cost.)

(We will neglect that the first lottery would provide information about the contents of the bag at this point, because that changes the analysis to be fundamentally less interesting.)

When the second lottery is offered, the information space changes. I can still reasonably assume that the offeror is attempting to maximize utility, but the question of how to do that has become more complex. I can further assume that the offeror is more experienced in understanding maneuvers in this complex information space, since he clearly has some motive for these strange offers. Since the offeror thus has a better understanding of the space, I should choose to reduce ambiguity to reduce the advantage of understanding possessed by the offeror.

Note that this is pretty directly analogous to suggestions that naive or uninvolved investors buy mutual funds rather than individual stocks. Such an investor is at an information disadvantage to the serious market player, so reducing ambiguity is entirely rational.

Thus the expected utility is higher the lower the standard deviation of the expected probability distribution.

$0.50 and $1 are so close to zero that my point won't make sense, but consider $500M and $1B. Is $1B really worth exactly twice $500M? Not to me. Getting $500M would mean I quit my job, move to paradise and never worry about money again. I'd have a life of luxurious leisure, working hard only when I feel like it. Getting another $500M would be cool, but not as big a transformation. I can't quit my job again, I can't move to paradise again, and I can't worry about money any less. I could get a bigger estate, but I can only be in one room at a time. I could get a faster private jet, but there's only so many hours in a day to save by going faster. So $1B is worth less than twice as much as $500M, not in financial terms (obviously) but in terms of how freaking awesome it would be. Choosing a guaranteed $500M over a 50% chance at $1B would be entirely rational, not some psychological quirk.
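A minimal sketch of that diminishing-marginal-utility point. The logarithmic utility and the $50,000 baseline wealth are my assumptions, purely for illustration:

```python
import math

w0 = 50_000  # assumed current wealth, just to anchor the log

def U(prize):
    """Hypothetical concave utility of total wealth: log(w0 + prize)."""
    return math.log(w0 + prize)

sure_500m = U(500_000_000)
gamble_1b = 0.5 * U(1_000_000_000) + 0.5 * U(0)

print(sure_500m > gamble_1b)  # True: the sure $500M beats the coin flip
```

With log utility the second $500M adds far less than the first, so taking the sure thing is exactly what expected utility theory prescribes here.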

I don't think the statement that the same balls are used automatically prevents the house from having a preferred cheating strategy. Say the house "cheats" by being biased in their favor:

Case 1: They pick RBB. My choices:

A: I have a 33% chance of winning. (One red/ 3)

B: I have a 0% percent chance of winning. (0 white/ 3)

C: I have 67% chance of winning. (2 not red /3)

D: I have a 100% chance of winning. (3 not white out of 3)

Case 2: They pick RWW

A: I have a 33% chance of winning. (One red/ 3)

B: I have a 67% chance of winning. (Two white/ 3)

C: I have a 67% chance of winning. (2 not red/ 3)

D: I have a 33% chance of winning. (1 not white/ 3)
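These case-by-case probabilities can be enumerated mechanically (a small sketch of mine; win probability = favorable balls out of 3):

```python
from fractions import Fraction as F

LOTTERY = {
    "A": lambda ball: ball == "R",  # win on red
    "B": lambda ball: ball == "W",  # win on white
    "C": lambda ball: ball != "R",  # win on not-red
    "D": lambda ball: ball != "W",  # win on not-white
}

def win_prob(balls, name):
    favorable = sum(1 for b in balls if LOTTERY[name](b))
    return F(favorable, len(balls))

# All compositions with one red ball; e.g. for RWW, D still wins with
# probability 1/3, since the red ball is not white.
for balls in ("RBB", "RWB", "RWW"):
    print(balls, [str(win_prob(balls, L)) for L in "ABCD"])
```

Note A and C come out 1/3 and 2/3 in every row, while B and D swing with the composition, which is what the cheating argument turns on.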

I'm not sure about the time sequence of this game and the sequence in which information is revealed-- but I'm assuming I don't know what ball I drew in game A/B. Anyway, if I'm told I get to play both games and the balls will be the same in both, and I'm required to pick my strategy for both before drawing, I pick A/C over A/D. (Assuming this is real gambling.)

Why do I pick this? Because to first order, my opponent's strategy for cheating is to favor RWW's over RBB's. RWW is better for them if I flip a coin to pick. RWW is better if I systematically pick A/D rather than A/C. RWW is no worse for them if I pick A/C.

The only way they'd be worse off always picking RWW is if my strategy is to always pick B/C. To pick that consistently, I'm gambling that they do pick RWW vs. RBB to "bias" in their favor should people pick based on coin flips or based on first order reasoning.

However, at this point, if I am certain they "cheat" and somehow know my reasoning, the whole game becomes more complicated because now they know I pick B/C because they know I think they cheat. So, they counter cheat!

Once everyone is thinking of the biases in picking white vs. black, the whole game gets too complicated to analyze in the time frame of gambling, and I'm better off just picking A/C.

(Of course, I'll likely have overlooked something. But, I am going for A/C right now!)

Jake (guest): Here's how it differs from risk aversion. I didn't give any probability distribution for white vs. black, but let me add some facts and come out and tell you that each ball is white or black with 50-50 probability. (I'll ignore the whole debate about whether it's reasonable to assume 50-50 even without this fact.)

If it's 50-50, then the probability of white is actually 1/3. A and B (same with C and D) actually have the same expected value, the same variance, the same everything. They're identical in all respects that are relevant to expected utility theory. (Expected utility theory, as I said, simply excludes everything that's not the probability and the utility of the outcome.)

Even then -- even when I've explicitly made the white vs. black probability 50-50 -- most people will still choose A and C. Risk aversion can't account for that, because A and B are both "prize with 1/3 probability" lotteries, and C and D are both "prize with 2/3 probability" lotteries. The reason they do so is that I've expressed the red probability as exactly 1/3, while I've expressed the white probability as being 0 with probability 1/4, 1/3 with probability 1/2, and 2/3 with probability 1/4. This is just a sort of framing effect.

But provided I'm not the person running the show, and I'm only asking you questions -- and not letting you actually play or get anything -- I'm not sure how that matters.

A paradox is something like the following. A lawyer agrees to train a student in law, to be paid his fee when the student wins his first case. The student goes through the program and then never practices. The lawyer, upset that he hasn't received his teaching fee, sues the student. He argues that if the student loses, the court will order him to pay the fee, and that if the student wins, he will have won his first case and should still pay the fee. The student argues that if he wins, he won't have to pay, and that if he loses, he still will not have won his first case, so he should not have to pay. What outcome?

Anyone care to show how anything in the Ellsberg Paradox actually is a paradox? I'm afraid that this paradox is to paradoxes what the social sciences are to science.

He's not in the play.

"They're identical in all respects that are relevant to expected utility theory. (Expected utility theory, as I said, simply excludes everything that's not the probability and the utility of the outcome.)"

It looks like expected utility theory is not very good at predicting how people decide when they reason that the probability and the utility of the outcome are the same.

As to the 50-50 probability issue: I thought we were supposed to consider the possibility that the probability of white vs. black wasn't 50-50 -- that we just didn't know how the probabilities might be set?

But yes -- in the case where we state the probability of black vs. white is 50-50, which I'll call a "fair game," I still pick A/C. This is not based on utility theory, which says A and B are equally good and C and D are equally good. It's not based on risk aversion either, because -- unless I'm mistaken -- in a "fair game" the standard deviation of the prizes and all higher moments are the same. So the risk is absolutely, positively the same.

So the question becomes: why do I, and evidently many others, pick A/C even after we figure out it has no advantage based on utility or risk aversion?

All I can think is that, deep down, I, and others, don't actually believe there can't be any "cheating" by the other side. Or maybe we like to behave in ways that make others know they will need to think a while before "cheating" us. In life that has utility, even if it means nothing in terms of portfolio theory!

And feel free to respond somewhere on my blog: the-parallax-view.blogspot.com

In that case, for the reasoning I posted above, I definitely choose B and D. The reason is that in three of the cases, I can expect identical results: a 1/3 lottery and a 2/3 lottery; and in the fourth case, I can be assured of winning one and losing one. Choosing A and C gives me four identical options, a 1/3 lottery and a 2/3 lottery. By choosing B and D, I minimize the probability that I will win nothing, while not changing my average expected winnings.

Choosing A and D or B and C would, I believe, also increase the possibility that I would win both or neither lottery, while not changing the expected winnings, assuming even probabilities on black and white balls.
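The pairing claims above can be enumerated exhaustively, since a single draw is one of only three colors. This sketch (mine, not from the comment) counts how many prizes each pair of lotteries can yield on one draw:

```python
# Lottery rules as stated in the post: which ball color wins each lottery.
wins = {"A": lambda b: b == "R",   # win on red
        "B": lambda b: b == "W",   # win on white
        "C": lambda b: b != "R",   # win on not-red
        "D": lambda b: b != "W"}   # win on not-white

# For each pair of lotteries, the set of possible prize counts per draw.
for pair in [("A", "C"), ("B", "D"), ("A", "D"), ("B", "C")]:
    outcomes = {wins[pair[0]](b) + wins[pair[1]](b) for b in "RWB"}
    print(pair, sorted(outcomes))
```

The enumeration suggests that A/C and B/D behave the same way: each guarantees exactly one prize on every draw, whatever the ball. Only the mixed pairs A/D and B/C can yield two prizes or none.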

So, I'm puzzled. If I am told that a ball in a box could be white or black, am I supposed to assign a 50% probability to its being white and 50% to its being black? Otherwise, you can't calculate the probabilities in this case. But I don't see what the warrant is for assigning those probabilities.

Without a warrant for assigning the probabilities there, I can see why one would go with what one does have knowledge of: that there is a 1/3 chance that the red ball will be chosen. The chance of getting a white ball is inscrutable, as is the chance of getting the black ball.

Out of curiosity, did anyone actually think anyone was running an actual lottery? I'm not convinced many failed to recognize this thought experiment was being used to discuss an abstract point!

If heads comes up you win, if tails comes up you lose. In lottery A, you use a dime. In lottery B, you use a quarter. Which do you prefer?

I'm gonna call this one the Duffy non-Paradox.

I bet some people have preferences about dimes or quarters when flipping a coin. Why they prefer one over the other will have nothing to do with their chance of winning. It may even be that a majority of people prefer to flip quarters. But you have to say more than this before you could convince me that it's interesting.

But, and this is important, while you can't know the probability of black and white, you DO know that the probabilities are exact complements of each other. In short, you can be VERY certain what the combined average probable payout of the two different B and D trials will be.
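The complementarity claim above holds for every possible urn, which a quick sketch (mine, assuming one red ball plus two balls that are each white or black) makes explicit:

```python
from fractions import Fraction

# For each possible number of white balls among the two unknown balls,
# lottery B (white) and lottery D (not-white) have complementary odds.
for n_white in (0, 1, 2):
    p_b = Fraction(n_white, 3)   # P(win lottery B) = whites / 3
    p_d = 1 - p_b                # P(win lottery D) = 1 - P(white)
    print(n_white, p_b, p_d, p_b + p_d)  # the sum is always 1
```

So whatever the unknown split, one B trial plus one D trial pays exactly one prize on average, even though the individual probabilities are, as the earlier commenter put it, inscrutable.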