Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Monday, February 25, 2013

Botanizing Probability

by Ian Pollock

A bearded stranger approaches you on the street and starts pulling objects out of a sack. The first three objects he pulls out are (1) a large tube of toothpaste, (2) a live jellyfish, (3) a small tube of toothpaste. Now, what is your level of credence that the next object pulled out of the bag is a tube of toothpaste? How much are you willing to bet on toothpaste, and at what odds?

This is how Adam Elga begins his excellent paper, “Subjective Probabilities Should Be Sharp,” and it seems a fitting introduction to an odd little problem in the philosophy of probability. [1] What are we to make of a situation like the above?

As Elga argues in his paper, we might start by trying to make a distinction between three relevant situations in which a probability judgment is requested of us. The evidence can be, in his words, sharp (for example, the probability that it will be 20°C or higher in Winnipeg on March 15, for which a farmer’s almanac or weather forecast should be just fine), or sparse but with a clear upshot (for example, the probability that the next person in the UK to be named Suzanne will be born at an odd hour of the day — we have no idea when this birth will take place, but 50% is the way to bet).

But sometimes, as in the above example of the jellyfish/toothpaste draw, evidence is sparse and unspecific. Assigning a probability to toothpaste would require theorizing about the stranger’s motives as well as the bag’s contents, but both are utterly mysterious to us. There is a temptation here to refuse to make a probability judgment, or to say that the probability is “vague” or “in some interval.” Elga’s paper goes on to argue that this last proposal is not really acceptable: it ends up implying that you may rationally reject a sequence of bets that is guaranteed to make you money no matter what the outcome (a Dutch book in your favor).

This is only the most extreme example of the strange effects of “vagueness” on probability judgments. In a classic paper, Daniel Ellsberg goes over some other situations, such as the following:
Urn A contains exactly 50 red balls and 50 black balls.
Urn B contains 100 red or black balls, but you don’t know the relative quantities. It might be 50-50, 0 red and 100 black, 100 red and 0 black, or anything in between.
You are offered two tickets. Ticket A pays $100 if a red ball is drawn from Urn A. Ticket B pays $100 if a red ball is drawn from Urn B. Which would you be willing to pay the most for?
This is one instantiation of the Ellsberg paradox. A moment’s thought should show that your probability of drawing red should be 50% (1:1 odds) in both cases. But there is an intuitive tendency to prefer the wager in which the “dynamics” of the problem are known (Ticket A).
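
The 50% claim for both urns can be checked numerically. The following is a minimal sketch; note that it has to *assume* some model for Urn B (here, a uniform prior over the 101 possible compositions), an assumption the problem itself does not supply:

```python
import random

def draw_urn_a():
    """Urn A: exactly 50 red and 50 black balls; returns True for red."""
    return random.random() < 0.5

def draw_urn_b():
    """Urn B: unknown composition. Here we ASSUME a uniform prior over
    the 101 possible red counts (0..100), then draw one ball."""
    reds = random.randint(0, 100)
    return random.random() < reds / 100

trials = 200_000
pa = sum(draw_urn_a() for _ in range(trials)) / trials
pb = sum(draw_urn_b() for _ in range(trials)) / trials
print(f"P(red | Urn A) ~ {pa:.3f}")  # near 0.5
print(f"P(red | Urn B) ~ {pb:.3f}")  # also near 0.5
```

Under that symmetric prior, the marginal probability of red is 0.5 for both urns, which is why indifference between the tickets is defensible.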

But wait! Suppose we keep everything the same, but Tickets A and B pay out if a black ball is drawn. Symmetrically, Ticket A would still be preferred to Ticket B, since Ticket A is less “vague.” But then your inferred probability for “red from Urn B” must be less than “red from Urn A,” while at the same time, “black from Urn B” is less than “black from Urn A.” Since the probabilities from Urn A must sum to 100%, this means your probabilities from Urn B are summing to less than 100%. This earns you a special place in the 3rd circle of Bayes Hell.

So this preference for “non-vague” bets leads to absurdities if we attempt to interpret your betting behavior as probability. This is a problem, because many philosophers, most notably Frank Ramsey, have wanted to enshrine “whatever determines betting preferences” as a sort of operational definition of probability.

My question is fairly broad: how can we make sense of what is going on in cases like these?

First, I will declare an interest: I do not share the intuition that prefers the known urn to the unknown one — I am completely indifferent between the two bets. That being said, I have a few ideas about why some people might prefer the bet in which the urn contents are known. Ranked in order of plausibility (although they are not mutually exclusive), they are:

1. You would feel stupid if you chose Ticket B (the unknown urn) and it turned out that there were only black balls in Urn B. You have a strong preference not to feel stupid, so you’re willing to pay more for Urn A, which is guaranteed to be a “fair” urn. [2]
2. You have been conditioned to think of a probability as a property of a situation, rather than a property of an epistemic state. (This is encouraged by our conventions of language, as in “the probability of rain tomorrow is 20%,” which uses the definite article “the” and thus implies a single uniquely correct value of probability, independent of what anybody knows about it. I prefer to phrase these things as “I give odds of 4:1 against rain tomorrow.”) For this reason, you feel that you do not know the probabilities for Urn B, and so you do not wish to bet on it.
3. You are wary of being tricked by whoever is holding the draw into betting on a lame horse. An urn with unknown quantities of red and black seems like a potential trick.

If I am right, then in my opinion only one of these objections is defensible (number 3). But they do seem to me to do a half-decent job of explaining away this intuition.

But what about the toothpaste/jellyfish draw? Here, I do share the feeling that says there may be no uniquely correct way to proceed. However, I think I may be able to demystify where that intuition is coming from, and what we should make of it. Give it the old college try, anyway.

For me, the interesting thing about the toothpaste/jellyfish draw is that my thought process when I consider it is very unstable. My stream of consciousness looks something like this:

“Okay Ian, two toothpaste tubes and one jellyfish. What do we expect next?”
“I don’t know. How long is a piece of string?”
“What about Laplace’s rule of succession? (s+1)/(n+2). Defining success as toothpaste and non-success as non-toothpaste, we get (2+1)/(3+2)=3/5 probability of toothpaste.”
“Yeah, I’m not sure that that is even applic - SQUIRREL!”
“Pay attention! How much would you be willing to pay for a ticket that paid out $10 if toothpaste was drawn?”
“I dunno, maybe I’d give $2.”
“Okay, so that implies your odds are 4:1 against toothpaste.”
“I think that reflects the triumph of curiosity over thrift, more than it does any real probability judgment. I would not pay $200 for a $1000 ticket. Or maybe I would.”
“Look, just answer this: how likely is toothpaste? You can see that 2 out of 3 things pulled out of the bag have been toothpaste. That is evidence that toothpaste is common in the bag.”
“If you say so. Do we even know that this is a random draw? Maybe the guy draws whatever he wants to, and that depends on how I bet. Why is he even performing this draw? He’s probably trying to trick me.”
“Trick you into betting for toothpaste, or against toothpaste? By the way, what’s wrong with the Laplace’s rule approach, again?”
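
Laplace’s rule of succession, invoked in the dialogue above, is simple enough to state as a one-line function. It is a sketch that assumes a uniform prior over the unknown “toothpaste rate” of the bag, which is exactly the assumption the inner skeptic is questioning:

```python
def laplace_rule(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2),
    derived from a uniform prior over the unknown success rate."""
    return (successes + 1) / (trials + 2)

# Two toothpaste tubes out of three draws so far:
print(laplace_rule(2, 3))  # 0.6, i.e. 3/5
# With no draws at all, the rule falls back to the ignorance value:
print(laplace_rule(0, 0))  # 0.5
```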

Clearly, if one’s probability judgments are jumping all over the place from second to second and minute to minute, the smart thing to do is to refuse to either quote them or act upon them until they settle into a stable range (if they do so at all). It’s not that our probabilities are “vague” or “unsharp” in this situation. It’s that we are unable to assign them at all due to our severe epistemic limitations. “Vague” describes the phenomenology of such a situation (we feel a sense of confusion about how to proceed), rather than the actual answer one should give.

I think I can get even more specific on this. Consider that there are two types of uncertainty we encounter in our lives: empirical uncertainty and logical uncertainty. Empirical uncertainty is not knowing who won the Super Bowl. Logical uncertainty is not knowing the cube root of 74,088.

The important thing to notice about logical uncertainty is that it is relative to our ability to draw deductions from our knowledge in a timely way, rather than relative to our knowledge itself. The most brilliant mind ever to have lived can sit in an armchair all day and not be able to tell you who won the Super Bowl. But they ought to be able to tell you the cube root of 74,088, given enough time. In fact, I should be able to as well. The cube root of 74,088 is *implied knowledge*, like most of mathematics and some of philosophy. In a sense, you already know it, because you know other things which logically entail it.

What’s the probability that the cube root of 74,088 is greater than 35? This is a strange question precisely because there is no empirical uncertainty about the answer at all. But probabilistic reasoning only works well with empirical uncertainty, not logical uncertainty. In fact, correct use of Bayes’ rule in a sense assumes “logical omniscience,” wherein you are perfectly able to comprehend the logical implications of all hypotheses under consideration.
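
The cube-root example makes the point concrete: the “uncertainty” dissolves under a one-line computation, with no new evidence arriving from the world. A quick illustrative check:

```python
# Logical uncertainty dissolves under computation, not observation:
root = round(74088 ** (1 / 3))  # float cube root, rounded to nearest integer
assert root ** 3 == 74088       # confirm the rounded answer is exact
print(root > 35)  # True -- the "probability" collapses to certainty
```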

Thus, my tentative resolution of the jellyfish/toothpaste paradox is that its paradoxical nature appears because we are not logically omniscient, and the situation into which the thought experiment places us is both empirically and logically intractable. It is as though our inner calculator had a stack overflow. We are used to probability judgments under empirical uncertainty but logical certainty — for example, the chance of getting tails on a coin flip. The introduction of severe logical uncertainty makes us reluctant to try to calculate any probabilities at all (we just don’t know where to start analyzing the situation) — hence the intuitive appeal of an evasive answer to the jellyfish/toothpaste draw.

Does this seem satisfying to you?

_________

[2] E.T. Jaynes tells the story of an objection to the US draft lottery; according to the plaintiff, the drawing was not “truly random” because the bowl containing names had not been sufficiently mixed. Asks Jaynes, “To whom is it unfair?”

1. As in the described scenario the money I bet would be dream money, I'd bet everything I had in my dream pocket. I know it's a dream because the jellyfish the guy conjured from the sack was alive... ;-)

Cheers
Chris

2. The best bet is not to bet at all!

=

1. Unless the bet is a Dutch book! (See Elga's paper.)

2. Unless the bet is absolute.

Probability has led science terribly astray.
They are so lost in their own self-made uncertainty that only the light of truth will show them the Way. But where is that truth philosophy, have you found it yet?
Mankind has been waiting since Socrates and the boys, over 2000 years.
Truth anyone?

It's time to straighten out not only science but also the grey area of justice, philosophy, as well as the multitudes of the faithful.

Truth or absolute is much more simple than thought,
Perhaps One day you will allow me to show you the Way.
I'll bet the Universe on it!

=

3. My point above is that, the way the problems in the post are phrased, IMO the solutions are obvious. The toothpaste/jellyfish situation is impossible (neither empirical nor logical uncertainty), whereas in the Ellsberg paradox case the reasonable real-world assumption is your #3 - i.e. you have nothing to gain if the test is fair (both are 50%) but you lose if it's a con, so the rational choice is Urn 1. (In a con, the sum of probabilities actually is <1.)

If you rephrase the bearded guy scenario to something that is actually possible, it becomes a variant of Bilbo's riddle.

1. >...in the Ellsberg paradox case the reasonable real-world assumption is your #3 - i.e. you have nothing to gain if the test is fair (both are 50%) but you lose if it's a con, so the rational choice is Urn 1.

The thing that bugs me about the "it's a con!" answer is that you don't know *in which direction* it's a con - black or red. So the situation is more or less the same as the 50%-50% draw.

2. Obviously the con is always in the direction that makes you lose (*)

You wrote:
"Suppose we keep everything the same, but Tickets A and B pay out if a black ball is drawn. Symmetrically, Ticket A would still be preferred to Ticket B, since Ticket A is less “vague.” But then your inferred probability for “red from Urn B” must be less than “red from Urn A,” while at the same time, “black from Urn B” is less than “black from Urn A.” Since the probabilities from Urn A must sum to 100%, this means your probabilities from Urn B are summing to less than 100%."

But this is an error: The scenario where red pays off lies in a different probability space than the one where black pays off, and probabilities from different probability spaces don't have to add up to 1.

(* In principle, it is also possible that the con is in your favor. For example if you know that the people running the lottery have an interest in making you win, it would be rational for you to prefer urn B. But barring such prior knowledge, the scenario where the con is against you dominates)

4. An "empirical" approach to providing solutions to problems like this is to write a simulation program (in this case of random bearded men with a sack), aka a Monte Carlo Method. This is done, for example, for the Monty Hall Paradox. Then see what data comes out. To be cool, use a real random-number generator (e.g. www.fourmilab.ch/hotbits/).
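
A minimal sketch of the kind of Monte Carlo simulation this commenter has in mind, applied to the Monty Hall problem (the one scenario here whose rules we actually know); the function name and trial count are illustrative, and the standard pseudo-random generator stands in for a "real" one:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of the stick/switch strategies by simulation."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```

As the replies note, nothing analogous can be written for the sack or the urns: without knowing the generating process, there is no model to simulate.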

1. You can't write a simulation unless you know the rules (logical certainty). In neither the sack nor the urn examples do we know the rules.

You might say that we know the rules in the urn example, but Ian didn't specify how the population of red or black balls is chosen - he just said there is some number of red and some number of black. We don't get to assume that any possible mix is equally likely.

2. You can't write a simulation unless you know the rules (logical certainty).

In both the sack and urn examples we are ignorant of the rules.

If you're assuming that any mix of red and black balls is equally likely, then that's an assumption that is not specified in the problem. The process by which the mix is determined is unknown.

5. Ian,

Very good article. I've run into the Ellsberg Paradox before. The jellyfish example was new to me and I think you summed it up nicely. I particularly liked the comment, "Why is he even performing this draw? He’s probably trying to trick me.” -- which never gets discussed much during proposition bets. Much of the Ellsberg paradox has to do with people preferring a "sure thing" over uncertainty (e.g. preferring $5 for certain over a 50-50 chance at $10). I think this is related to trust. If you were to offer people a $20 bill in exchange for a $10 bill, I think most people would at least hesitate -- thinking that you must be up to something (possibly giving them a counterfeit bill, or just trying to get them to bring out their wallet so that you can grab it and run away). But the "trust" issue is never discussed much.

Also not discussed much is the utility of the dollar/ruination: would you be willing to gamble with Bill Gates at $100,000 per trial -- even if Gates gave you favorable odds?

Regarding your note #2 ("to whom is it unfair?"), wouldn't the answer depend upon the method used to load the names? For example, if the bowl was loaded in alphabetical order of last name with the Zs on top, and was insufficiently shaken, then it would be unfair to people with last names beginning with "Z".

But I get the point: if you don't know anything about the bias (in which direction the probability was tilted), then a randomly directed bias should be considered as "fair" as an unbiased draw.

1. >Much of the Ellsberg paradox has to do with people preferring a "sure thing" vs uncertainty (e.g. preferring \$5 for certain over a 50-50 chance at \$10). I think this is related to trust.

Good point. You might be right that it's related, in part, to trust - but if you read Kahneman you'll see that risk-aversion generalizes to essentially all human decision making, whether or not it involves interactions with others (and hence trust). For example, I am very risk averse in my fiction reading - I usually stick to just a few good authors because it annoys me so much when I spend time on a mediocre book.

However, risk aversion may be a matter of a heuristic that evolved for reasons to do with trust, and was then more universally applied... I'd have to think about it more.

>Also not discussed much is the utility of the dollar/ruination: would you be willing to gamble with Bill Gates at $100,000 per trial -- even if Gates gave you favorable odds?

Right, absolutely. As you say, accounting for the marginal utility of money solves that problem. On the other hand, I do think that humans are inherently just too risk-averse, even accounting for marginal utility of dollars. Kahneman gives the example of a bet on a coin flip of $100 at 2:1 odds, which most people pass up (unless it is repeated). He recommends the technique of reframing each bet as one item in a "portfolio" of investments, which works well for me.

2. Ian ~ When you say "read Kahneman", are you talking about the book "Thinking Fast and Slow", or some paper?

Haven't had a chance yet to read Frank Ramsey -- but it's on my list!

BTW, thanks for mentioning "Laplace’s rule of succession". I had been trying (unsuccessfully) to remember that in connection with another thread regarding extinction/how long things last -- it is what this old feeble brain was trying to recall!

3. I think this is all in "Thinking Fast and Slow," although I'm sure there are other papers containing it as well.

6. Hi Ian,

Yes, this seems satisfying!

I enjoyed your article and agreed with pretty much everything you said. Before reading your alternatives on how to resolve the paradox, I had pretty much decided on your explanation [3] so was disappointed to see that your inclusion of it had taken away any opportunity for me to add anything to the conversation.

If I had to nitpick, I'd say that explanation [3] is just a special case of explanation [2]. You don't know the probabilities for Urn B because you don't know the process by which the balls were chosen, and it is possible that this process was designed to cheat you.

1. Thanks, glad this approach works for you!

>If I had to nitpick, I'd say that explanation [3] is just a special case of explanation [2]. You don't know the probabilities for Urn B because you don't know the process by which the balls were chosen, and it is possible that this process was designed to cheat you.

Ah, but in the case of explanation [2] (without cheating), your ignorance about the process is *symmetrical* with respect to how many balls there are of each colour.

To see this, suppose I outright *told* you that Urn B was filled with either 100% black or 100% red. It still seems to me that Ticket A is exactly as good as Ticket B.

2. >I outright *told* you that Urn B was filled with either 100% black or 100% red. It still seems to me that Ticket A is exactly as good as Ticket B.<

My point is that without knowing the underlying probabilities of either alternative, you are not justified in assuming that either is equally likely. It may be that black balls are more common than red ones or that the distribution is skewed for other reasons.

Suppose I outright told you that I was holding behind my back either a baby velociraptor or a stuffed toy, and offered to let you bet me that it was the velociraptor?

[3] is a special case of [2] because we don't have the option to choose whether we bet on red or black. Instead we are offered the chance to bet on a red ball. Since by [2] we don't know the underlying rules or probabilities, it is likely that the urn is much more likely to contain black balls because that would be of benefit to the party who set up the bet [3].

3. @ D. Me:

"If you call a tail a leg, how many legs has a dog? Five? No, calling a tail a leg don't make it a leg." -- Abraham Lincoln

I never liked that quote from L. because we are ASSUMING for purposes of argument that a tail can be defined as a leg.

Likewise, in probability problems we are told things. Can you trust them? In mathematical problems, you do -- you assume they are golden. In real life, it is another matter.

So, I think Ian is saying SUPPOSE we can assume with absolute confidence that, for purposes of argument, Urn B DOES contain 100% Red or Black.
Ian is not saying a tail IS a leg, he is asking what would result IF we knew it was a leg.

As you indicate, what someone tells you in real life may or may not be true. That's why I raised the "trust" issue. We need to distinguish between assumptions to be regarded as absolutely true for purposes of argument, and statements made that may or may not be true.

4. Tom:

I think you misunderstand my point.

Ian did not say 100% red or 100% black, with equal probability. He said 100% red or 100% black (probability undefined).

You can also assume I'm telling the truth in the velociraptor/stuffed toy analogy. I'm unlikely to be holding a velociraptor, but as long as I have a stuffed toy I didn't lie. In that analogy, we might assume that velociraptor is unlikely because we intuit that the prior probability is low. In contrast, the prior probability for red or black balls is assumed to be equal by Ian, but this may not be the case, especially if the urn-preparer wants to trick us.

5. D. Me: Point taken. I generally assume equal probability, but that was not explicitly stated.

6. Your assumption is pretty reasonable to be honest. I was nitpicking!

7. Most of the choices we (as individuals) make are single events - not choices that will be repeated over and over in identical circumstances. Yet probability-talk assumes that some given event, like rolling dice, will be repeated over and over. If so, then long-run frequencies are a good guide as to what the outcome will more often be. But if I'm only going to make a given choice ONCE, then why does what would happen IF I made similar choices often have any relevance at all?

8. >Yet probability-talk assumes that some given event, like rolling dice, will be repeated over and over.

That is true according to the frequentist camp in philosophy of probability, but not according to e.g., the Bayesian camp (to which I am sympathetic). Most of us are perfectly happy to talk about the probabilities of one-off events.

1. Yeah, I use the Kelly criterion when I make bets, although it makes me nervous - there is no risk aversion built into it!

>So, though I haven't worked it out, I suspect the uncertainty would overpower the edge, and the Kelly calculation would advise staying out of the game involving the second urn.

I don't think so. Remember that as you are playing the game, you should be continuously updating your "edge" - aka your estimate of the black/red ratio in Urn B (the easy way to do this is Laplace's rule of succession, mentioned above). So you *start* with a uniform prior that gives P(ball_1_red)=0.5, but if your first 5 balls are black, then by LROS, P(ball_6_red)=(s+1)/(n+2)=(0+1)/(5+2)=1/7.

Translating that into the language of the Kelly criterion, your "edge" should be changing with every draw. Realistically, after drawing even 1 black ball, Urn A becomes the better bet.
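
The updating scheme described above can be sketched in a few lines (again assuming the uniform prior that Laplace's rule of succession presupposes):

```python
def lros(successes, trials):
    """Laplace's rule of succession, assuming a uniform prior."""
    return (successes + 1) / (trials + 2)

# Betting on red while black balls keep coming out of Urn B:
for n in range(6):
    print(f"after {n} black draws, P(next is red) = {lros(0, n):.3f}")
# ...ending at (0 + 1) / (5 + 2) = 1/7, matching the comment above.
```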

2. First of all, I think that the Kelly strategy does embed risk aversion, in the sense that placing larger-than-Kelly bets is counterproductive to long-term returns. Even placing Kelly bets can result in very wild swings in bankroll, although the expected growth rate is maximized. Fractional Kelly bets are better (depending on how you quantify "better"), so the Kelly criterion should be seen as an upper bound.

The key in my thinking is that the second urn is akin to a volatile investment like the stock market, and the first is akin to a stable investment like a certificate of deposit. An urn having less than 50% red balls is analogous to a bear market, and an urn having more than 50% red is analogous to a bull market. When you buy stocks, you aren't certain which type of market lies ahead, just as you don't know how many red balls are in the second urn.

The connection to Kelly strategy is that, if I recall correctly, a more volatile outlook encourages betting a smaller fraction of one's bankroll. The generalized form of Kelly strategy is to maximize the expectation of the logarithm of the outcome (instead of the expectation of the outcome), because successive return ratios are multiplied rather than added (e.g., gain 20%, then 20% on top of that, compounds to 44%: 1.2 * 1.2 = 1.44). But the closer a ratio is to zero, the more quickly the logarithm grows in the negative direction (gaining 20% then losing 20% is better than gaining 50% then losing 50%). It's asymmetric. In the Kelly strategy for the second urn, the terms representing fewer red balls (more expected losses) will have a more pronounced effect than the corresponding gain terms (more red balls).
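
The asymmetry claimed here is easy to verify with bare arithmetic:

```python
import math

# Gaining 20% then losing 20% vs. gaining 50% then losing 50%:
mild = 1.2 * 0.8  # ends near 96% of bankroll -- a small net loss
wild = 1.5 * 0.5  # ends at 75% of bankroll -- a much bigger net loss
print(mild, wild)

# In log space, each down-leg outweighs its matching up-leg:
print(math.log(1.2) + math.log(0.8))  # slightly negative
print(math.log(1.5) + math.log(0.5))  # much more negative
```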

Consequently, for a bet on the second urn to look favorable, higher payoff odds must be offered than for the first urn.

I understand your point about updating the edge as you go. Strictly speaking, you wouldn't apply Kelly strategy to a single bet. However, I don't think that diminishes the fact that the volatility makes urn B a less attractive bet on the first play -- unless the payoff is higher than urn A's. And the problem as stated was about placing a single bet, not evaluating the long-term prospects over repeated bets.

3. On the other hand, given a single bet, there are only two possible outcomes: lose your bet, or win the payoff amount. A choice of urn A or urn B is equally likely to produce a given outcome on a single draw. On reconsideration, my argument only applies to this scenario: You are buying N tickets (N > 1) for either Urn A or Urn B, and the N tickets do not apply to a single draw from that urn, they apply to a consecutive series of draws (with replacement). You must buy all N tickets prior to the first draw.

In this scenario, Urn A is unlikely to produce N red balls in a row, or N black balls in a row. If N is large enough (say, N=20), the probability of either of those events is much less than 1%. But Urn B has at least a 1% chance of producing the all-red outcome, and at least a 1% chance of producing the all-black outcome. So Urn B is more volatile in this scenario.
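
Under the (unstated) assumption of a uniform prior over Urn B's compositions, this volatility claim checks out numerically:

```python
N = 20  # number of consecutive draws, with replacement

# Urn A: a fair 50-50 urn.
p_same_a = 2 * 0.5 ** N  # P(all red) + P(all black)

# Urn B: ASSUMING a uniform prior over the 101 compositions (0..100 red).
p_allred_b = sum((r / 100) ** N for r in range(101)) / 101
p_same_b = 2 * p_allred_b  # all-black is symmetric under this prior

print(p_same_a)  # ~1.9e-06, far below 1%
print(p_same_b)  # roughly 0.1, far above 1%
```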

10. Ian,

> The thing that bugs me about the "it's a con!" answer is that you don't know *in which direction* it's a con - black or red. So the situation is more or less the same as the 50%-50% draw.

the point about "it's a con" is that you don't have to know - sleight of hand will always ensure that it is not in your favor. That's the whole point of a con. In 3-Card Monte you always lose, no matter which card you pick, and that's why the sum of probabilities is less than 100%.

In other words, what you seem to think of as situation 3 is not a con; it's a flawed setup (a different distribution than promised). Why would a con artist cheat in your favor?

Once you deal with people, it is a rational choice to distrust their motives. A bet with undisclosed odds triggers the BS filter, and if you have ever seen an Apollo Robbins act, you know what sleight of hand can do. (If you haven't, google him, it's amazing.) No, Urn 1 is definitely the rational choice.

Cheers
Christian

1. >the point about "it's a con" is that you don't have to know - sleight of hand will always ensure that it is not in your favor. That's the whole point of a con. In 3-Card Monte you always lose, no matter which card you pick, and that's why the sum of probabilities is less than 100%.

Good point.

>No, Urn 1 is definitely the rational choice.

In real life, yeah, I think you're right. But if we stipulate away the possibility of cheating, so that in the constructed "logical environment," the ratio of balls (whatever it is) has nothing to do with which bet is being offered to you, then I'm still indifferent between the Urns.

2. I'll check out the Apollo Robbins acts I can find online... looks interesting!

3. > In real life, yeah, I think you're right. But if we stipulate away the possibility of cheating, so that in the constructed "logical environment," the ratio of balls (whatever it is) has nothing to do with which bet is being offered to you, then I'm still indifferent between the Urns.

No argument in the logical environment; and if you got a sample of readers of LW or similar to do the experiment, you would probably get an indifferent result. My point was more that the vast majority of people won't think about it in logical terms, but rather as a real-life setting, which is why it's not a paradox. (As opposed to most of the experiments Kahneman refers to.)
I'd wager ;-) that even here on RS a majority would choose Urn A...

Chris

P.S. One of the things I sometimes do in the course of my job is design questionnaires. Asking questions about the Ellsberg situation is something I would try to avoid as far as possible - the answers you get will be extremely sensitive to wording.

11. P.S. Sorry, I missed one of your answers. Even if you "told" me - there is a non-zero chance that you are lying and will do a switcheroo after I choose, and the probability that the switch is against me is higher than the probability that it is in my favor.

12. I agree with your lack of preference in Ellsberg's paradox, Ian. I disagree with the commenter who said that it's wrong to assume the odds are the same; that seems like a perfect example of an intelligent Bayesian prior.

For the stranger with the toothpaste, I don't think there are any sensible priors to use. Actually, I've got to weaken that statement. Statistically, the odds are greater that toothpaste was chosen twice because it was a more likely choice, more highly represented in the population. A blind prior (only considering results, not considering any real physical and psychological drivers of the situation) would favor toothpaste. A smarter prior, consisting of everything in the universe that could possibly (or even likely) be in the man's bag, would give us too big a field for a meaningful selection. I'm comfortable refusing to justify a choice.

1. >I don't think there are any sensible priors to use.

I agree, none that we know of. Conceivably, the bearded man's best friend might know more about why he's offering such weird bets, and have better priors.

13. In the Ellsberg case you can (arguably) apply a principle of indifference. Even if the urn is biased, you've no reason to think it's more likely to be biased in favour of black than red.

In the jellyfish case you're probably not indifferent between all the possible things the stranger may draw out of his bag. Given human nature and the nature of physical reality, some possibilities seem more likely than others. Perhaps more important, there isn't a well-defined set of possibilities over which we can have a probability distribution.

I think Elga is mistaken. Arguing against "the sequence proposal", he argues as if we can make three judgements of rationality in a two-bet scenario: about bet A, about bet B, and about the combined sequence. But Sally is making only two decisions, so there are only two judgements of rationality to be made. Furthermore, if Sally is told about both bets in advance (this is unclear) she is effectively making only one decision. She can decide about both bets at the same time, since nothing will change after the first bet, and we can simply judge the rationality of that one (combined) strategy.

A better version of SEQUENCE (call it SEQUENCE-2) would be this: It is rationally impermissible for Sally to reject both bets if she is told in advance that she will be offered both bets, even though it may be rationally permissible for her to reject each bet taken in isolation. There are three judgements of rationality to be made here, because the individual bets are being judged in isolation, not as part of a sequence. But Elga's argument is not applicable to SEQUENCE-2.

1. Yeah, I agree with your criticism of Elga here.

14. It's interesting how our intuitions change if we slightly alter the scenario. Suppose that the first thing you pull out of the bearded man's sack is some illegal substance like a packet of methamphetamine. The second is a jellyfish and the third is a smaller packet of methamphetamine. Now, what is your level of credence that the next object pulled out of the bag is a packet of methamphetamine?

Pretty high. The guy is carrying meth. The jellyfish is just for the police to pet.

The only difference here, compared to the original scenario, is that you're doing the pulling instead of the bearded man, so you know your own motives; and you have some idea of the situations in which you might find illegal substances and jellyfish in a bearded man's sack, whereas you have no idea in what situations you'd find toothpaste and jellyfish there. So our search space is narrowed down.

15. The toothpaste vs. jellyfish bet is fairly interesting.
What will people do if we rephrase the problem as:
"the guy pulls [...] how much will you pay to see the next object?"
In this case it is equivalent to a lost bet either way, but people will probably pay a few bucks for the sake of curiosity.

If you compare the answers of two samples, i) a bet sample and ii) a curiosity sample, the result will probably be amusing. My guess is that many more people in the second group will be happy to open their wallets even if the economic reward is clearly less.

Posing the problem in the form of a bet or in the form of a price of curiosity will probably shift the attention from mere economic reward to personal satisfaction, with the latter generally more highly prized than an uncertain bet.
