Comments on Rationally Speaking: Botanizing Probability

Obviously the con is always in the direction that makes you lose. (*)

You wrote:
"Suppose we keep everything the same, but Tickets A and B pay out if a black ball is drawn. Symmetrically, Ticket A would still be preferred to Ticket B, since Ticket A is less "vague." But then your inferred probability for "red from Urn B" must be less than "red from Urn A," while at the same time, "black from Urn B" is less than "black from Urn A." Since the probabilities from Urn A must sum to 100%, this means your probabilities from Urn B are summing to less than 100%."

But this is an error: the scenario where red pays off lies in a different probability space than the one where black pays off, and probabilities from different probability spaces don't have to add up to 1.

(* In principle, it is also possible that the con is in your favor. For example, if you know that the people running the lottery have an interest in making you win, it would be rational for you to prefer Urn B. But barring such prior knowledge, the scenario where the con is against you dominates.)

-- Anonymous, 2013-08-12 10:35

The toothpaste vs. jellyfish bet is fairly interesting. What will people do if we rephrase the problem as: "the guy pulls [...]
how much will you pay to see the next object?"
In this case it is equivalent to a lost bet either way, but people will probably pay a few bucks for the sake of curiosity.

If you compare the answers from two samples, (i) a bet sample and (ii) a curiosity sample, the result will probably be amusing. My guess is that many more people in the second group will be happy to open their wallets, even though the economic reward is clearly smaller.

Posing the problem as a bet versus as the price of curiosity will probably shift the attention from mere economic reward to personal satisfaction, with the latter generally prized more highly than an uncertain bet.

-- Davide Imperati, 2013-06-18 09:33

It's interesting how our intuitions change if we slightly alter the scenario. Suppose that the first thing you pull out of the bearded man's sack is some illegal substance like a packet of methamphetamine. The second is a jellyfish, and the third is a smaller packet of methamphetamine. Now, what is your level of credence that the next object pulled out of the bag is a packet of methamphetamine?

Pretty high. The guy is carrying meth. The jellyfish is just for the police to pet.

The only difference here from the original scenario is that you're doing the pulling instead of the bearded man, so you know your own motives, and you have some idea of the situations in which you might find illegal substances and jellyfish in a bearded man's sack, while you have no idea in what situations you would find toothpaste and jellyfish in a bearded man's sack.
So our search space is narrowed down.

-- Anonymous, 2013-02-28 05:03

On the other hand, given a single bet, there are only two possible outcomes: lose your bet, or win the payoff amount. A choice of Urn A or Urn B is equally likely to produce a given outcome on a single draw. On reconsideration, my argument only applies to this scenario: you are buying N tickets (N > 1) for either Urn A or Urn B, and the N tickets do not apply to a single draw from that urn; they apply to a consecutive series of draws (with replacement). You must buy all N tickets prior to the first draw.

In this scenario, Urn A is unlikely to produce N red balls in a row, or N black balls in a row. If N is large enough (say, N = 20), the probability of either of those events is much less than 1%. But Urn B has *at least* a 1% chance of producing the all-red outcome, and *at least* a 1% chance of producing the all-black outcome. So Urn B is more volatile in this scenario.

-- Richard, 2013-02-27 20:03

> In real life, yeah, I think you're right. But if we stipulate away the possibility of cheating, so that in the constructed "logical environment," the ratio of balls (whatever it is) has nothing to do with which bet is being offered to you, then I'm still indifferent between the Urns.

No argument in the logical environment; and if you got a sample of readers of LW or similar to do the experiment, you would probably get an indifferent result.
My point was more that the vast majority of people won't think about it in logical terms, but rather as a real-life setting, which is why it's not a paradox. (As opposed to most of the experiments Kahneman refers to.)
I'd wager ;-) that even here on RS a majority would choose Urn A...

Chris

P.S. One of the things I sometimes do in the course of my job is design questionnaires. Asking questions on the Ellsberg situation is something I would try to avoid as far as possible - the answers you get will be extremely sensitive to wording.

-- chbieck, 2013-02-27 04:41

First of all, I think that Kelly strategy does embed risk, in the sense that placing larger-than-Kelly bets is counterproductive to long-term returns. Even placing Kelly bets can result in very wild swings in bankroll, although expected returns are maximized. Fractional Kelly bets are better (depending on how you quantify "better"), so the Kelly criterion should be seen as an upper bound.

The key in my thinking is that the second urn is akin to a volatile investment like the stock market, and the first is akin to a stable investment like a certificate of deposit. An urn having less than 50% red balls is analogous to a bear market, and an urn having more than 50% red is analogous to a bull market. When you buy stocks, you aren't certain which type of market lies ahead, just as you don't know how many red balls are in the second urn.

The connection to Kelly strategy is that, if I recall correctly, a more volatile outlook encourages betting a smaller fraction of one's bankroll.
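The link between volatility and smaller bets comes from compounding: multiplicative returns punish swings. A minimal numeric sketch (my own illustration, not part of the comment):

```python
# Two return sequences with the same arithmetic mean (0% per period),
# compounded multiplicatively. The more volatile one ends up lower --
# the "volatility drag" that makes larger-than-Kelly bets counterproductive.
low_vol = 1.10 * 0.90   # +10% then -10% -> 0.99 (a 1% overall loss)
high_vol = 1.40 * 0.60  # +40% then -40% -> 0.84 (a 16% overall loss)

print(f"low volatility:  {low_vol:.2f}")
print(f"high volatility: {high_vol:.2f}")
```

Both sequences average to zero per-period return, yet both lose money overall, and the wilder one loses far more.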
The generalized form of Kelly strategy is to maximize the expectation of the logarithm of the outcome (instead of the expectation of the outcome), because successive return ratios are multiplied rather than added (e.g., gain 20%, then 20% on top of that, and you compound to 44%: 1.2 * 1.2 = 1.44). But the closer a ratio is to zero, the more quickly the logarithm grows in the negative direction (gaining 20% then losing 20% is better than gaining 50% then losing 50%). It's asymmetric. In the Kelly strategy for the second urn, the terms representing fewer red balls (more expected losses) will have a more pronounced effect than the corresponding gain terms (more red balls).

Consequently, for a bet on the second urn to look favorable, higher payoff odds must be offered than for the first urn.

I understand your point about updating the edge as you go. Strictly speaking, you wouldn't apply Kelly strategy to a single bet. However, I don't think that diminishes the fact that the volatility makes Urn B a less attractive bet on the first play -- unless the payoff is higher than Urn A's. And the problem as stated was about placing a single bet, not evaluating the long-term prospects over repeated bets.

-- Richard, 2013-02-27 01:20

I'll check out the Apollo Robbins acts I can find online...
looks interesting!

-- ianpollock, 2013-02-26 18:43

Yeah, I agree with your criticism of Elga here.

-- ianpollock, 2013-02-26 18:35

> I don't think there are any sensible priors to use.

I agree, none that we know of. Conceivably, the bearded man's best friend might know more about why he's offering such weird bets, and have better priors.

-- ianpollock, 2013-02-26 18:34

> the point about "it's a con" is that you don't have to know - sleight of hand will always ensure that it is not in your favor. That's the whole point of a con. In 3-Card Monte you always lose, no matter which card you pick, and that's why the sum of probabilities is less than 100%.

Good point.

> No, Urn 1 is definitely the rational choice.

In real life, yeah, I think you're right.
But if we stipulate away the possibility of cheating, so that in the constructed "logical environment" the ratio of balls (whatever it is) has nothing to do with which bet is being offered to you, then I'm still indifferent between the Urns.

-- ianpollock, 2013-02-26 18:30

Yeah, I use the Kelly criterion when I make bets, although it makes me nervous - there is no risk aversion built into it!

> So, though I haven't worked it out, I suspect the uncertainty would overpower the edge, and the Kelly calculation would advise staying out of the game involving the second urn.

I don't think so. Remember that as you are playing the game, you should be continuously updating your "edge" - a.k.a. your estimate of the black/red ratio in Urn B (the easy way to do this is Laplace's rule of succession, mentioned above). So you *start* with a uniform prior that gives P(ball_1_red) = 0.5, but if your first 5 balls are black, then by the rule of succession, P(ball_6_red) = (s+1)/(n+2) = (0+1)/(5+2) = 1/7.

Translating that into the language of the Kelly criterion, your "edge" should be changing with every draw.
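The rule-of-succession update in that comment can be written out in a few lines. A minimal sketch (my own illustration; `s` is the number of red balls seen in `n` draws):

```python
from fractions import Fraction

def rule_of_succession(s, n):
    """Laplace's rule of succession: P(next draw is red) = (s + 1) / (n + 2).
    Equivalent to a uniform prior over the unknown red fraction,
    updated on s reds observed in n draws."""
    return Fraction(s + 1, n + 2)

# No draws yet: the uniform prior gives 1/2.
print(rule_of_succession(0, 0))  # 1/2

# Five black balls in a row (0 reds in 5 draws), as in the comment:
print(rule_of_succession(0, 5))  # 1/7
```

Using Fraction keeps the arithmetic exact, so the result matches the comment's (0+1)/(5+2) = 1/7 without floating-point noise.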
Realistically, after drawing even 1 black ball, Urn A becomes the better bet.

-- ianpollock, 2013-02-26 18:27

I think this is all in "Thinking Fast and Slow," although I'm sure there are other papers containing it as well.

-- ianpollock, 2013-02-26 18:14

Your assumption is pretty reasonable, to be honest. I was nitpicking!

-- Disagreeable Me, 2013-02-26 15:46

D. Me: Point taken. I generally assume equal probability, but that was not explicitly stated.

-- Tom D., 2013-02-26 13:52

In the Ellsberg case you can (arguably) apply a principle of indifference. Even if the urn is biased, you've no reason to think it's more likely to be biased in favour of black than red.

In the octopus case you're probably not indifferent between all the possible things the stranger may draw out of his bag. Given human nature and the nature of physical reality, some possibilities seem more likely than others. Perhaps more important, there isn't a well-defined set of possibilities over which we can have a probability distribution.

I think Elga is mistaken.
Arguing against "the sequence proposal", he argues as if we can make three judgements of rationality in a two-bet scenario: about bet A, about bet B, and about the combined sequence. But Sally is making only two decisions, so there are only two judgements of rationality to be made. Furthermore, if Sally is told about both bets in advance (this is unclear), she is effectively making only one decision. She can decide about both bets at the same time, since nothing will change after the first bet, and we can simply judge the rationality of that one (combined) strategy.

A better version of SEQUENCE (call it SEQUENCE-2) would be this: it is rationally impermissible for Sally to reject both bets if she is told in advance that she will be offered both bets, even though it may be rationally permissible for her to reject each bet taken in isolation. There are three judgements of rationality to be made here, because the individual bets are being judged in isolation, not as part of a sequence. But Elga's argument is not applicable to SEQUENCE-2.

-- Richard Wein, 2013-02-26 12:30
Tom:

I think you misunderstand my point.

Ian did not say 100% red or 100% black, with equal probability. He said 100% red or 100% black (probability undefined).

You can also assume I'm telling the truth in the velociraptor/stuffed-toy analogy. I'm unlikely to be holding a velociraptor, but as long as I have a stuffed toy I didn't lie. In that analogy, we might assume that the velociraptor is unlikely because we intuit that the prior probability is low. In contrast, the prior probability for red or black balls is assumed to be equal by Ian, but this may not be the case, especially if the urn-preparer wants to trick us.

-- Disagreeable Me, 2013-02-26 11:32
@ D. Me:

"If you call a tail a leg, how many legs has a dog? Five? No, calling a tail a leg don't make it a leg." -- Abraham Lincoln

I never liked that quote from L. because we are ASSUMING, for purposes of argument, that a tail can be defined as a leg.

Likewise, in probability problems we are told things. Can you trust them? In mathematical problems, you do -- you assume they are golden. In real life, it is another matter.

So, I think Ian is saying SUPPOSE we can assume with absolute confidence that, for purposes of argument, Urn B DOES contain 100% red or black.
Ian is not saying a tail IS a leg; he is asking what would result IF we knew it was a leg.

As you indicate, what someone tells you in real life may or may not be true. That's why I raised the "trust" issue. We need to distinguish between assumptions to be regarded as absolutely true for purposes of argument, and statements made that may or may not be true.

-- Tom D., 2013-02-26 10:58

I agree with your lack of preference in Ellsberg's paradox, Ian. I disagree with the commenter who said that it's wrong to assume the odds are the same; that seems like a perfect example of an intelligent Bayesian prior.

For the stranger with the toothpaste, I don't think there are any sensible priors to use. Actually, I've got to weaken that statement. Statistically, the odds are greater that toothpaste was chosen twice because it was a more likely choice, more highly represented in the population. A blind prior (only considering results, not considering any real physical and psychological drivers of the situation) would favor toothpaste.
A smarter prior, consisting of everything in the universe that could possibly (or even likely) be in the man's bag, would give us too big a field for a meaningful selection. I'm comfortable refusing to justify a choice.

-- BubbaRich, 2013-02-26 10:31

> I outright *told* you that Urn B was filled with either 100% black or 100% red. It still seems to me that Ticket A is exactly as good as Ticket B.

My point is that without knowing the underlying probabilities of either alternative, you are not justified in assuming that the two are equally likely. It may be that black balls are more common than red ones, or that the distribution is skewed for other reasons.

Suppose I outright told you that I was holding behind my back either a baby velociraptor or a stuffed toy, and offered to let you bet me that it was the velociraptor?

[3] is a special case of [2] because we don't have the option to choose whether we bet on red or black. Instead we are offered the chance to bet on a red ball. Since by [2] we don't know the underlying rules or probabilities, the urn is quite likely to contain mostly black balls, because that would benefit the party who set up the bet [3].

-- Disagreeable Me, 2013-02-26 05:45

P.S. Sorry, I overlooked one of your answers.
Even if you "told" me - there is a non-zero chance that you are lying and will do a switcheroo after I choose, and the probability that the switch is against me is higher than the probability that it is in my favor.

-- chbieck, 2013-02-26 05:04
Ian,

> The thing that bugs me about the "it's a con!" answer is that you don't know *in which direction* it's a con - black or red. So the situation is more or less the same as the 50%-50% draw.

The point about "it's a con" is that you don't have to know - sleight of hand will always ensure that it is not in your favor. That's the whole point of a con. In 3-Card Monte you always lose, no matter which card you pick, and that's why the sum of probabilities is less than 100%.

In other words, what you seem to think of as situation 3 is not a con, it's a flawed setup. (A different distribution than promised.) Why would a con artist cheat in your favor?

Once you deal with people, it is a rational choice to distrust their motives. A bet with undisclosed odds triggers the BS filter, and if you have ever seen an Apollo Robbins act, you know what sleight of hand can do. (If you haven't, google him; it's amazing.) No, Urn 1 is definitely the rational choice.

Cheers
Christian

-- chbieck, 2013-02-26 05:00

I'm not sure I agree about the Ellsberg "paradox" being a paradox. The typical behavior might turn out to be rational. In 1956, J.L. Kelly developed the Kelly criterion (http://en.wikipedia.org/wiki/Kelly_criterion), which provided a way to calculate the percentage of your bankroll that you should bet to maximize return in the long run without going bankrupt (it assumes infinitely divisible currency, as a technicality).
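For a two-outcome bet, the standard Kelly fraction is f* = (b*p - q) / b, where b is the net odds (profit per dollar staked), p the probability of winning, and q = 1 - p. A sketch of that arithmetic (my own; the $100 payout is an assumption inferred from the "$1 edge" on a $49 ticket at even chances mentioned below):

```python
def kelly_fraction(p, price, payout):
    """Kelly bet fraction f* = (b*p - q) / b for a two-outcome bet.
    b is the net odds: profit per dollar staked if the ticket wins."""
    b = (payout - price) / price  # $49 ticket, $100 payout -> b = 51/49
    q = 1.0 - p
    return (b * p - q) / b

# Urn A: 50% red for certain; $49 ticket, $100 payout (assumed figures).
f = kelly_fraction(0.5, 49, 100)
print(f"Kelly fraction: {f:.4f}")  # ~0.0196: stake about 2% of bankroll

# At $50 per ticket the edge vanishes and Kelly says bet nothing.
print(kelly_fraction(0.5, 50, 100))  # 0.0
```

Note how thin the $1 edge is: Kelly advises risking only about 1/51 of the bankroll per draw even with the known 50/50 urn.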
Say the tickets were selling for $49 each, which gives you a $1 edge. (If the edge is 0, there is no point in playing, and if the edge is negative, you should take the other side if that choice is available -- buying put options, for example, if you translate the problem to the stock market.) And say you can buy not just one, but as many tickets as you wish. The Kelly criterion would tell you how many tickets to buy, based on how much money you have, for the first urn. Playing that same percentage of your bankroll (adjusting the amount of the bet up or down as your bankroll rises and falls) will give you the maximum return over multiple plays. For the second urn, however, if the urn turns out to have more black than red balls, and if you play the game long enough, your bankroll will experience an average downward trend. So, though I haven't worked it out, I suspect the uncertainty would overpower the edge, and the Kelly calculation would advise staying out of the game involving the second urn.

-- Richard, 2013-02-25 23:48
Unless the bet is absolute.

Probability has led science terribly astray.
They are so lost in their own self-made uncertainty that only the light of truth will show them the Way. But where is that truth, philosophy? Have you found it yet?
Mankind has been waiting since Socrates and the boys, over 2000 years.
Truth, anyone?

It's time to straighten out not only science but also the grey area of justice, philosophy, as well as the multitudes of the faithful.

Truth, or the absolute, is much simpler than thought.
Perhaps one day you will allow me to show you the Way.
I'll bet the Universe on it!

==

-- MJA, 2013-02-25 22:37

> Yet probability-talk assumes that some given event, like rolling dice, will be repeated over and over.

That is true according to the frequentist camp in the philosophy of probability, but not according to, e.g., the Bayesian camp (to which I am sympathetic). Most of us are perfectly happy to talk about the probabilities of one-off events.

-- ianpollock, 2013-02-25 21:54

Most of the choices we (as individuals) make are single events - not choices that will be repeated over and over in identical circumstances. Yet probability-talk assumes that some given event, like rolling dice, will be repeated over and over. If so, then long-run frequencies are a good guide as to what the outcome will more often be.
But if I'm only going to make a given choice ONCE, then why does what would happen IF I often made similar choices have any relevance at all?

-- Phiwilli, 2013-02-25 20:21