Comments on Rationally Speaking: Newcomb's Paradox: An Argument for Irrationality

stenlis (2013-02-04):

You can try to identify the Nash equilibrium of the game. The Nash equilibrium is the outcome at which each player would be worse off (or at least no better off) if he changed his decision.

In this case, you choosing both boxes and the computer putting nothing into the closed box is the Nash equilibrium: had you chosen just the closed box, you'd have gotten nothing, and had the computer put the million into the closed box, its prediction would have failed. It is the only Nash equilibrium of the setup, so this would be the rational outcome.

However, you could rig the game in your favor: suppose you gave $1,001 to your local church with a contract saying you can keep this money if and only if I choose both boxes in the game. Then the computer would know you would lose by choosing both boxes (as it should know you wouldn't like to donate to a church) and thus would predict you would choose just the closed box, which is exactly what you would do, netting $1,000,000.
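A minimal sketch of the equilibrium check stenlis describes (Python, used here purely for illustration; the player's payoffs come from the puzzle, while scoring the computer 1 for a correct prediction and 0 otherwise is an assumption this sketch uses to encode "it wants to predict right"):

    from itertools import product

    def player_payoff(choice, prediction):
        closed = 1_000_000 if prediction == "fill" else 0
        opened = 1_000 if choice == "both" else 0
        return closed + opened

    def computer_payoff(choice, prediction):
        # Correct iff it filled the box exactly when you one-boxed.
        correct = (prediction == "fill") == (choice == "closed")
        return 1 if correct else 0

    for choice, prediction in product(("both", "closed"), ("fill", "empty")):
        # Nash condition: neither side gains by unilaterally deviating.
        player_ok = all(player_payoff(choice, prediction) >= player_payoff(c, prediction)
                        for c in ("both", "closed"))
        computer_ok = all(computer_payoff(choice, prediction) >= computer_payoff(choice, p)
                          for p in ("fill", "empty"))
        if player_ok and computer_ok:
            print("Nash equilibrium:", choice, "/", prediction)

Only the (both, empty) pair survives the check, matching the claim that it is the unique equilibrium of the one-shot game.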
Ian Pulsford (2010-10-29):

I think most of the posts here just show that their authors will never get the million, because they have rationally concluded that, statistically, taking the two boxes is the best option. The computer has already figured out that they are the type of person who would do this.

The only type of person with a real chance at the million is the sort who makes whimsically random choices without any rational basis. Then the computer is the one with a ~50% chance of giving away the million.

Anonymous (2010-08-19):

I'm working on a project to test responses to this problem. If you have an opinion and wouldn't mind, stop by http://www.surveymonkey.com/s/YQ3LSMM and tell me what you think. It's only 4 questions long!

edward kelly (2010-08-19):

it occurs to me it may be useful as a psychoanalytical tool, perhaps correlated to an intuitive imaginative sense or some such. this has occurred to me through Marios' work relating to the disconnected fields of philosophy and science, where perhaps a person of a more philosophical nature would assume the one box, and the more scientific the two boxes. In this case, the application of reasoning within differing personal contexts.

Another thought is that it is apparently easier to accept the concept of someone giving you a box of $$$ than to accept a near-perfect predictive computer. Hence, obviously, the two boxes. It seems I consider the two boxes the restrictive/idiot side of reasoning :)

I amusingly thought of another parallel example: the open box contains a life of self-serving pleasure; the closed box contains an afterlife of nothing or bliss; there is a God that controls the content of the closed box depending on whether you choose both boxes or just the closed box. The imperfect choices by God may require selling your soul :)

DFB (2010-08-18):

Newcomb's Paradox is not a paradox, it is a fallacy. The two lines of reasoning are:

Two-boxer:
1) The $1M is either in box B or it is not.
2) How I choose cannot change 1).
3) Choose both, as I can at least get the $1k, and I can possibly get the $1M on the off chance that the computer was wrong.

One-boxer:
1) The computer has near-perfect prediction skills.
2) My choice has been predicted to near-perfect accuracy.
3) Choose B, as predicted, and get the $1M. Get nothing on the off chance that the computer was wrong.

The problem with the two-boxer reasoning is that it involves an implicit assumption that there is an equal probability of the $1M being in the box vs. not in the box. But the statement of the computer's ability makes it clear that the probability of a correct prediction is much higher. If you run through the expected outcomes, it is only best to choose two boxes if the predictor's success rate is less than 50%. Why on earth would you think the success rate is less than 50% when the very statement of the situation is that its success rate is "near-perfect"?
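DFB's threshold claim is easy to check directly. A short sketch (the exact break-even accuracy of 50.05% follows from the puzzle's $1,000 and $1,000,000 payoffs and is not stated in the comment):

    # Expected winnings as a function of predictor accuracy p.
    def ev_one_box(p):
        return p * 1_000_000                   # $1M only if the predictor was right

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000     # $1M only if it was wrong

    for p in (0.4, 0.5, 0.5005, 0.6, 0.99):
        better = "one box" if ev_one_box(p) > ev_two_box(p) else "two boxes"
        print(f"accuracy {p:.2%}: take {better}")

    # Break-even: p * 1e6 = 1_000 + (1 - p) * 1e6, i.e.
    print(1_001_000 / 2_000_000)  # 0.5005

So "less than 50%" is approximately right; the exact tie sits at 50.05%.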
Julia Galef (2010-08-17):

RichardW, yes, the conflict between causality and correlation is at the heart of this problem. The two rules pretty much always recommend the same course of action, except here, which is why it's so disconcerting (at least to me! and lots of other people!).

To everyone: I think I should've picked a different example than the fisherman one, because it unnecessarily incorporated a selfish-vs.-altruistic tradeoff when really all I wanted to isolate was the suboptimal outcome that results from being unable to credibly pre-commit.

A friend of mine suggested this scenario instead: you're being held hostage, and the kidnapper would be willing to let you go if he could be certain you weren't going to turn him in to the police once you're free. You can't credibly make that promise, because you know that once you ARE free you have no reason to keep it. And he knows you know that. So you end up worse off.

Finally: if there's one lesson I learned from this post and its comments, it is not to assume ahead of time that everyone else will be using my same definition of "rational"! Duly noted, all.

Earlier in this comment thread I defined rational action as self-interest-maximizing, but in retrospect I think I was being unnecessarily restrictive; I should just have said that it's maximizing whatever you happen to value. So yes, if you value promise-keeping more than money, then it's not irrational to keep your promise. That's why it was a confusing choice of example on my part; I should've used the hostage example.

But the part of rational decision-making that I think really IS central to the definition is that you're picking the action that will lead to the best outcome conditional on everything outside your control at that point in time. Which is why it feels paradoxical to many people to imagine standing in front of those two boxes, knowing that their contents are outside your control now that they're already prepared, and that two-boxing seems like the best choice conditional on whatever is currently in the closed box.
edward kelly (2010-08-17):

this is great,

I started thinking about it being simplified into two options, a or b:
a gives you $1,000
b gives you $1,000,000

it seems clearer then :)

dwayne (2010-08-16):

1. Funny how your "real-life" example involves being stuck on a deserted island. Happens to me all the time.

2. I'm surprised that no one has mentioned that the fisherman is a total jackass. $1,000 to save your life?!? Club him with an oar and throw him overboard! (Then offer to save him if he pays you $1,000.)

3. Also, count me in with the one-boxers. If this is a thought experiment, then the computer is magic, and it can do what it says. Don't f*** with it. In real life, two boxes all the way. Especially if Microsoft had anything to do with it.

Adrian (2010-08-16, 13:26):

Judging by the responses, the "paradox" appears to be the conflict between the question that's asked, involving a magic computer, and another, different question that the reader is making up on the fly without the magic computer. That's not a paradox, that's just poor analysis.

To those people who imagine the computer can't read thoughts and want to answer a different question: you aren't going far enough! If the computer isn't perfect, you MUST state how accurate you imagine the computer really is. If your new computer is even 51% accurate, taking both boxes is a LOSING strategy. So by all means, invent your own question, but tell us what new assumptions you're using. If you say "it's impossible," you may be right about that point, but even with revised abilities your analysis still fails. Taking just the closed box can still be the best strategy!
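Adrian's 51% figure can be sanity-checked by simulation. A sketch, under the assumption that the predictor independently guesses your choice correctly with probability 0.51:

    import random

    def average_winnings(choice, accuracy=0.51, trials=200_000):
        total = 0
        for _ in range(trials):
            # With probability `accuracy` the prediction matches the choice.
            prediction = choice if random.random() < accuracy else (
                "two" if choice == "one" else "one")
            total += 1_000_000 if prediction == "one" else 0
            total += 1_000 if choice == "two" else 0
        return total / trials

    print(average_winnings("one"))  # roughly 510,000
    print(average_winnings("two"))  # roughly 491,000: two-boxing loses even at 51%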
RixiM (2010-08-16, 13:04):

Computers are programmed by humans, not gods. As a programmer I would always take both boxes; it's a no-brainer, really. I do not feel a need to make a particularly compelling argument. OneDayMore quickly gets to the meat of the problem.

Richard Wein (2010-08-16, 07:24):

P.S.

1. The one-box argument I gave above isn't what we would usually recognise as an argument. It merely observes a correlation; it doesn't draw any conclusion from it. It's drawing conclusions that's problematic here. Nevertheless, I believe the mere observation of this correlation would incline me to one-box. If we saw lots of other people playing the game, with all the one-boxers winning big bucks and all the two-boxers failing to do so, I'm pretty sure almost everyone would start one-boxing.

2. The two-box argument faces up to the challenge of making what we would recognise as an argument. But the argument is a failure.

I think it's been said that there are seemingly good arguments for both alternatives. My conclusion is that there isn't a good argument for either alternative. But if we put aside arguments and just allow our instincts to take over, I think our basic inclination to follow observed correlations will lead us to one-box, especially once we've seen the success of previous one-boxing.

Richard Wein (2010-08-16, 06:56):

It seems to me that the difference between the one-box and two-box arguments depends on a distinction between causality and correlation.

- The two-boxer says that the contents of the box are already determined, so my choice can have no causal effect on the contents of the box, and therefore they'll be the same whatever I do.

- The one-boxer just looks at correlation. There are only two possibilities:
A. The closed box contains $1M and I take that one box.
B. The closed box is empty and I take both boxes.
So I will only have the $1M if I one-box. One-boxing doesn't _cause_ me to get the $1M; the events are merely correlated.

I think the two-boxer is saying something either wrong or meaningless when she says the box contents will be the same "whatever I do" or "either way I'm better off taking both boxes" (as Julia put it). Which two "ways" is Julia thinking of? There are only two possible sequences of events (A and B above), and she's better off with sequence A (one-box). If she two-boxes she _will_ be worse off (though that's not caused by her two-boxing).

I think the two-boxer is following a habitual causal way of thinking which is usually effective, but which leads her to make incorrect or meaningless statements in this case.
Dan (2010-08-16, 01:23):

I think the decision hinges on your belief that a computer can really predict people's choices. When I heard it said that there is a computer that can predict what choice specific people will make, on some level I responded, "Yeah, whatever." People are used to being told BS on a regular basis, and I think you sometimes make an immediate gut-level acceptance or rejection of information. For me, the ability of the computer to predict people's choices sounded implausible. I think this made me immediately assume that the two boxes were the best choice (in addition to the fact that the choice had already been made). However, if I really believed the computer could actually predict what I would do, then I think I would have chosen just the closed box. The computer sounded a little too magic to me and so didn't fit in my worldview.

Lyndon Page (2010-08-15, 18:16):

Sorry, my last post was grammatically bad at the end.

Julia hints at why the two-box option is worse, but she still thinks that her initial reaction makes sense and thus that the problem is a paradox. She says:

"Regardless of what the computer predicted I'd do, it's a done deal, and either way I'm better off taking both boxes. Those silly one-boxers must be succumbing to some kind of magical thinking, I figured, imagining that their decision now can affect the outcome of a decision that happened in the past."

That line of reasoning does not begin to make sense to me. I assumed (and still do) that the computer is predicting our reasoning processes all the way through the final choice being made; such predicting ability is what makes this computer special. If that is the case, and our final decision, made after we learn the ability of this computer, will be to take the one box, then there will be a million dollars in it. Why would we take the two boxes if we have faith in the predicting power of the computer, which was given in the setup of the problem?

This paradox is similar to Kavka's toxin puzzle, which is also interesting and illuminating about intentionality. In the toxin puzzle I accept the viewpoint that one has to not only form an intention to drink the toxin but must also carry out the drinking of the toxin in order to gain. Newcomb's problem seems the same, but without any great harm in simply taking the one box and maximizing your reward.
Aaron Shure (2010-08-15, 15:22):

ON DETERMINISM (which I don't think is the point of Newcomb's Paradox, but which I think is its flaw; it's supposed to be about rationality, but it's really about our religious beliefs in determinism):

Julia, both you and Massimo seem much more comfortable with determinism than I think a rational skeptic ought to be. If not identical arguments, then at least very, very similar ones to those brought to bear on Intelligent Design and the Anthropic Principle can be brought against determinism. Why do you employ the former but not the latter?

William Egginton's "code of codes" argument is compelling:

"The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible."

Now, allow me to stoop to a thought experiment of my own.

ONEDAYMORE'S PARADOX: There is a computer that can know what Galen Strawson calls "the way you are." It knows exactly what choice you will make in whatever box-opening game you want to contrive. The computer then tells you to do the opposite of the choice you would have made. It's so good at understanding you that it knows exactly how to incentivize you to make the non-predetermined choice. Has the computer defeated determinism? How is this computer not God? How is this computer different from the one Newcomb proposes?
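The self-defeating step in ONEDAYMORE'S PARADOX can be made concrete: no computer can announce its prediction to an agent it has incentivized to contradict it. A minimal sketch (the function name contrarian_agent is hypothetical):

    def contrarian_agent(announced_prediction):
        # Does the opposite of whatever the computer announces.
        return "two boxes" if announced_prediction == "one box" else "one box"

    for announced in ("one box", "two boxes"):
        actual = contrarian_agent(announced)
        print(announced, "->", actual, "| prediction correct:", announced == actual)
    # Both lines report False: no announced prediction survives an agent paid
    # to contradict it. Newcomb's computer dodges this only because it never
    # reveals its prediction before you choose.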
Paisley (2010-08-15, 14:15):

Dan: "I'd like to enjoy this discussion, so please don't make it personal."

I am sorry if you were offended by my previous post. I was not attempting to make it personal; I was attempting to make a point. Evidently, I did not succeed in that respect.

Dan: "There need not be anything irrational about giving a probabilistic answer. Is that something we can agree on?"

Determining the actual probabilities of a given situation is a rational endeavor. That is the only thing I will concede.

Incidentally, I am drawing a distinction between "nonrational" and "irrational." (Rationality itself requires an element of nonrationality in order to function fully. Or, in other words, the analytical mind is lame without the services of the intuitive mind.)

Dan: "Or is there a sticking point about the meaning of 'rational'? If so, you won't like what I write. But assuming it's not, I'll proceed."

You are correct. We have a basic disagreement on the meaning of rationality and randomness. A random choice is only rational to the extent that a certain situation may dictate that some kind of choice be made (I trust that you can think of many examples where this would indeed be the case). However, a random choice is not rational to the extent that the actual choice that is made is based on randomness. (There is no rhyme or reason to randomness. Indeed, this fact is typically employed by determinists to discredit "free will.")

Dan: "OK, but what about the game? In this particular case, I can even use an algorithm that always returns the same value: Heads, say."

You could do that. But there is no rational reason why you should favor "heads" over "tails" except that making some kind of choice is the rational thing to do. Again, I am simply repeating myself. The counterargument remains the same: making some kind of choice is rational; the actual choice made is not.

Below is an actual line of code (written in Perl) which uses a random built-in function to "randomly" pick a number between 1 and 10. However, this "choice" is completely predetermined, because the random built-in function is seeded from the internal clock of the computer. It only gives the appearance of randomness. But the point I am seeking to make here is that the illusion of "free will" (or an element of nonrationality) is required in order to function rationally.

    $PickANumber = int(rand(10)) + 1;

Dan: "Although this is not the definition of your game, in reality I can always break the tie by finding some evidence that shows that, for instance, Heads is more likely in these situations. By defining the game the way you did, you forbade this line of reasoning."

You are right. This was not in the rules. Besides, it would actually be irrational to engage in such analysis, because you only have a matter of seconds to make the choice. By attempting to over-analyze the situation, you are guaranteed to walk away with nothing.

Dan: "I guess I'm still unsure what your point is. That there are situations when there are two equally good answers? Or perhaps that there are many such uninformative questions in daily life that we, thinking the question is important, answer truly irrationally? That's something I can believe."

The point is that we are forced to employ an element of nonrationality in order to function in life.
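Paisley's point about the Perl rand generalizes to any pseudo-random number generator: fix the seed and every "random" pick is predetermined. A short sketch in Python (chosen only for illustration; the same holds for Perl's rand and srand):

    import random

    # Two generators given the same seed make identical "random" picks.
    rng_a = random.Random(12345)
    rng_b = random.Random(12345)

    picks_a = [rng_a.randint(1, 10) for _ in range(5)]
    picks_b = [rng_b.randint(1, 10) for _ in range(5)]

    print(picks_a)
    print(picks_b)
    assert picks_a == picks_b  # the appearance of choice, fully determined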
Lyndon Page (2010-08-15, 13:35):

Following those above, I agree this does not seem like a paradox to me.

To clarify the setup: part of the computer's analysis is knowing that you are going to know about the computer before you make your decision, and that this was calculated into the setup of the boxes. In other words, the computer has an omniscient-like power over events going forward, including the chooser's process of reasoning.

If that is right, then the answer is one box and it is not a paradox, as others have said. "The box was sealed" before I start my thinking process about how many boxes to take, but that sealing happened with the omniscient-like power (as given in the problem) of the computer knowing everything going forward. Knowing that the computer would have foreseen my reasoning processes, once I get the information that the computer is omniscient-like, I just take the one box; the computer would have foreseen this, and there is a million dollars in the box.

It is simply the free will/determinism dilemma: within a fully deterministic world, no matter how much I flail about, no matter how much I try to step out of the deterministic chain by doing something that seems outside of "normal behavior," I am always just acting within it. I may do something that seems "random" or non-deterministic, but if my "reasoning processes" are part of the determined chain, then I never step outside of such a determined world, even if I run and jump off a cliff just to prove that I have (libertarian) "free will."

If somebody really thinks Julia's original setup is a paradox, or if I am misinterpreting it, I would like to hear more about why, since I am apparently missing something.

Max (2010-08-15, 05:43):

Julia,

The flaw is in the free-will illusion that even if there's money in the closed box, you can still decide to take both boxes. But taking both boxes goes hand-in-hand with the closed box being empty from the outset.

Now, if you were allowed to peek inside the closed box, THEN you could violate the prediction, which is why this set-up is unsound.

Richard Wein (2010-08-15, 05:35):

P.S. I'd like to clarify. I wrote: "I just say that people who take one box end up with the outcome they prefer, unless they prefer labelling their choice 'rational' to having $1M. Personally, I'd prefer $1M."

I didn't mean to imply that two-boxing _is_ rational. I don't think we can usefully say that either course of action is rational. This scenario undermines our notion of rationality.

Richard Wein (2010-08-15, 04:41):

The paradox arises because our intuitive concepts of rational justification and free will are deeply flawed. If we drop such misleading words as "should" and "choice" and simply ask "in which case will I gain the most money?", the answer is simple: the one-box case. I hope this knowledge would lead me to take the one box. The part of me that says I "should" take both boxes, because that's "rational," is just getting in the way of my following the most lucrative course of action. I hope I wouldn't listen to it.

There is an equivocation in the way we use the term "rational." It both refers to the outcomes of a particular way of thinking and is a normative judgement. The judgemental element leads us to feel a sense of discomfort at failing to take the action which rational thinking tells us is the rational one. If we set aside such feelings, we can see that what we call "rational" thinking is just a useful way of thinking that helps to secure the outcomes we prefer, but sometimes doesn't (as in the current case).

I don't know if that makes me a "one-boxer." I don't say that taking one box is "rational" or that you "should" do it. I just say that people who take one box end up with the outcome they prefer, unless they prefer labelling their choice "rational" to having $1M. Personally, I'd prefer $1M.
ianpollock (2010-08-15, 02:02):

Julia, this horse is so beaten it's nearly glue, but from your comments it seems we still sort of disagree...

Instrumental rationality of the what-should-I-do type, before it can even get off the ground, is going to need as input the agent's terminal values: the things the agent values more or less unconditionally, as ends rather than means. In general, those terminal values are NOT going to be just "me." Good biologists like Williams and Trivers have shown that, in fact, we terminally value kin (depending on degree of relatedness) and strangers (probably by happy evolutionary accident).

(Aside: there are three levels of analysis here, the evolutionary, the psychological, and the ethical, but I think you follow, although theists will definitely mix them up.)

The problem with your fisherman example is this: even if you knew with 100% certainty that no external negative consequences would ever get back to you for reneging, it's STILL not rational to renege UNLESS you really don't terminally value promise-keeping (and considerations of honour in general) at all. And that describes sociopaths, essentially. For a person who terminally values honour even slightly, breaking an explicit promise is a direct kick in the utility function.

I understand you borrow terminology from economists, and their modelling of humans as self-interest maximizers is probably a decent spherical-cow approximation to the truth for certain problems in economics, but (1) it's not actually true at all, and (2) using the word "rational" for such an attitude is confusing and gratuitous, because rationality should connote a much nobler endeavour than acting like a clever sociopath.
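ianpollock's "kick in the utility function" can be written down. A toy sketch for the fisherman example, where the honour weight w_honour is a hypothetical parameter of this sketch, not anything from the thread:

    def utility(dollars_kept, kept_promise, w_honour=5_000):
        # Terminal values: money AND honour; w_honour is a made-up weight.
        return dollars_kept + (w_honour if kept_promise else 0)

    renege = utility(dollars_kept=1_000, kept_promise=False)  # = 1_000
    keep = utility(dollars_kept=0, kept_promise=True)         # = 5_000

    # Reneging maximizes utility only for an agent with w_honour < 1_000,
    # i.e. one who barely values promise-keeping at all.
    print("keep the promise" if keep > renege else "renege")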
Adrian (2010-08-15, 00:22):

Julia,

"To resolve a paradox you have to explain what the flaw is in the OTHER argument... which in this case is the fact that, as you are standing in front of those two boxes, you know that your decision now about whether to take the open box cannot physically change what is in the closed one."

Given the premises that you set out, namely that the computer is somehow magically able to predict our actions, it would know whether we're going to pick one box or two, and so we should just pick the one box, the closed one. Frankly, I don't see the big paradox unless you start to argue that the magic computer isn't able to do what the preconditions say it can.

So yes, if the computer acts the way it does, the best choice is to take the closed box. If the computer acts differently, then our response should be different. And since we're risking $1 million to earn $1 thousand, we'd better be very, very sure that the computer is acting differently!

Anonymous (2010-08-14, 23:58):

@Paisley,

I'd like to enjoy this discussion, so please don't make it personal.

There need not be anything irrational about giving a probabilistic answer. Is that something we can agree on? Or is there a sticking point about the meaning of "rational"? If so, you won't like what I write. But assuming it's not, I'll proceed.

I think it's a good idea to think of the outcome of "rational" reasoning as a probability distribution over the answers. This is especially so when the problem has uncertainty built into it (which is everything in real life). What you then do with the distribution might include sampling from it. I think it's unhelpful to call this irrational, because that word, in many interpretations, is reserved for a non-random reason that is outside the scope of the problem, in contrast to a random reason that is inside the scope of the problem.

OK, but what about the game? In this particular case, I can even use an algorithm that always returns the same value: Heads, say. Although this is not the definition of your game, in reality I can always break the tie by finding some evidence that shows that, for instance, Heads is more likely in these situations. By defining the game the way you did, you forbade this line of reasoning.

Furthermore, the question "why Heads over Tails?" is uninformative. For any given game, I can always invent an uninformative question that makes the answer, according to you, irrational. For instance: if you pick a number < 0, you get the money. Except now I add a question: you have to explain to me why you chose the number x instead of x-1. In effect, the question is in the null-space of the answer, and hence uninformative and irrelevant.

I guess I'm still unsure what your point is. That there are situations where there are two equally good answers? Or perhaps that there are many such uninformative questions in daily life that we, thinking the question is important, answer truly irrationally? That's something I can believe.
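The sampling idea above amounts to: when the distribution over answers is flat, draw from it instead of hunting for a reason to prefer Heads over Tails. A minimal sketch:

    import random

    # Treat the "rational" output as a distribution over answers; with no
    # evidence favoring either side, sample the tie away instead of stalling.
    def answer(candidates, weights=None):
        return random.choices(candidates, weights=weights, k=1)[0]

    print(answer(["Heads", "Tails"]))  # either answer; the policy is the rational part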
Anonymous (2010-08-14, 22:17):

If mutual trust among strangers had not been a factor in the transaction, the fisherman would not have made the deal in the first place. This was a psychologically astute guy, right? But not astute enough to anticipate behavior that even in the modern world is close to being criminal?

Just Some Guy (2010-08-14, 21:04):

"To resolve a paradox you have to explain what the flaw is in the OTHER argument." - Julia

OK, let's get back on subject then. Here's a shot.

I propose that the both-boxes argument (BB) is irrational because it is unsound.

The difference between BB and the closed-box argument (CB) is that BB depends on the participation of free will in the act of choosing between both boxes and the closed box. BB assumes that the ultimate decision cannot be predicted because the choice is free, while CB assumes that all decisions are determined, and thus predictable.

The fact is that free will is not employed in making the decision. I change my mind from both boxes to the closed box because the closed box yields mo' money. This is a predictable change because, all things being equal, mo' money is better than less money; all things being equal, I am literally unable to choose otherwise.

Because BB depends on the premise that free will is involved in the choice, and, as I have argued, free will is not a factor, BB is unsound and there is no paradox.

(I'm assuming that free will and determinism are understood terms, but feel free to ask for explanations.)