By Julia Galef

I had heard rumors that Newcomb’s Paradox was fiendishly difficult, so when I read it I was surprised at how easy it seemed. Here’s the setup: You’re presented with two boxes, one open and one closed. In the open one, you can see a $1000 bill. But what’s in the closed one? Well, either nothing, or $1 million. And here are your choices: you may either take both boxes, or just the closed box.

But before you think “Gee, she wasn't kidding, that really is easy,” let me finish: these boxes were prepared by a computer program which, employing advanced predictive algorithms, is able to analyze all the nuances of your character and past behavior and predict your choice with near-perfect accuracy. And if the computer predicted that you would choose to take just the closed box, then it has put $1 million in it; if the computer predicted you would take both boxes, then it has put nothing in the closed box.

So, okay, a bit more complicated now, but still an obvious choice, right? I described the problem to my best friend and said I thought the question of whether to take one box or both boxes was pretty obvious. He agreed, “Yeah, this is a really easy problem!”

Turns out, however, that we were each thinking of opposite solutions as the “obviously” correct one (I was a two-boxer, he was a one-boxer). And it also turns out we’re not atypical. Robert Nozick, the philosopher who introduced this problem to the public, later remarked, “To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.”

Anyway, thence began my trajectory of trying to grok this vexing problem. It's been a wildly swinging one, which I'll trace out briefly before explaining why I think Newcomb's Paradox is more relevant to real life than your typical parlor game of a brain-teaser.

So as I mentioned, first I was a two-boxer, for the simple reason that the closed box is sealed now and its contents can't be changed. Regardless of what the computer predicted I'd do, it's a done deal, and either way I'm better off taking both boxes. Those silly one-boxers must be succumbing to some kind of magical thinking, I figured, imagining that their decision now can affect the outcome of a decision that happened in the past.

But then of course, I had to acknowledge that nearly all the people who followed the two-boxing strategy would end up worse off than the one-boxers, because the computer is stipulated to be a near-perfect predictor. The expected value of taking one box is far greater than the expected value of taking both boxes. And so the problem seems to throw up a contradiction between two equally intuitive definitions of rational decision-making: (1) take the action with the greater expected value outcome, i.e., one-box; versus (2) take the action which, conditional on the current state of the world, guarantees you a better outcome than any other action, i.e., two-box.

So then I started leaning toward the idea that this contradiction must be a sign that there was some logical impossibility in the setup of the problem. But if there is, I can't figure out what. It certainly seems possible in principle, even if not yet in practice, for an algorithm to predict someone's future behavior with high accuracy, given enough data from the past (and given that we live in a roughly deterministic universe).

Finally, I came to the following conclusion: before the box is sealed, the most rational approach is (1), and you should intend to one-box. After the box is sealed your best approach is (2) and you should be a two-boxer. Unfortunately, because the computer is such a good predictor, you can't intend to be a one-boxer and then switch to two-boxing, or the computer will have anticipated that already. So your only hope is to find some way to pre-commit to one-boxing before the machine seals the box, to execute some kind of mental jujitsu move on yourself so that your rational instincts shut off once that box is sealed. And indeed, according to my friends who study economics and decision theory, this is a commonly accepted answer, though there is no really solid consensus on the problem yet.

Now here's the real-life analogy I promised you (adapted from Gary Drescher's thought-provoking Good and Real): imagine you're stranded on a desert island, dying of hunger and thirst. A man in a rowboat happens to paddle by, and offers to transport you back to shore, if you promise to give him $1000 once you get there. But take heed: this man is extremely psychologically astute, and if you lie to him, he'll almost certainly be able to read it in your face.

So you see where I'm going with this: you'll be far better off if you can promise him the money, and sincerely mean it, because that way you get to live. But if you're rational, you can't make that promise sincerely — because you know that once he takes you to shore, your most rational move at that stage will be to say, “Sorry, sucka!” and head off with both your life and your money. If only you could somehow pre-commit now to being irrational later!

Okay, so maybe in your life you don't often find yourself marooned on a desert island facing a psychologically shrewd fisherman. But somewhat less stylized versions of this situation do occur frequently in the real world, wherein people must decide whether to help a stranger and trust that he will, for no rational reason, repay them.

Luckily, in real life, we do have a built-in mechanism that allows us – even forces us – to pre-commit to irrational decision-making. In fact, we have at least a couple such mechanisms: guilt and gratitude. Thanks to the way evolution and society seem to have wired our brains, we know we'll feel grateful to people who help us and want to reward them later, even if we could get off scot-free without doing so; or at the least we'll suffer from feelings of guilt if we break our promise and screw them over after they've helped us. And as long as we know that – and the strangers know we know that – they're willing to help us, and as a result we end up far better off.

Thus dies Newcomb's Paradox, at least the real-world version of it: slain by rational irrationality.

Here is my answer:

You should flip a coin and select one box for heads and two boxes for tails. Remember that you are either a rational one-boxer or a rational two-boxer, and that the computer will predict appropriately.

# Possible outcomes

Heads for a rational one-boxer: get $1,000.

Tails for a rational one-boxer: get $1,001,000.

Heads for a rational two-boxer: get $1,000.

Tails for a rational two-boxer: get $1,000.

The most rational choice is to be a one-boxer who flips a coin. Even if the computer knows you are going to do this, the outcome will be the same, because the coin flip is an even 50/50 chance and so sits outside any rational deliberation. Having read this, you should now be a one-boxer. Here's your coin.

On the Desert Island thought experiment:

I would dispute that fleeing once you reach land is the rational choice. If the result of fleeing is that you will die, how is losing your life a better payoff than losing $1,000?

There might be other examples that prove your point. This one doesn't grok for me though.

I think this is plainly a poorly posed question rather than much of a paradox, and we can see why by considering two extreme interpretations of the situation.

First, if the computer predicts the future exactly, it fills the boxes accordingly. So you're either getting $1000 or $1 million - you can't get both (right?). Therefore your strategy is simply to choose the closed box.

Second, if the computer is imperfect, getting both quantities of money is now possible. There is some non-zero probability that the closed box contains $1 million if you choose both. So, whether or not you should choose both depends on this (unknown) probability, which of course complicates optimal decision making.

So to answer the paradox, I suppose one could easily plot the expected decision as a function of this unknown probability?

I figure there are two situations: where the computer can be trusted to be a perfect predictor and where it cannot.

If it can be, then taking the closed box will yield $1mill every time since it knew that's what you would do and since that's what you did, it knew it and gave you the $1mill. Silly maybe, but that's the scenario.

So let's imagine the computer is using a flawed heuristic for predicting our actions.

If the computer's probability of accurately guessing our action is Pa, then the expected value of picking just the closed box is Pa * 1,000,000. The expected value of picking both boxes would then be 1,000 + ((1-Pa)*1,000,000). So, if Ec is the expected value of picking the closed box and Eb is the expected value of picking both, then:

Ec = 1,000,000 * Pa

Eb = 1,000 + ((1-Pa)*1,000,000)

If the computer was randomly assigning the $1mil, then

Ec = 500,000

Eb = 501,000

So the expected value would be slightly in favour of picking both boxes. But if instead we imagine that the computer can guess our outcome with 80% success, then:

Ec = 800,000

Eb = 201,000

If it's "near perfect", let's pick Pa = 99.9%:

Ec = 999,000

Eb = 2,000

If the computer is slightly better than even at making predictions, the winning strategy is to pick just the closed box.
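The expected-value argument above is easy to check numerically. Here's a minimal Python sketch (the function name and the break-even calculation are my additions, not from the comment):

```python
def expected_values(pa, million=1_000_000, thousand=1_000):
    """Expected payoffs given prediction accuracy Pa, as defined above."""
    ec = pa * million                   # one-box: paid only when predicted correctly
    eb = thousand + (1 - pa) * million  # two-box: open box, plus the million on a miss
    return ec, eb

for pa in (0.5, 0.8, 0.999):
    ec, eb = expected_values(pa)
    print(f"Pa = {pa}: Ec = {ec:,.0f}, Eb = {eb:,.0f}")

# One-boxing wins whenever Ec > Eb, i.e. Pa > 1,001,000 / 2,000,000 = 0.5005 --
# "slightly better than even", as the comment says.
```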

Damn, I thought I was going to be clever but James beat me to the punch. Commit to flipping a coin, break the computer. 'Does not compute!'... Remove yourself from the equation, then the computer cannot determine the outcome.

I think his logic outcomes are broken though...

Heads for a one boxer, get $1m

Tails for a one boxer, get ~$1m

Heads for a two-boxer, get $0

Tails for a two-boxer, get $1000

The conclusion is correct though, that flipping a coin increases one's chances of getting at least some money, assuming a fairly poor predictor.

But, if the computer knows you will flip a coin, it may also 'flip a coin' to determine whether it should place the $1m. So, even if you are a one-boxer, it may still put nothing in the closed box; and it may put the $1m in if you are a two-boxer. It really depends on whether the computer predicts only from your behavior, or also from external factors, like a coin toss.

Of course, this is all moot. The only way you won't get $1m is if you don't pick one box. It's a seemingly perfect predictor. Regardless of your "locking in" before it makes a prediction, it will know what you are going to lock in to. All of this assumes a perfectly deterministic universe. Otherwise, the computer is impossible and will make the right decision on par with chance, and your best bet is two boxes with a 50/50 shot of getting a million. To me, the "locking in" is a red herring.

Reading one's face, on the other hand, is susceptible to deception, so I don't think it's really a good analogy.

None of the answers are silly, it is the question that is 'silly'. There is not enough information about the computer to give a rational answer.

Or is there a more precise formulation of the question somewhere?

I am sure some will pounce on this, but Julia, I think you tip the hand of your world view by associating irrationality with paying someone an agreed price for doing something for you, and by how you frame what counts as rational.

I think the solution of this paradox calls for quantum mechanics. If the computer is really clever, it would have predicted that one would try to cheat by pretending to be a one-boxer and then switching to two boxes at the time of choosing. And it would have devised an unbreakable system to beat the cheaters.

ReplyDeleteHere is how it works: the content of the closed box is in a superposition of two states. In one state it contains nothing and corresponds to the two box choice, in the other state it contains one million dollar, and it corresponds to the one box choice.

The wavefunction of the closed box collapses at the moment of choosing. If you choose one box it collapses to one million dollars; if you choose two boxes it collapses to nothing.

So, the content of the closed box does not depend on your intentions present or past, but only on your practical actions.

Don't forget another mechanism for overcoming the real world problem: the contract. Some mechanisms, such as laws, are a matter of culture rather than genetic evolution.

James said:

"I would dispute that fleeing once you reach land is the rational choice. If the result of fleeing is that you will die, how is losing your life a better payoff than losing $1,000?"

No, you don't die if you flee -- once you're at the point where you're trying to decide whether to flee, you've already been saved. Once you're on dry land, your choices are between fleeing (and saving your money) or paying the guy.

The key here is that the rational strategy (which in this context, simply means "self-interest maximizing") looks different from the two different points in time: before the fisherman's decided to save you, versus after.

In this hypothetical island thing - do I have $1000, or is this going to be one of those O. Henry things where I have to steal $1000 and then it turns out the fisherman is a federal agent and I end up being sent to an island penal colony - on the same friggin island I just escaped from?

ReplyDelete@Brian

Yeah, something went terribly wrong there - and now I'm confused.

Ok, retracing... the first strange thing that I think I missed on the first reading is that the chooser is not told about the computer until AFTER the boxes have been filled. Because of this I must assume that the computer would have filled the boxes according to how I would have selected before I was given the information about the computer's selection process. But then it occurs to me that if the computer is a perfect predictor, it would predict how I might react to the knowledge of the computer box filling algorithm. From there I enter a loop of changing what my initial choice would be, realizing that the computer would predict that choice, and having to change it again. This double guessing goes on infinitely.

Our idea (Brian's and mine) was to halt this infinite looping with a choice made unpredictable by the ambiguity of probability; in this case, flipping a coin.

The possible coin results depend on when we abort the double-guessing loop. The loop has two phases: one that suggests the one-box option, and one that suggests the two-box option. If the chooser aborts the loop on a pass that suggests selecting the closed box, Brian's possibilities play out. If the chooser aborts the loop during a pass that suggests selecting both boxes, my possibilities result.

So, o.k., I think this is right. There is no single best choice. The best procedure is to abort the second-guessing loop at a point when choosing the one closed box is suggested, yielding Brian's possibilities. So, yeah, my logic was off. :P

However, as you mentioned, if the computer can pick randomly as well, and will emulate this behaviour, it's a different story.

I need to nap now.

As well as the emotional strategies you mention like guilt and gratitude, there are alternative "intellectual" ones too. The one relevant to Newcomb's dilemma is called maximizing conditional expected utility, rather than conventional expected utility. This form of cognition can be inferred from data on humans playing the public good game and other behavioral economics experiments. Vested interest alert: check out my paper "A Bayesian model of quasi-magical thinking can explain observed cooperation in the public good game" at linkinghub.elsevier.com/retrieve/pii/S0167268106000977, which explains more fully how Newcomb's dilemma relates to behavioral economics.

@Julia

Oh, o.k., I see what you mean. In this example, even though I would convince myself (I'm on the desert island btw) that I really really will pay when I hit land, the boatman is shrewd enough to know that once we really did hit land I would change my mind now that fleeing has become the new best choice.

... hence the possible linkup to "guilt and gratitude".

This kind of paradox would certainly motivate generalizing ethical attitudes. The logic being, "if I don't act in this general way, then the boatman's paradox (or some equivalent) cannot be resolved." In a strange way, it becomes a rational way of both justifying universal duties and describing their (evolutionary?) origins.

I don't understand what the coin flipping is meant to accomplish. It may 'trick' the computer but so what, your expected value will top out at 501,000 whereas the strategy of just picking the closed box could yield 900,000 or 1,000,000 depending on how accurate the computer is.

It sounds cleverish but seems decidedly suboptimal.

Let's not forget the orders of magnitude difference between getting the mill in the closed box and the $1k in the open one. The goal shouldn't be "how do I maximize my chances for getting both" but "how do I maximize my chances for getting the CLOSED box". If you get the $1mill, the $1,000 is insignificant.

Julia,

I will provide you with a better example to illustrate why we (including rational skeptics) employ an element of non-rationality in decision-making.

We will play a simple game. I will flip a coin (e.g. a quarter) into the air. Before the coin falls to the ground, you must call out either "heads" or "tails." If you make the correct call, then you will win $100,000 (I am assuming that this amount qualifies as enough incentive to play the game). However, if you do make the correct call, there is one criterion you must satisfy before collecting the money. You must provide me with a rational reason why you made the choice you made. (For the purposes of this exercise, you have every reason to believe that the odds are 50-50. IOW, you have no evidence of foul play.)

Here we go... I flip the coin. You call out "heads." The coin lands on the ground with heads facing up. I say: "Congratulations. You made the correct call. Why did you choose heads?"

And your rational reply is what?

I had the idea of choosing randomly too, but certainly not by flipping a coin! The predicting machine will know precisely the cascade of neuron firings involved in the flip, and so its outcome is just as predictable. Better to delegate your choice to something truly random such as a quantum measurement.

Unfortunately, even when you flip a coin, the decision to act (or not act) as previously decided will still amount to making a new decision that the computer could predict, since it presumably can have anticipated any number of random influences in the interim that could influence your choice of options.

ReplyDeleteThis paradox is really stupid... "Assume you have no choice. Then what is your choice ?"

If the prediction of the computer is perfect, there is no such thing as a strategy or a choice anyway, there is no question, no answer and no paradox.

Hm, my previous comment seems to have been deleted. Ah well.

Great post Julia!

On the "randomizing" idea: It is a standard caveat to Newcomb's problem that if Omega (the computer) predicts you'll flip a coin, it interprets that action as two-boxing and doesn't put in the million dollars.

I find it very irksome in an otherwise good post and discussion that things like

- one-boxing

- ethical behaviour

- promise-keeping

are being labeled as irrationality. Um... what?

Rationality does NOT mean "do what Spock would do" or "do whatever a calculating psychopath would do" or "do whatever maximizes your bank balance." If you're regretting being rational, you're not being rational! And yes, your feelings and preferences and ethics ARE allowed into your considerations.

I think a better real-life Newcomb's problem might be something like this.

Suppose you're a hostage negotiator, and some criminal (call him Bob) is holding a badly injured woman (Alice) hostage. You need to tend to Alice's injuries immediately, or she'll die, so you offer Bob an unimpeded getaway if he hands her over.

Problem is, once Bob has done the handover, he's out of bargaining chips. You have him on a silver platter. Seeing that counterfactual, Bob will refuse your offer and Alice will die.

But if, a la Newcomb, you can convince Bob that you're genuinely committed to let him go, then the deal goes through and Alice lives.

Likewise, the only way to beat the prisoner's dilemma is to make a believable commitment to be a cooperator. Contra standard game theory, defecting is not necessarily the rational choice.

The very important ability to make such commitments in a credible way is a perfect justification, incidentally, for consequentialists like me to follow prima facie deontological rules like promise-keeping and truth-telling.

@Tyro

Yeah, it's weird.

O.k., imagine you haven't found out about the computer yet and you select both boxes because that will return the most. Then you find out about the computer, and realize: "oh, I select one box instead then." The computer, being a perfect predictor, foresees this and puts the $1 million in the box. But wait! Now your best option is to pick both boxes and get $1,001,000. But the computer knows you're going to think this, so it would leave the closed box empty.

This double thinking will continue forever unless the loop can be broken. The coin flip breaks the loop.

If you pick the closed box without breaking the loop, you will find it empty. If you pick both boxes without the coin flip, you will get $1,000.

James,

"The computer, being a perfect predictor, foresees this and puts the $1 million in the box. But wait! Now your best option is to pick both boxes and get $1,001,000. But the computer knows you're going to think this, so it would leave the closed box empty."

You're risking $1,000,000 in order to earn an extra $1,000! Think about that for a sec.

The expectancy for the coin flip is $501,000 per iteration. That's not terrible but the strategy of just picking the closed box almost DOUBLES the return! I think you're optimizing a strategy which even theoretically underperforms by 50%.

Why shouldn't I just pick the closed box and accept that I'll earn "only" $1,000,000 rather than trying for an outside chance of earning $1,001,000 at the cost of a 50% chance of earning only $1,000?

Q:

"If the prediction of the computer is perfect, there is no such thing as a strategy or a choice anyway, there is no question, no answer and no paradox."

Actually there is a strategy. Julia is presenting a twist on the tit-for-tat strategy of the prisoner's dilemma. In the traditional problem, you repeat the trial over and over and your opponent learns how you act. Instead of having many trials with an opponent who responds, this distills the strategy into one trial with the use of a magical, mind-reading computer. The "betray" option of picking both boxes results in a "betrayal" by your opponent - nothing in the other box. The "co-operate" option results in co-operation by your opponent - $1m in the closed box. And a random trial results in tit-for-tat randomness by your opponent, a middle-of-the-road result.

There are valid strategies, I think.

Ask if the computer was given the information that you would be told about the computer. If yes, take the million dollar box. If no, take both boxes.

I think the question is really designed to test your comfort with the notion that a computer can predict your behavior, and to make you question the setup. Your choice will depend heavily on what you know and what the computer knows you know, and what you know the computer knows you know, and so on.

As far as the question about the fisherman, it is once again relevant whether I know how "psychologically astute" he is. I'm much more likely to decide to shaft him if I don't know. Even so, this problem involves me asking an additional question- is it worth $1,000 to me not to be an asshole?

Another way of looking at this is not as an irrationality/rationality challenge but as a test of whether you can cope with determinism.

At T the box is sealed and so its contents cannot be changed. At T+n you choose whether to take one or two boxes. Thinking indeterminately, the two-boxer reasons that he could choose either way. So whichever prediction the computer had made, the chooser could defeat it. But that's not the "set up"; the "set up" requires the computer to be able to predict at T what your actions will be at T+n. This requires your actions at T+n to be determined.

If you accept that your T+n choice is determined then, if it is rationally made, your choice is to take one box and pocket the million. The only way to lose by choosing one box (ignoring the minute possibility of the computer making an error) is if the T+n choice has a degree of freedom.

So we can conclude that the two-boxers cannot accept the determinism in the "set up".

@Paisley

I have two choices: take a wild guess or refuse to play the game. If I refuse to play the game I get nothing. If I take a wild guess I have an expected value of 0.5 x ($100,000 subject to explaining my rationale).

0.5 x ($100,000 subject to explaining my rationale) is more than 0, so I made a wild guess.

"Your choice will depend heavily on what you know and what the computer knows you know, and what you know the computer knows you know, and so on."

Do the numbers like I did above. Since the $1,000,000 dwarfs the $1,000, the computer only has to be marginally better than 50% for the optimal winning strategy to be taking just the closed box.

Don't overthink the details or you'll miss the bigger picture.

If the computer predicts perfectly, then it shall not let you take money from both boxes, so the obvious solution is one box.

There's no contradiction here, but there would be one if neither box was closed. In that case, you could see the computer's prediction and do the opposite, contradicting the condition that the computer predicts perfectly.

This shows why it's impossible to perfectly predict someone's future AND give away the prediction. Thus, you can't predict your own future either. This reasoning is similar to the proof that the Halting problem is undecidable.

I read this as one question with multiple interpretations -- each with pretty clear answers.

ReplyDeleteSo

what defines a paradox?What makes a particular paradox worth holding up above other confusing/ill-posed/confusing statements?Here's an easy example to prove that you can't predict someone's future and tell him about it. Just imagine a really stubborn person who always does the opposite of what he's told. He's very predictable, yet as soon as you tell him the prediction, he'll do the opposite.

ReplyDeleteThe Newcomb's Paradox set-up simply makes everyone want to do the opposite of what's predicted for them. Like, if you find out that you're predicted to take one box, then you'll want to take both boxes, since both contain money in that case.
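The stubborn-contrarian argument can be sketched in a couple of lines of Python (a toy illustration; the function name is mine). Any prediction that must be announced before the contrarian acts is guaranteed to be wrong:

```python
def contrarian(announced: str) -> str:
    """Always does the opposite of whatever was predicted out loud."""
    return "two-box" if announced == "one-box" else "one-box"

# No matter what the predictor announces, the contrarian falsifies it.
for prediction in ("one-box", "two-box"):
    assert contrarian(prediction) != prediction
```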

Yes, the premise is stupid. Although studies show that it is easier than one might expect to predict the psychology of one-box vs. two-box from other questions, there is no such thing as an infallible predictor, and stipulating one creates a silly premise.

On the other hand, the psychology involved is interesting and real and in my opinion should be studied using more realistic setups.

Amos Tversky pointed out that the one-shot prisoner's dilemma is actually rather similar. Cooperation is around 35%. If subjects are told that they are in a special group being told what the other guy already did, and he defected, almost nobody cooperates. If told he cooperated, it's about 15%. So we see some reciprocity, but more bizarrely, since the other guy must either cooperate or defect, the standard 35% is not between 0 and 15, where it should be. The reason is "quasi-magical thinking". Although subjects know that they cannot causally affect the outcome, they continue to act as if they could.

Note there is a genuine correlation: the other guy is a data point drawn from the same population sample as you are. If you don't know what behavior is "typical" for others, then there is genuine information there in your own choice. But not a causal influence whereby if you cooperate, the other person is more likely to do so.

The core issue here is treating correlation as causation. And the effect can be studied without any need for omniscient beings.

I recall reading somewhere that this problem is quite common in game theory. In a non-zero-sum game, deliberately restricting one's own choices can be beneficial. The example I remember was a hostage situation: the police can meet the hostage-taker's demands, or they can refuse. If they refuse, the hostage taker can kill his hostages and then get killed himself, or he can surrender and go to jail.

The police would prefer surrender, but would rather meet the demands than get everyone killed. The hostage-taker would prefer to have his demands met, but would prefer surrender to death.

It's advantageous for either side to somehow exclude their middle option. If the police cannot possibly meet the demands, then the hostage taker will surrender. If the hostage taker cannot possibly surrender, the police will meet his demands.

"You're risking $1,000,000 in order to earn an extra $1,000! Think about that for a sec." - Tyro

Well, that's only the case with my original numbers, which, as Brian pointed out, were wrong. The idea was right though.

The real numbers if you pop out of the loop at the closed box choice are...

heads (closed box): $1,000,000.

tails (both boxes): $1,001,000.

Which is $500 better than $1,000,000. Remember, if you pop out of the loop at the closed box choice, there will be $1 million in the closed box.

"The real numbers if you pop out of the loop at the closed box choice are... heads (closed box): $1,000,000. tails (both boxes): $1,001,000. Which is $500 better than $1,000,000. Remember, if you pop out of the loop at the closed box choice, there will be $1 million in the closed box." - James

Why are you figuring that there will always be $1m in the closed box? Wouldn't the computer then be conducting an internal coin flip, giving:

heads (closed box): $1m 50%, $0 50%

tails (both): $1,001,000 50%, $1,000 50%

expectancy is:

heads: $500,000

tails: $501,000

Having just read through the comments, I'm not sure that everyone sees quite how disturbing Newcomb's Paradox is. Hopefully I can help Julia freak you out. Consider the following logic:

Logic 1: Once I'm playing the game, the boxes are sealed and the predictor cannot change its decision. Either it put the $1 million in the sealed box or it didn't. Regardless of which choice it made, I'm better off taking both boxes because then I get the money from both, rather than just the money from the open box. Hence, once the game has begun I'm ALWAYS better off taking both boxes.

Logic 2: If I choose both boxes, then the predictor will have known I was going to do that, and therefore the closed box will be empty so I will get just $1000. If I choose just the closed box, then the predictor will have known I would do that as well, and hence I will get $1 million. Therefore, I'm ALWAYS better off taking just the closed box.

What's so weird about this is that it really seems both logics are valid, and yet Logic 1 says I should always take both boxes, and Logic 2 says that I should always take just the closed box. So which is right?

As Julia points out, once the game is started Logic 1 is right, since the predictor has no ability to change where the money is placed at that point. But BEFORE the game starts, Logic 2 is right, and you'll be better off overall if you somehow convince yourself not to apply Logic 1 in the future (this will maximize the money that you make, and if money maximization is your goal, is therefore the rational decision). Hence, in a sense, Logic 1 is the locally optimal behavior, and Logic 2 is the globally optimal one.

By the way... even if the predictor can predict what I am going to do with near 100% accuracy, that doesn't imply that I don't have any "choice" or "free will" regarding what I do. One way to think about this is to suppose that the predictor has the ability to time travel. Its prediction method could be as follows: Put money into both boxes and then (before the game starts) time travel into the future to see what decision I make during the game, and then time travel back and (again, before the game starts) remove the money from the closed box if I choose both boxes in the future. The predictor in this case is just figuring out my choice, not taking my choice away. Of course, time travel may not be possible, and the time travel idea may have other theoretical difficulties, but hopefully this illustrates the point that near perfect predictability does not eliminate the possibility of "choice". In a similar vein, just because a friend of mine can predict with near 100% accuracy that I will choose chocolate over vanilla, that doesn't imply that I'm not making a genuine choice.

Like I said on Facebook, I just do not get how this is actually a paradox rather than testing how risk-averse one is, depending on how many times you are allowed to play the game.

If you're seriously risk-averse and you're only allowed to play once, you'll be a two-boxer because then at least you'll get $1000 with a small chance at $1,001,000. You'd never be a one-boxer because of the chance you'll get nothing.

But as soon as you're allowed to play more than once, things change dramatically.

If the Predictor is right 99% of the time and you play 100 times, a two-boxer will (probably) get 99*$1000 + 1*$1,001,000 = $1,100,000 but a one-boxer will get 99*$1,000,000 + 1*$0 = $99,000,000. That's 90 times what the two-boxer will earn! Even the very risk-averse will see the benefits of converting to one-boxer-ism.
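The 100-play comparison is easy to simulate. A minimal Python sketch, using the 99% accuracy and dollar amounts from the comment above (the strategy names and helper functions are just illustrative):

```python
import random

BOTH, ONE = "both", "one"

def play(strategy, rng, accuracy=0.99):
    """One round: the predictor guesses the player's strategy with the
    given accuracy, fills the closed box accordingly, the player chooses."""
    predicted = strategy if rng.random() < accuracy else (ONE if strategy == BOTH else BOTH)
    closed_box = 1_000_000 if predicted == ONE else 0
    return closed_box if strategy == ONE else closed_box + 1_000

def total_winnings(strategy, plays=100, seed=0):
    rng = random.Random(seed)
    return sum(play(strategy, rng) for _ in range(plays))

# A committed one-boxer ends up far ahead of a committed two-boxer
# over 100 plays against a 99%-accurate predictor.
print(total_winnings(ONE))   # roughly $99,000,000 in expectation
print(total_winnings(BOTH))  # roughly $1,100,000 in expectation
```

Dropping `accuracy` to 0.5 reverses the ranking, which matches the coin-flip analysis earlier in the thread.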

The only way two-boxing becomes the superior strategy, even over multiple plays, is if the Predictor really isn't as accurate as postulated and is instead only as good as chance.

To me it seems to be nothing more than a simple odds game, and which strategy you choose depends on how much risk you're willing to take. So is there something I am missing?

Of course, the *real* question in this is whether it's Canadian $ or US $. :-)

Look, this is clearly an unusual computer that can understand your purposes as well as your intentions. It knows you flipped the coin to try to fool it, and if the results don't come up as hoped, you'll veto the advisory, and choose, predictably, to override the flip. Because your purpose was to make the choice the computer knew you wanted from the get-go.

Cool problem. Reminds me of that Mystery Men scene:

Captain Amazing confronts Casanova Frankenstein

Humorously said, but I'm not sure this is a true paradox. I would only ever choose 1 box if I believed the computer's prediction would be accurate. (Specifically, if I believe (the computer believed (I would believe (the computer's prediction would be accurate)))). In all other cases, the sensible choice is 2 boxes.

It's tempting to add, "If I believe the computer predicted 1 box, I should choose 2" but this contradicts the above premise of my belief in the computer's accuracy.

Does that grok? Because I think I might've confused myself. I'ma huddle in fetal position now, and watch Mystery Men.

Level 1 of understanding: Somebody is offering you $1000, all you need to do is take both that box and another one which may have more money in it. Someone says that a computer makes it possible to win more by doing this and that, but it reads like fine print, not too sure just what it is saying - so take the $1000 and run.

Level 2 of understanding: We hear tell that a computer says if you only take the closed box, you win $1,000,000. Take only the closed box, and quit your job.

Level 3 of understanding: A computer says it knows your decision-making profile, others say it is near-perfect. So although you've never played this game before, others have. Trust the computer even while calculating the odds, allowing for a .0000001% probability that the computer makes an error. Based on expected values others have commented on, it looks like you still win $1mm, so take only the closed box, and have a letter of resignation ready, but do not e-mail it until you have opened the box.

Level 4 of understanding: What?? Are players of prior games, upon whose actions the computer's projected success rate is predicated, available for interviews? Did any of them just flip coins, as per James? Has the player even been told about the computer? If the computer's results are near perfect, is it really traveling in time before deciding what goes in the closed box? Still trust the story? Too little clarity; take both boxes.

ClockBackward,

If the predictor is 100% accurate, then by definition you can't outsmart it and get money from both boxes, so don't even try.

There's an illusion of reverse causation: as if your "free-willed" decision to open the box causes the predictor to empty the box in the past. But the reality is that the predictor knew your decision before you ever made it.

People get causation backwards in real life too. Like, they'll hear that smart people are more likely to wear glasses, and think that if they start wearing glasses, it'll indicate that they're smart.

It seems to me pretty obvious that one MUST flip a coin (or otherwise make a random choice). Having 50% chance is a lot better than the alternative. Any other option, if the computer really is that good a predictor of one's decisions, will entail that one fails.

There is one and only one rational choice: pick randomly.

Julia,

Great post with a lot of play-at-home value. BUT

I'm with @IanPollock. Calling sociopathic behavior "rational" is problematic. "Self interest maximizing" at first blush seems more accurate, but it's even more self defeating, because once you admit "self interest" you must allow that there are many values at play other than money. In fact, the more we imagine a human making these decisions, the more we can admit that "self interest" includes keeping a promise. Why? There are tons of evolutionary, psychological and aesthetic reasons. Let's be clear that those things are also "rational."

So you could honestly commit to making the rational-rational choice of paying the fisherman. You could be very knowledgeable about yourself and your species and say, "I absolutely promise, but the longer you wait for me to pay you, the less likely it is I will. It would help if a lot of people know about our bargain and you and I were of the same tribe or family, and while we're at it, why don't I sign a contract?" If you were a marooned termite you might say "no, I will not pay nor will I get in your boat, because my death is of greater value to the hive and my genome." This would also be a "rational" response.

Any rational world view has to start with the scientific understanding of who we are and how we got here. Reason, Math, Game Theory-- these things did not descend on the world from on high, but come from it, are made out of it. The interesting question isn't "why aren't we more rational?" It should be "how did we come to be rational at all?"

And don't get me started on determinism voodoo that is sprinkled on these thought experiments. Unlike the Monty Hall Problem, these thought experiments don't reveal any counter intuitive mathematical fact, just religious conundrums. They are tantamount to "can god make a rock that he cannot lift?" As such, they are fun, but inherently unproductive.

Tony Lloyd: "I have two choices: take a wild guess or refuse to play the game. If I refuse to play the game I get nothing."

Okay, your response helped me to identify a loophole in my game argument - namely, the stipulation or criterion you must satisfy before collecting the money. Let me modify that stipulation as follows: "You must provide me with a rational reason why you made the choice you made OR acknowledge that even 'rational' skeptics sometimes make nonrational decisions."

Having said that, "refusing to play the game" is not rational. This is easy money. The rational thing to do is to play the game and make easy money! Also, "wild guesses" are nonrational. (Rational people do not make guesses based on "wildness.")

@Paisley

Since the coin is 50-50, it is perfectly rational to choose a sample from {heads, tails} with equal probability.

Just because the probabilities are 50-50 (instead of, say, 60-40) doesn't make the decision irrational, just a special case where two options have equal probability.

How reliable is this computer behavior predictor? Is the computer manufacturer so confident in his computer that he is willing to award $1,001,000 to someone who can prove it to be unreliable?

If so, then call me a 1-boxer.

Regarding the more interesting fisherman rescue, how is it "rational" to break a contract with a fisherman who has rendered me a service worth much more than the asking price? If he is as good at reading sincerity as is claimed, my sincerity will be off the charts.

Finally, since almost everything a philosopher needs to know can be found in Star Trek, I am reminded of the episode "The Galileo Seven," in which the avowedly unemotional Spock achieves the rescue of his crew through a final act of desperation. Spock is accused of committing an emotional act, thus breaking his vow to act purely through logic and not emotion. He defends himself by explaining that the situation logically required an act of desperation.

To some extent the "paradox" is really just a way of acknowledging how trapped we can be by the imprecise language we use. "Rational" does not mean believing someone who claims to have a mindreading computer without substantial evidence, nor is incorporating elements of chance in decision-making "irrational", such as the aforementioned coin-tossing, which is what came immediately to my mind.

It's interesting to see the suggestion of flipping a coin here, because the recommendations all rely on minimizing the accuracy of the predictor. In other examples of this paradox, the predictor is suggested to be a deity - the point being that we presuppose the predictor will make an accurate decision.

The result here is that effect precedes cause - the contents of the box are conditional upon the decision that we make. In this sense, we must simply come to terms with determinism, and enjoy our $1000 or $1,000,000.

The only way to exercise some control over the situation is indeed to thwart the accuracy of the predictor. If we assume a deity or equally omniscient computer, then the question becomes: can we produce a decision that is truly random? Presumably with enough computational power, we could accurately predict the result of a coin flip, given all data involved. If we have a means of making a random decision that cannot be predicted by the predicting entity (which has already been defined to be omniscient), then we have some control over the outcome.

Of course, given the disparity between the amounts, we really all should just decide right now to stick with the one-box policy, in case we ever find ourselves in this situation ;-)

Reading over all the comments, I have two main responses:

1) In this context, by "rational" I simply meant "self-interest maximizing". It's a very specific definition that I should have specified up front (blame it on all the years I spent with economists; I'm just used to that being the implied definition.) If you don't like that definition, that's fine; you can just mentally replace "rational" in my post with "self-interest maximizing" if you prefer.

So to all of you who argued that breaking a promise isn't rational because it would make you feel bad: you're just taking for granted the phenomenon which I was trying to explain; the bad feeling exists precisely because without it, rational decisionmaking would lead to very suboptimal outcomes. At least, that's what I was suggesting.

2) A lot of you have said the paradox is easy because one-boxing leads to better outcomes, by stipulation of the problem, so obviously that's the right answer. But you can't resolve a paradox just by arguing why one side seems logical. Of course it seems logical; the whole reason there IS a paradox is that one seemingly logical argument contradicts another seemingly logical argument. To resolve a paradox you have to explain what the flaw is in the OTHER argument... which in this case is the fact that, as you are standing in front of those two boxes, you know that your decision now about whether to take the open box cannot physically change what is in the closed one.

Dan: "Since the coin is 50-50, it is perfectly rational to choose a sample from {heads, tails} with equal probability. Just because the probabilities are 50-50 (instead of, say, 60-40) doesn't make the decision irrational, just a special case where two options have equal probability."

The decision to make a decision is a rational decision. But that is not what I am asking you. What I am asking you is to provide me with the rational reason why you chose "heads" instead of "tails." You have failed to meet that requirement; therefore, you do not collect the prize money. Furthermore, you have clearly demonstrated that you are irrational. Why? Because you had the opportunity to win easy money by simply making the rational decision to acknowledge that "even 'rational' skeptics sometimes make nonrational decisions." Instead, you decided to succumb to an irrational emotion (i.e. pride).

Julia, breaking the promise to the man in the rowboat is irrational for more reasons than making you feel bad; it subjects you to serious retaliation, and tends to ruin your reputation for honesty (assuming you had one) and, worse, your self-trust will suffer. Hardly self-interest maximizing.

Artie, the choice of the fisherman was meant to represent a situation in which you're dealing with a stranger, someone not in your social circle, who doesn't know you and whom you're very unlikely to ever encounter again. You are therefore very unlikely to suffer retaliation or reputation effects.

ReplyDeleteSuch situations would indeed have been unrealistic in the past, when people's social circles were small and pretty much fixed, but it's not that unrealistic in the modern world.

"To resolve a paradox you have to explain what the flaw is in the OTHER argument." - Julia

O.k., let's get back on subject then. Here's a shot.

I propose that the Both Boxes argument (BB) is irrational because it is unsound.

The difference between BB and the Closed Box argument (CB) is that BB depends on the participation of free will in the act of choosing between both boxes and the closed box. BB assumes that the ultimate decision cannot be predicted because the choice is free, while CB assumes that all decisions are determined, thus predictable.

The fact is that free will is not employed in making the decision. I change my mind from both boxes to the closed box because the closed box yields mo' money. This is a predictable change because, all things being equal, mo' money is better than less money - all things being equal, I am literally unable to choose otherwise.

Because BB depends on the premise that free will is involved in the choice, and as I have argued, free will is not a factor, then BB is unsound and there is no paradox.

(I'm assuming that free will and determinism are understood terms, but feel free to ask for explanations)

If mutual trust among strangers had not been a factor in the transaction, the fisherman would not have made the deal in the first place. This was a psychologically astute guy, right? But not astute enough to anticipate behavior that even in the modern world is close to being criminal?

ReplyDelete@Paisley,

I'd like to enjoy this discussion, so please don't make it personal.

There need not be anything irrational about giving a probabilistic answer. Is that something we can agree on? Or is there a sticking point about the meaning of "rational"? If so, you won't like what I write. But assuming it's not, I'll proceed.

I think it's a good idea to think of the outcome of "rational" reasoning as a probability distribution over the answers. This is especially so when the problem has uncertainty built into it (which is everything in real life). What you then do with the distribution might include sampling from it. I think that it's unhelpful to call this irrational, because that word, in many interpretations, is reserved for a non-random reason that is outside the scope of the problem. This in contrast to a random reason that is inside the scope of the problem.

OK, but what about the game? In this particular case, I can even use an algorithm that always returns the same value: Heads, say. Although this is not the definition of your game, in reality I can always break the tie by finding some evidence that shows that, for instance, Heads is more likely in these situations. By defining the game the way you did, you forbade this line of reasoning.

Furthermore, the question "why Heads over Tails" is uninformative. For any given game, I can always invent an uninformative question that makes the answer, according to you, irrational.

For instance: If you pick a number <0, you get the money. Except now I add a question: you have to explain to me why you chose the number x instead of x-1. In effect, the question is in the null-space of the answer, and hence uninformative and irrelevant.

I guess I'm still unsure what your point is. That there are situations when there are two equally good answers? Or perhaps that there are many such uninformative questions in daily life that we, thinking the question is important, answer truly irrationally? That's something I can believe.

Julia,

"To resolve a paradox you have to explain what the flaw is in the OTHER argument... which in this case is the fact that, as you are standing in front of those two boxes, you know that your decision now about whether to take the open box cannot physically change what is in the closed one."

Given the premises that you set out, namely that the computer is somehow magically able to predict our actions, then it would know whether we're going to pick one box or two, and so we should just pick the one box, the closed one. Frankly I don't see the big paradox unless you start to argue that the magic computer isn't able to do what the preconditions say it is.

So yes, if the computer acts the way it does, the best choice is to take the closed box. If the computer acts differently then our response should be different.

And since we're risking $1 million to earn $1 thousand, we had better be very, very sure that the computer is acting differently!

Julia, this horse is so beaten it's nearly glue, but from your comments it seems we still sort of disagree...

Instrumental rationality of the what-should-I-do type, before it can even get off the ground, is going to need as input the agent's terminal values: the things the agent values more or less unconditionally as ends rather than means.

In general, those terminal values are NOT going to be just "me." Good biologists like Williams and Trivers have shown that in fact, we terminally value kin (depending on degree of relatedness) and strangers (probably by happy evolutionary accident).

(Aside: there are three levels of analysis here - the evolutionary, the psychological, and the ethical - but I think you follow, although theists will definitely mix them up).

The problem with your fisherman example is this: even if you knew with 100% certainty that no external negative consequences would ever get back to you for reneging, it's STILL not rational to renege UNLESS you really don't terminally value promise-keeping (and considerations of honour in general) at all. And that describes sociopaths, essentially.

For a person who terminally values honour even slightly, breaking an explicit promise is a *direct kick in the utility function.*

I understand you borrow terminology from economists, and their modelling of humans as self-interest maximizers is probably a decent spherical-cow approximation to the truth for certain problems in economics, but (1) it's not actually true at all; (2) using the word "rational" for such an attitude is confusing & gratuitous, because rationality should connote a much nobler endeavour than acting like a clever sociopath.

The paradox arises because our intuitive concepts of rational justification and free will are deeply flawed. If we drop such misleading words as "should" and "choice", and simply ask "in which case will I gain the most money", the answer is simple: the one-box case. I hope this knowledge would lead me to take the one box. The part of me that says I "should" take both boxes, because that's "rational" is just getting in the way of me following the most lucrative course of action. I hope I wouldn't listen to it.

There is an equivocation in the way we use the term "rational". It both refers to the outcomes of a particular way of thinking, and is a normative judgement. The judgemental element has led us to feel a sense of discomfort at failing to take the action which rational thinking tells us is the rational one. If we set aside such feelings, we can see that what we call "rational" thinking is just a useful way of thinking that helps to secure the outcomes we prefer, but sometimes doesn't (as in the current case).

I don't know if that makes me a "one-boxer". I don't say that taking one box is "rational" or that you "should" do it. I just say that people who take one box end up with the outcome they prefer, unless they prefer labelling their choice "rational" to having $1M. Personally, I'd prefer $1M.

P.S. I'd like to clarify. I wrote:

"I just say that people who take one box end up with the outcome they prefer, unless they prefer labelling their choice 'rational' to having $1M. Personally, I'd prefer $1M."

I didn't mean to imply that two-boxing _is_ rational. I don't think we can usefully say that either course of action is rational. This scenario undermines our notion of rationality.

Julia,

The flaw is in the free-will illusion that even if there's money in the closed box, you can still decide to take both boxes. But taking both boxes goes hand-in-hand with the closed box being empty from the outset.

Now, if you were allowed to peek inside the closed box, THEN you could violate your prediction, which is why this set-up is unsound.

Following those above, I agree this does not seem like a paradox to me.

To clarify the setup: part of the computer's analysis is knowing that you are going to know about the computer before you make your decision, and that this was factored into the setup of the boxes. In other words, the computer has an omniscient-like power over events going forward, including the chooser's process of reasoning.

If that is right, then the answer is one box and it is not a paradox, as others said. "The box was sealed" before I start my thinking processes about how many boxes to take, but that sealing happened with the omniscient-like power (as given in the problem) of the computer knowing everything going forward. Knowing that the computer would have foreseen my reasoning processes, once I get the information that the computer is omniscient-like, I just take the one box, the computer would have foreseen this, and there is a million dollars in the box.

It is simply the free will/determinism dilemma. Within a fully deterministic world, no matter how much I flail about, no matter how much I try to step out of the deterministic chain by doing something that seems outside of "normal behavior," I am always just acting within it. I may do something that seems "random" or non-deterministic, but if my "reasoning processes" are part of the determined chain, then I never step outside of such a determined world, even if I run and jump off of a cliff just to prove that I have (libertarian) "free will."

If somebody really thinks Julia's original setup is a paradox, or if I am misinterpreting it, I would like to hear more about why, since I am apparently missing something.

Dan: "I'd like to enjoy this discussion, so please don't make it personal."

I am sorry if you were offended by my previous post. I was not attempting to make it personal; I was attempting to make a point. Evidently, I did not succeed in that respect.

Dan: "There need not be anything irrational about giving a probabilistic answer. Is that something we can agree on?"

Determining the actual probabilities of a given situation is a rational endeavor. That is the only thing I will concede.

Incidentally, I am drawing a distinction between "nonrational" and "irrational." (Rationality itself requires an element of nonrationality in order to function fully. Or, in other words, the analytical mind is lame without the services of the intuitive mind.)

Dan: "Or is there a sticking point about the meaning of 'rational'? If so, you won't like what I write. But assuming it's not, I'll proceed."

You are correct. We have a basic disagreement on the meaning of rationality and randomness. A random choice is only rational to the extent that a certain situation may dictate that some kind of choice be made (I trust that you can think of many examples where this would indeed be the case). However, a random choice is not rational to the extent that the actual choice that is made is based on randomness. (There is no rhyme or reason for randomness. Indeed, this fact is typically employed by determinists to discredit "free will.")

Dan: "OK, but what about the game? In this particular case, I can even use an algorithm that always returns the same value: Heads, say."

You could do that. But there is no rational reason why you should favor "heads" over "tails" except that making some kind of choice is the rational thing to do. Again, I am simply repeating myself. The counterargument remains the same. Making some kind of choice is rational; the actual choice made is not.

Below is an actual line of code in a computer program (written in Perl) which uses a random built-in function to "randomly" pick a number between "1" and "10." However, this "choice" is completely predetermined because the random built-in function is based on the internal clock of the computer. It only gives the appearance of randomness. But the point I am seeking to make here is that the illusion of "free will" (or an element of nonrationality) is required in order to function rationally.

$PickANumber = int(rand(10)) + 1;

Dan: "Although this is not the definition of your game, in reality I can always break the tie by finding some evidence that shows that, for instance, Heads is more likely in these situations. By defining the game the way you did, you forbade this line of reasoning."

You are right. This was not in the rules. Besides, it would actually be irrational to engage in such analysis because you only have a matter of seconds to make the choice. By attempting to over-analyze the situation, you are guaranteed to walk away with nothing.
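The commenter's point about deterministic "randomness" can be seen in any seeded pseudorandom generator: fix the seed, and the "random" choice is completely reproducible. A minimal Python analogue of the Perl one-liner (here the `seed` argument is an explicit stand-in for the clock reading the commenter mentions):

```python
import random

def pick_a_number(seed):
    """Mimic the Perl one-liner: "randomly" pick an integer from 1 to 10,
    with the seed (standing in for the clock reading) made explicit."""
    rng = random.Random(seed)
    return rng.randint(1, 10)

# Same seed -> same "random" number, every time: the choice was
# predetermined the moment the seed was fixed.
assert pick_a_number(42) == pick_a_number(42)

# A "predictor" that knows the seed reproduces the choice exactly.
prediction = random.Random(42).randint(1, 10)
assert prediction == pick_a_number(42)
```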

Dan: "I guess I'm still unsure what your point is. That there are situations when there are two equally good answers? Or perhaps that there are many such uninformative questions in daily life that we, thinking the question is important, answer truly irrationally? That's something I can believe."

The point is that we are forced to employ an element of nonrationality in order to function in life.

ON DETERMINISM (which I don't think is the point of Newcomb's Paradox, but I think is its flaw. It's supposed to be about rationality, but it's really about our religious beliefs in determinism)

Julia, both you and Massimo seem much more comfortable with Determinism than I think a rational skeptic ought to be. If not identical arguments, then at least very similar ones can be brought to bear on Intelligent Design and the Anthropic Principle as can be brought against Determinism. Why do you employ the former but not the latter?

William Egginton's "code of codes" argument is compelling:

"The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible."

Now, allow me to stoop to a thought experiment of my own.

ONEDAYMORE'S PARADOX

There is a computer that can know what Galen Strawson calls "the way you are." It knows exactly what choice you will make in whatever box-opening game you want to contrive. The computer then tells you to do the opposite of the choice you would have made. It's so good at understanding you that it knows exactly how to incentivize you to make the non-predetermined choice. Has the computer defeated determinism? How is this computer not God? How is this computer different from the one Newcomb proposes?

Sorry, my last post was grammatically bad at the end.

Julia hints at why the two-box option is worse, but she still thinks that her initial reaction makes sense and thus that the problem is a paradox. She says:

"Regardless of what the computer predicted I'd do, it's a done deal, and either way I'm better off taking both boxes. Those silly one-boxers must be succumbing to some kind of magical thinking, I figured, imagining that their decision now can affect the outcome of a decision that happened in the past."

That line of reasoning does not begin to make sense to me. I assumed (and still do) that the computer is predicting our reasoning processes all the way through the final choice being made, that such predicting ability is what makes this computer special. If that is the case, and our final decision, made after we learn the ability of this computer, will be to take the one box, then there will be a million dollars in it. Why would we take the two boxes if we have faith in the predicting power of the computer, which was given in the setup of the problem?

This paradox is similar to Kavka's toxin paradox, which is also interesting and illuminating about intentionality. In the toxin puzzle I accept the viewpoint that one has to not only form an intention to drink the toxin, but must also carry out the drinking of the toxin in order to gain. Newcomb's problem seems the same, but without any great harm in simply taking the one box and maximizing your reward.

I think the decision hinges on your belief that a computer can really predict people's choices. When I heard it said there is a computer that can predict what choice specific people will make, on some level I responded, "Yeah, whatever." People are used to being told BS on a regular basis and I think you sometimes make an immediate gut level acceptance or rejection of information. For me, the ability of the computer to predict people's choices sounded implausible. I think this made me immediately assume the two boxes was the best choice (in addition to the fact that the choice had already been made). However, if I really believed the computer could actually predict what I would do then I think I would have chosen just the closed box. The computer sounded a little too magic to me and so didn't fit in my worldview.

ReplyDeleteIt seems to me that the difference between the one-box and two-box arguments depends on a distinction between causality and correlation.

- The two-boxer says that the contents of the box are already determined, so my choice can have no causal effect on the contents of the box, and therefore they'll be the same whatever I do.

- The one-boxer just looks at correlation. There are only two possibilities:

A. The closed box contains $1M and I take that one box.

B. The closed box is empty and I take both boxes.

So I will only have the $1M if I one-box. One-boxing doesn't _cause_ me to get the $1M. The events are merely correlated.

I think the two-boxer is saying something either wrong or meaningless when she says the box contents will be the same "whatever I do" or "either way I'm better off taking both boxes" (as Julia put it). Which two "ways" is Julia thinking of? There are only two possible sequences of events (A and B above), and she's better off with sequence A (one-box). If she two-boxes she _will_ be worse off (though that's not caused by her two-boxing).

I think the two-boxer is following a habitual causal way of thinking which is usually effective, but leads her to make incorrect or meaningless statements in this case.

P.S.

1. The one-box argument I gave above isn't what we would usually recognise as an argument. It merely observes a correlation. It doesn't draw any conclusion from it. It's drawing conclusions that's problematic here. Nevertheless, I believe the mere observation of this correlation would incline me to one-box. If we saw lots of other people playing the game, with all the one-boxers winning big bucks and all the two-boxers failing to do so, I'm pretty sure almost everyone would start one-boxing.

2. The two-box argument faces up to the challenge of making what we would recognise as an argument. But the argument is a failure.

I think it's been said that there are seemingly good arguments for both alternatives. My conclusion is that there isn't a good argument for either alternative. But, if we put aside arguments and just allow our instincts to take over, I think our basic inclination to follow observed correlations will lead us to one-box, especially once we've seen the success of previous one-boxing.

Computers are programmed by humans, not gods. As a programmer I would always take both boxes; it's a no-brainer really. I do not feel a need to make a particularly compelling argument.

OneDayMore quickly gets to the meat of the problem.

Judging by the responses, the "paradox" appears to be the conflict between the question that's asked, involving a magic computer, and another, different question that the reader is making up on the fly without the magic computer.

That's not a paradox; that's just poor analysis.

To those people who imagine the computer can't read thoughts and want to answer a different question: you aren't going far enough! If the computer isn't perfect, you MUST state how accurate you imagine the computer really is.

If your new computer is even 51% accurate, taking both boxes is a LOSING strategy. So by all means, invent your own question, but tell us what new assumptions you're using. If you say "it's impossible," you may be right about that point, but even with revised abilities your analysis still fails. Taking just the closed box can still be the best strategy!
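The 51% claim checks out under a simple expected-value model. This is a minimal sketch, not part of the original comment: it assumes the predictor is right with the same probability p whichever choice you make, which is how the claim is usually read.

```python
# Expected dollar value of each strategy against a predictor with accuracy p,
# assuming the predictor is right with probability p regardless of your choice.

SMALL, BIG = 1_000, 1_000_000  # $1k visible, $1M possibly in the closed box

def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled the box.
    return p * BIG

def ev_two_box(p):
    # With probability p the predictor foresaw two-boxing and left it empty;
    # only a mistaken predictor (probability 1 - p) leaves you the $1M too.
    return SMALL + (1 - p) * BIG

for p in (0.51, 0.99):
    print(p, ev_one_box(p) > ev_two_box(p))  # True both times: one-boxing wins
```

At 51% accuracy the expected values are about $510,000 versus $491,000, so even a barely-better-than-chance predictor makes two-boxing the losing strategy.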

1. Funny how your "real-life" example involves being stuck on a deserted island. Happens to me all the time.

2. I'm surprised that no one has mentioned that the fisherman is a total jackass. $1,000 to save your life?!? Club him with an oar and throw him overboard! (Then offer to save him if he pays you $1,000.)

3. Also, count me in with the one-boxers. If this is a thought experiment, then the computer is magic, and it can do what it says. Don't f*** with it. In real life, two boxes all the way. Especially if Microsoft had anything to do with it.

this is great,

i started thinking about it being simplified into two options a or b.

a gives you $1,000

b gives you $1,000,000

it seems clearer then :)

RichardW, yes, the conflict between causality and correlation is at the heart of this problem. The two rules pretty much always recommend the same course of action, except here, which is why it's so disconcerting (at least to me! and lots of other people!).

To everyone: I think I should've picked a different example than the fisherman one, because it unnecessarily incorporated a selfish-vs.-altruistic tradeoff when really all I wanted to isolate was the suboptimal outcome that results from being unable to credibly pre-commit.

A friend of mine suggested this scenario instead: you're being held hostage, and the kidnapper would be willing to let you go if he could be certain you weren't going to turn him in to the police once you're free. You can't credibly make that promise, because you know that once you ARE free you have no reason to keep it. And he knows you know that. So you end up worse off.

Finally: if there's one lesson I learned from this post and its comments, it is not to assume ahead of time that everyone else will be using my same definition of "rational"! Duly noted, all.

Earlier in this comment thread I defined rational action as self-interest maximizing, but in retrospect I think I was being unnecessarily restrictive; I should just have said that it's maximizing whatever you happen to value. So yes, if you value promise-keeping more than money then it's not irrational to keep your promise. That's why it was a confusing choice of example on my part; I should've used the hostage example.

But the part of rational decision-making that I think really IS central to the definition is that you're picking the action that will lead to the best outcome, conditional on everything outside your control at that point in time. Which is why it feels paradoxical to many people to imagine standing in front of those two boxes, knowing that their contents are outside your control now that they're already prepared, and that two-boxing seems like the best choice conditional on whatever is currently in the closed box.

Newcomb's Paradox is not a paradox; it is a fallacy. The two lines of reasoning are:

Two-Boxer:

1) The $1M is either in box B or it is not.

2) How I choose cannot change 1)

3) Choose both, as I can at least get the $1k and I can possibly get the $1M on the off-chance that the computer was wrong.

One-Boxer:

1) The computer has near-perfect prediction skills

2) My choice has been predicted to near-perfect accuracy.

3) Choose B, as predicted, and get the $1M. Get nothing on the off-chance that the computer was wrong.

The problem with the two-boxer reasoning is that it involves an implicit assumption that there is an equal probability of the $1M being in the box vs. not in the box. But the statement of the computer's ability makes it clear that the probability of correct prediction is much higher. If you run through the expected outcomes, it is only best to choose two boxes if the predictor's success rate is less than about 50%. Why on earth would you think the success rate is less than 50% when the very statement of the situation is that its success rate is "near-perfect"?
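That roughly-50% figure can be made exact. A sketch in exact arithmetic, under the same expected-value reading of the problem: setting the two expected payoffs equal and solving for the predictor's accuracy gives the break-even point.

```python
from fractions import Fraction

SMALL, BIG = 1_000, 1_000_000

# One-boxing expects p*BIG; two-boxing expects SMALL + (1-p)*BIG.
# Setting them equal: (2p - 1)*BIG = SMALL, so p = (1 + SMALL/BIG) / 2.
p_break_even = (1 + Fraction(SMALL, BIG)) / 2

print(p_break_even)          # 1001/2000
print(float(p_break_even))   # 0.5005
```

So two-boxing only maximizes expected money when the predictor is right less than 50.05% of the time, which is essentially the "less than 50%" threshold the comment describes.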

it occurs to me it may be useful as a psychoanalytical tool. perhaps correlated to an intuitive imaginative sense or some such.

this has occurred to me through Marios' work relating to the disconnected fields of philosophy and science.

where perhaps a person of a more philosophical nature would assume the one box, and the more scientific the two boxes. In this case, the application of reasoning within differing personal contexts.

Another thought is that it is apparently easier to accept the concept of someone giving you a box of $$$ than to accept a near-perfect predictive computer.

Hence obviously the two boxes.

It seems I consider the two boxes the restrictive/idiot side of reasoning :)

i amusingly thought of another parallel example.

the open box contains a life of self serving pleasure.

the closed box contains an afterlife of nothing or bliss.

there is a God that controls the content of the closed box depending on whether you choose both boxes or just the closed box.

the imperfect choices by God may require selling your soul :)

I'm working on a project to test responses to this problem. If you have an opinion and wouldn't mind, stop by

http://www.surveymonkey.com/s/YQ3LSMM

and tell me what you think. It's only 4 questions long!

I think most of the posts here just show that their authors will never get the million, because they have rationally concluded that, statistically, taking the two boxes is the best option. The computer has already figured out they are the type of person who would do this.

The only type of person that has a real chance of getting the million is the sort of person who makes whimsically random choices without any rational basis. Now the computer is the one with a ~50% chance of giving away the million.

You can try to identify the Nash Equilibrium of the game. A Nash Equilibrium is an outcome of the game in which each player would be worse off (or at least no better off) if he unilaterally changed his decision.

In this case, you choosing both boxes and the computer choosing not to put anything into the closed box is the Nash Equilibrium of the game, since had you chosen just the closed box you'd have gotten nothing, and had the computer chosen to put the million into the closed box, it would have failed the prediction. It is the only Nash Equilibrium of the setup, so this would be the rational outcome.
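That equilibrium claim can be checked mechanically. A minimal sketch: the payoffs below model the computer as scoring 1 for a correct prediction and 0 otherwise, which is an assumption of this framing rather than part of the original problem.

```python
# Your payoff is money; the computer's payoff is 1 for a correct prediction.
# You choose "one-box" or "two-box"; the computer chooses to "fill" or leave
# "empty" the closed box (i.e., predicts one-boxing or two-boxing).
payoffs = {
    ("one-box", "fill"):  (1_000_000, 1),  # predicted one-box, correctly
    ("one-box", "empty"): (0,         0),  # predicted two-box, wrongly
    ("two-box", "fill"):  (1_001_000, 0),  # predicted one-box, wrongly
    ("two-box", "empty"): (1_000,     1),  # predicted two-box, correctly
}

def is_nash(you, comp):
    """Neither player can gain by unilaterally changing their own strategy."""
    my_pay, comp_pay = payoffs[(you, comp)]
    best_for_you  = all(payoffs[(y, comp)][0] <= my_pay  for y in ("one-box", "two-box"))
    best_for_comp = all(payoffs[(you, c)][1] <= comp_pay for c in ("fill", "empty"))
    return best_for_you and best_for_comp

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('two-box', 'empty')]
```

The (one-box, fill) cell fails the test because, holding the full box fixed, you'd gain $1,000 by switching to two-boxing, which is exactly the two-boxer's intuition. The catch, of course, is that this static analysis ignores the predictor's stipulated accuracy.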

However, you could rig the game in your favor: suppose you gave $1,001 to your local church with a contract saying "you can keep this money if and only if I choose both boxes in the game." Then the computer would know you would lose by choosing both boxes (assuming it knows about the contract), and thus would predict you would choose just the closed box, which is exactly what you would do, netting $1,000,000.
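The commitment trick can be expressed in the same expected-value terms. A sketch, assuming (as the comment does) that the predictor knows about the $1,001 contract:

```python
SMALL, BIG, FORFEIT = 1_000, 1_000_000, 1_001  # the $1,001 contract penalty

def net(choice, box_filled):
    """Net payoff of a choice once the contract is in place."""
    money = (BIG if box_filled else 0) + (SMALL if choice == "two-box" else 0)
    return money - (FORFEIT if choice == "two-box" else 0)

# Whatever the closed box holds, two-boxing now nets $1 less than one-boxing,
# so a predictor who knows about the contract predicts one-boxing and fills it.
for filled in (True, False):
    assert net("one-box", filled) > net("two-box", filled)

print(net("one-box", True))  # 1000000: the outcome the contract secures
```

The contract makes one-boxing strictly dominant, which is what lets even a purely causal reasoner commit credibly and walk away with the million.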