Comments on Rationally Speaking: "It’s not all doom and gloom"

---
Richard (2013-01-27):

"Richard, I disagree with your assumption about which configurations would be prohibited. Since the configuration of my brain is apparently not prohibited, it is difficult to see how an exact copy of it would be."

But a Boltzmann brain (the "exact copy") interfaces with a wildly chaotic Universe, whereas (presumably) yours interfaces with a strongly patterned Universe. If not prohibited, I would at least say "far more improbable". I think the interface *must* affect the likelihood of a copy existing. This is an underlying assumption of any meaningful epistemology.

---
ianpollock (2013-01-15):

Thanks very much! The objection is much clearer now, and it does seem to be a good one.

---
Richard (2013-01-15):

"Richard, I disagree with your assumption about which configurations would be prohibited. Since the configuration of my brain is apparently not prohibited, it is difficult to see how an exact copy of it would be."

I categorized it as a conjecture rather than an assumption: more food for thought than something you can build a testable theory on.
The idea behind it, though, is that observations are a reflection of what's on the other side of the brain/Universe interface. A Boltzmann brain would have a boundary with something wildly chaotic, and those boundary conditions would have to be reflected inside the brain as well. But we observe a (mostly) orderly Universe. It's a view that meshes well with my intuitions, but, as I said, not something that can be easily tested, if at all.

"But it seems Boltzmann was talking about an infinite number of universes. Given worlds enough and time, I think they would have to happen."

Either that, or an infinite Universe. I was thinking of one of the conjectural possibilities described by Sean Carroll in "From Eternity to Here": an infinite high-entropy sea of particles in which periodic fluctuations give rise to regions of low entropy, such as Boltzmann brains or, in the more extreme cases, entire local observable Universes such as ours. I don't know whether it's warranted to assume that those local fluctuations are completely unconstrained, and the thought of a highly organized entity interfacing directly with a chaotic environment, while internally producing an image of an environment that is not chaotic, seems like a potential violation of constraints.

---
nihilitwit (2013-01-14):

Richard, I disagree with your assumption about which configurations would be prohibited. Since the configuration of my brain is apparently not prohibited, it is difficult to see how an exact copy of it would be.

A problem I saw with a Boltzmann brain as I understood it was that it assumed an infinite number of non-zero opportunities for a brain to come together.
But calculus teaches me that an infinite series of non-zeroes does not necessarily go to infinity if the terms are decreasing. As entropy increases, wouldn't the likelihood of a Boltzmann brain appearing in a given time interval decrease? If that is the case, Boltzmann brains are not inevitable.

But it seems Boltzmann was talking about an infinite number of universes. Given worlds enough and time, I think they would have to happen.

---
Anonymous (2013-01-14):

Fascinating. This is the first time I'm learning most of these arguments, so I might be wrong, but I think you're right.

If the mere fact that you're alive doesn't tell you anything, just as the mere fact that you're awake doesn't tell you anything in the SBP, then the fact that you're an early human shouldn't tell you anything either, just as learning that it's Monday doesn't tell you anything in the SBP. The Doomsday Argument assumes it does.

---
Richard (2013-01-13):

I'm not sure Boltzmann brains can be dissolved in the same way as the DA. Sure, the SIA favors the conclusion that we are Boltzmann brains (or rather, that I am a Boltzmann brain, since on that line of reasoning the rest of you are just figments of my Boltzmann imagination) -- assuming Boltzmann brains are not only possible but more common than actual biological brains.

I actually have a strong suspicion that Boltzmann brains are physically impossible. In other words, not every configuration of a collection of particles is physically possible -- some are prohibited.
And among those that are prohibited are those that include self-aware entities that are not actually connected (in a sensory way) to a physical environment. Of course, this is a conjectural solution: I don't think it can be proven or falsified. But then, the existence of Boltzmann brains is probably unprovable and unfalsifiable as well.

---
Richard (2013-01-13):

(continuation)
I think the DA is calculating:

Prior odds * SSA multiplier * Carter-Leslie multiplier = posterior odds *not* the same as prior odds

and I think this is wrong. You can see that it is wrong when you apply it to the SBP, where the prior odds are 1:1, the SIA multiplier is 2 (two observations of tails to one of heads), and the Carter-Leslie multiplier is 1/2 (the ratio of "populations" -- i.e., observation events -- between the two alternate histories). The SSA multiplier is 1, as noted above. Both valid approaches I gave above give final odds of 1:1. The DA's approach gives an answer (after Beauty learns that it is Monday) that favors heads by 2:1!

---
Richard (2013-01-13):

brainoil: "What bugs me is that when our question is how long into the future the human species will exist, why don't we take into account other knowledge available to us, such as what we know about great extinctions, a star's lifetime, etc., instead of relying solely on the Doomsday Argument? I thought earlier that this was what you were talking about, but I was skipping and had misunderstood."

Yes -- when you see that any selection bias you introduce gets cancelled out by an opposing bias once all of the conditionals are accounted for, the final expectation ends up the same as your prior expectation, which is based on the things you mention.

The SBP is a side track related to the study of the assumptions behind those conditionals. In fact, if I think about the SBP the right way, I can see myself in agreement with the halfers, and then a little more thought swings me back into the thirder camp.
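The multiplier bookkeeping above can be checked with a short Monte Carlo simulation of the standard Sleeping Beauty protocol (a sketch: the per-awakening tally is the SIA/"thirder" count, and conditioning on Monday plays the role of the Carter-Leslie correction):

```python
import random

def run_sbp(trials=100_000, seed=0):
    """Simulate Sleeping Beauty and tally awakening events."""
    rng = random.Random(seed)
    awakenings = []  # (coin, day) pairs; tails yields two awakenings
    for _ in range(trials):
        coin = rng.choice(["heads", "tails"])
        awakenings.append((coin, "Mon"))
        if coin == "tails":
            awakenings.append((coin, "Tue"))
    # Per-awakening frequency of heads (SIA / thirder count): about 1/3.
    p_heads = sum(c == "heads" for c, _ in awakenings) / len(awakenings)
    # Conditioning on "it is Monday" renormalizes back to the fair coin: about 1/2.
    mondays = [c for c, d in awakenings if d == "Mon"]
    p_heads_monday = sum(c == "heads" for c in mondays) / len(mondays)
    return p_heads, p_heads_monday
```

With 100,000 trials the two frequencies come out near 1/3 and 1/2, matching the claim that applying the Carter-Leslie conditional to the SIA-biased odds restores the original fair-coin prior.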
What I think is going on is that the SBP is actually two distinct mathematical problems, with different formal statements and different correct answers; when we try to express those problems in English, the language doesn't capture the subtle differences between them. So they look like the same problem, but as we think about it we get locked into one frame of mind or the other, corresponding to one of the two formal problem statements, and the opposing solution stops making sense to us.

That said, I think it is valid to apply the SIA in the case of the DA. That biases us toward longer-lived civilizations, because those civilizations produce more observers and so are more likely to produce each of us. Then the Carter-Leslie conditional can be applied, and that takes us back to the original prior (before the SIA bias is applied), which is whatever we think it is based on purely empirical results such as the frequency of extinctions, the life spans of stars, etc.

The crux is that the Carter-Leslie conditional multiplier (the ratio of the populations of long-lived and short-lived civilizations) is a *correction* for the SIA. If the SIA is assumed, you need the correction. If the SSA is assumed, you do not.

If I got that right, then the SBP is actually a strong demonstration of the flaw in the DA. Under the SIA assumption, Beauty arrives at 2:1 odds for tails, because there will be (for her) twice as many "tails observation events" as "heads observation events". She doesn't know whether it is Monday or Tuesday. "Monday" corresponds to "low birth number" in the Carter-Leslie DA. When this knowledge is applied to Beauty's 2:1 odds, they renormalize back to 1:1, the original odds of a fair coin flip.

If you instead start with the SSA, then in effect all observations of the same event are treated as a single observation, so the relative probabilities of the alternatives are unchanged from the prior.
In that case, it doesn't make sense to apply the Carter-Leslie conditional modification, because there is nothing to renormalize. In the case of the SBP: if Beauty takes the halfer position, which corresponds to the SSA, then she immediately gives equal odds to heads and tails. So learning that it is Monday does not further inform her estimate of the odds, because she did not treat her potential Monday and Tuesday observations in the tails history as two separate observations.

In summary, we have two approaches that give the same answer:

Prior odds * SIA multiplier * Carter-Leslie multiplier = posterior odds, same as the prior odds

or

Prior odds * SSA multiplier = posterior odds, same as the prior odds

So the SSA multiplier is 1.

In other words, you can start with either assumption (SIA or SSA), but you must choose the next step carefully to get the right answer. SIA and SSA are just different ways of thinking about the problem, one of which entails an additional step to reach the true odds, while the other does not.

(continued in next post)

---
Aaron Shure (2013-01-13):

Great article. I think you are right to have your existential intuitions in #6. I hope you keep pursuing them. I think #6 is also the root of the problem with Boltzmann brains, and it might have implications for thermodynamics. Smuggled into the anthropic reasoning process is the notion that "finding oneself" at one point in time is a variable -- that it has import to be here now rather than somewhere else at another time. But now has always stood outside of probability. Probability is about the future, not the present. Probability wave functions disappear in the now.
No matter how unlikely the now may be, it immediately becomes the prior. "You happen to be winning the lottery now" is less likely than "You happen to be dreaming you have won the lottery right now", but people win the lottery every day.

---
Anonymous (2013-01-12):

In other words, if it were 100 copies of Bostrom instead of beauties, and the experimenters killed a person every time a wrong answer was given, Bostrom would end up killing a lot of people.

I think what leads at least some halfers astray is the intuition that a fair coin's 1/2 probability is intrinsic to it. They say things like, "1/1,000,000??? That's a lot of confidence when the coin hasn't even been tossed yet."

What bugs me is that when our question is how long into the future the human species will exist, why don't we take into account other knowledge available to us, such as what we know about great extinctions, a star's lifetime, etc., instead of relying solely on the Doomsday Argument? I thought earlier that this was what you were talking about, but I was skipping and had misunderstood.

---
Richard (2013-01-11):

Sean, that's an interesting variation. I have a final (maybe) variation on the SBP. Maybe it will convince the halfers; maybe not. Here goes anyway. Suppose you do the experiment with 100 beauties. Forget the amnesia and the Monday/Tuesday stuff. After the beauties are all sleeping, a coin is flipped. If heads, one beauty is selected at random; she is awakened and asked what her belief is about the coin toss. If tails, two beauties are selected at random.
They are awakened (but kept out of each other's presence, so neither knows there is another beauty awake) and asked what their beliefs are about the coin toss.

From the point of view of a Bayesian beauty who is awakened and interviewed, what is the probability that the coin landed heads?

---
chbieck (2013-01-11):

Makes sense.

---
Sean (quantheory) (2013-01-11):

Interesting to see that the objection I was fuzzily waving at has come up a lot here. If I were unaware of my birth rank, then per the SIA, the fact that I am born at all weighs in favor of DoomLate, since more humans are born in that scenario.

The DA pretends to weigh in favor of DoomSoon, but it assumes that birth rank is random, whereas in fact the only reason we formulated DoomLate and DoomSoon in this way is *because* we live before DoomSoon. Anyone trying to use the DA to predict the future has a 0% chance of living after DoomSoon happens (however they define DoomSoon). Therefore there is an equal number of people who can use the DA in each scenario, and they have the same birth ranks, so the fact that I am using the DA and have a given birth rank gives me no information to distinguish the two.

So what I would say is that the DA is invalid as a whole, and the SIA is valid but gives no information that further predicts the future once I know my birth rank (like knowing that I am in cell 7 in Richard's example above). If I did not know my birth rank, the SIA would argue in favor of realities with very many people.
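Richard's 100-beauties variant above can be simulated directly; counting awakening events over many runs gives the per-awakening chance of heads (a sketch of the setup exactly as he describes it):

```python
import random

def hundred_beauties(trials=100_000, seed=1):
    """Coin flip per run: heads wakes one beauty, tails wakes two."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: one interview
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # tails: two interviews
            total_awakenings += 2
    return heads_awakenings / total_awakenings
```

The fraction comes out near 1/3, and the same number falls out of Bayes' theorem for any single beauty: P(awakened|heads) = 1/100 versus P(awakened|tails) = 2/100.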
(Some variant may still argue in favor of the existence of aliens, since my birth rank among all intelligent life is unknown!)

One more twist on the Sleeping Beauty problem. What if, instead of flipping a coin, they woke Sleeping Beauty three times: once with "Heads" written on a sign behind her, and twice with "Tails" written on the sign, and she has to say what the probability is that the sign says "Heads" this time. Is the probability 1/3? The only difference between this and the original is that here everything happens in three trials in one world, whereas the original splits everything into two possible worlds.

---
MiraMax (2013-01-11):

For the benefit of those who have trouble accessing Dieks's article, I thought I would give a more precise description using the example Ian provides in the text. For ease of presentation I will only investigate the claim that DoomSoon is true given that I am informed that my birthrank is < 200b (instead of 60b). The argument can be readily modified, however.

Ian assumes that P(DoomSoon)/P(DoomLate) = p_S / p_L = 1 / 50 and that we should use these estimates as our Bayesian priors, since we did not incorporate knowledge about our birthrank.

Dieks argues that this is flawed: the values p_S and p_L already reflect knowledge of our birthrank, since we would only assume them as probabilities for the hypotheses GIVEN that our birthrank is < 200b. In other words, Ian has so far only given us P(DoomLate|R<200b) and P(DoomSoon|R<200b).

If we are really to imagine that we could have randomly appeared at any time in history, having any of the 200 trillion birthranks (if DoomLate is true), then we need to consider the possibility that we have a rank R>200b.
But P(DoomLate|R>200b) = 1 and P(DoomSoon|R>200b) = 0.

The actual prior for DoomSoon is thus P(DoomSoon) = P(DoomSoon|R<200b) * P(R<200b) + P(DoomSoon|R>200b) * P(R>200b) = p_S * P(R<200b) + 0 * P(R>200b).

The chance that I actually have a birthrank lower than 200b is the sum of two possibilities and crucially depends on whether DoomLate or DoomSoon is true. In the case of DoomSoon I will certainly find myself among the first 200b citizens, and in the case of DoomLate my chance of finding myself among the first 200b is quite low: q = 200b/200tr.

It follows that P(R<200b) = P(DoomSoon) * 1 + P(DoomLate) * q. Inserting this into the above equation, we get:

P(DoomSoon) = p_S * P(DoomSoon) + p_S * P(DoomLate) * q <=>
(1 - p_S) * P(DoomSoon) = p_S * P(DoomLate) * q <=>
P(DoomSoon) = p_S * P(DoomLate) * q / (1 - p_S) <=>
P(DoomSoon)/P(DoomLate) = p_S * q / (1 - p_S).

In other words, the relationship between the "real" priors is P(DoomSoon)/P(DoomLate) = p_S/p_L * 200b/200tr, and therefore P(DoomSoon) is much smaller without knowledge of my birthrank.

Once my birthrank is told to me (and it is indeed R<200b), I update my priors and conclude that P(DoomSoon|R<200b)/P(DoomLate|R<200b) = p_S/p_L, and sanity is restored.

I hope that helps!

---
Greyll (2013-01-11):

The SBP is subjective and is based on a simple question: if in one experiment I guarantee you'll answer the question(s) correctly, which is better, one "heads" or two "tails"? If you think two "tails" is better because there are two questions and "heads" is only one, then the belief is actually 1/3. But in my opinion they have the same value.
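MiraMax's renormalization above can be verified with a few lines of arithmetic (a sketch using the same symbols, with p_S/p_L = 1/50 and q = 200b/200tr as in the comment):

```python
# Conditional probabilities given a birthrank below 200b (Ian's numbers).
p_S, p_L = 1 / 51, 50 / 51        # so that p_S / p_L = 1 / 50
q = 200e9 / 200e12                # P(R < 200b | DoomLate)

# "Real" priors before learning the birthrank:
# P(DoomSoon)/P(DoomLate) = p_S * q / (1 - p_S) = (p_S / p_L) * q
prior_ratio = p_S * q / (1 - p_S)
P_soon = prior_ratio / (1 + prior_ratio)
P_late = 1 - P_soon

# Bayes update on learning R < 200b (certain under DoomSoon, chance q
# under DoomLate) restores the original ratio p_S / p_L:
posterior_ratio = (P_soon * 1) / (P_late * q)
```

The prior ratio comes out far smaller than 1/50, and the posterior ratio lands back at p_S/p_L = 1/50, which is the "sanity is restored" step.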
If the coin came up heads and you get it right, the game is finished; but if it came up tails then, supposing there is no amnesia, you get tails right TWICE and the game is finished.

So in the long run, if you stick to tails you'll always have double the correct answers compared with always playing heads, and there is no need for brainwashing to see that it is not a fair game. For that reason I think one "heads" and two "tails" should have the same value, and the belief in this case is 1/2.

---
Richard (2013-01-11):

Chris, I think Bostrom's primer fails at step 2. See my post from Dec. 6 (1:03 AM) under this article: http://www.rationallyspeaking.blogspot.com/2012/11/odds-again-bayes-made-usable.html#comment-form

There I describe two thought experiments; let's call them TE1 and TE2. At first glance they seem equivalent, but on closer analysis we see that they are not.

Bostrom is assuming that his thought experiment is similar to my TE1. That is, God's flip of the coin is like my random selection of one of the two urns in TE1. I contend that Bostrom's experiment is analogous to TE2. To see why, consider the possibility that God conducts the experiment 1000 times. Then we expect that 500 times he will create 10 individuals, and 500 times he will create 100 individuals. I try to guess whether I am in one of the 500 instances of the first kind or one of the 500 instances of the second kind, on learning that I am in cell 7. God created a total of 55,000 individuals in the 1000 trials. 50,000 of those were in trials that created 100 individuals each, and 5,000 were in trials that created only 10 individuals.
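The trial counts just described can be tallied explicitly, including the update on learning the cell number (a sketch of the 1000-trial version of the thought experiment):

```python
trials_small, trials_big = 500, 500   # God's 1000 coin flips
n_small, n_big = 10, 100              # individuals created per trial

total = trials_small * n_small + trials_big * n_big

# Prior odds of being in a 100-individual trial, by head count:
prior_odds_big = (trials_big * n_big) / (trials_small * n_small)

# Learning "I am in one of the first 10 cells": certain in a small trial,
# probability 10/100 in a big one.
likelihood_ratio = (10 / n_big) / (10 / n_small)
posterior_odds_big = prior_odds_big * likelihood_ratio
```

This gives 55,000 individuals in total, 10:1 prior odds for the large group, and even posterior odds after the cell-number update, matching the 500-vs-500 count of cell-7 occupants.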
So before I know my cell number, I have reason to believe it more likely that I am in the former group, at 10:1 odds -- this is my prior. When I adjust that prior based on knowing I'm in one of the first 10 cells (which is 10x more likely if I'm in the second group), I arrive at even odds. And that makes sense, because I know that 1000 individuals were/will be created in cell 7, 500 of them in the first group and 500 in the second group. That reasoning is in line with TE2; populating individuals into cells resembles handing everyone entering a room an envelope with a number on it and an "A" or "B" card inside. The number on the envelope corresponds to the number on the cell, and the card inside indicates whether the recipient is a member of a small or a large population.

I can't see any reason to modify my reasoning if the number of trials is reduced to 100, or to 5, or to 1.

---
Richard (2013-01-11):

Tom: Yes, I've read about Bertrand's paradox. I think it arises because there is no single way to choose a chord at random (choose two points on the circumference and connect them; choose two points in the interior and connect them; choose a point in the interior and an angle; and so on). Probabilities for sets of reals are defined using measures on the sets we are interested in. For example, the probability of hitting a bullseye on a target with a random, unaimed arrow (given that you at least hit the target) is the ratio of the area of the bullseye to the area of the target. In the chords case, the measure varies with the selection method.
A simpler analogy would be the probability of rolling a 7 with two six-sided dice compared with the probability of rolling a 7 with three four-sided dice.

As for the method of determining future lifespans based on lifespan to date, I think you are referring to Gott's delta-t argument, which is described in the "literature review" link that Ian posted at the end of the article. I actually heard about this argument before I heard about the Carter-Leslie argument, and initially thought it might have some very limited validity. But Carlton Caves published a refutation (http://arxiv.org/abs/astro-ph/0001414 -- warning, contains calculus) which, after the second or third reading, made sense to me. Essentially, Caves showed that a proper Bayesian derivation of Gott's argument implies that the "temporal Copernican principle" Gott relies on is only valid when you do not know how long the phenomenon has lasted (you are no longer J. Random Observer; you are Observer Present at Age X). But you need the temporal Copernican principle for the formula to be valid, and you need the age of the phenomenon to apply the formula. Caves showed that the actual posterior probability distribution of total duration, upon learning the age of the phenomenon, is equal to the prior distribution truncated to the left of the current age (we rule out total durations less than the current age, so the probability goes to zero on that interval) and renormalized so that the integral over all possible remaining durations is 1. And this result agrees with intuition (okay, it agrees with mine, at least).

---
nihilitwit (2013-01-10):

The sampling is not instant; rather, it is continuous.
At any time, the only people available to consider this problem are those who are currently alive, and they are subject to the anthropic principle.

The appropriate analogy in the Blam-O example goes something like this: your agents have in fact found an unknown number of fireworks. Since they are all practical jokers, they insist on first giving you the lowest serial number found among all the fireworks. The lowest serial number is 112. Estimate the number of fireworks.

---
chbieck (2013-01-10):

Reading Bostrom's primer, my initial reaction was that steps I and II are fine, but III does not follow, as the doors are temporally simultaneous whereas we are not. Also, why is step I described as using the self-sampling assumption? The SIA (as defined in Wikipedia) would arrive at the same result.

Third point (or rather question): I'm not sure about the math, but even with the given probability assumptions, aren't there infinite combinations of DoomSoon / DoomLate where DS1, DS2, ... all have the same odds over DL1, DL2, ...? That is, even if the DA is correct, it only shows that Doom will happen sooner rather than later, not that it will happen soon in absolute terms. (I.e., P(DS2) = P(DL1), etc.)

Cheers
Chris

---
chbieck (2013-01-10):

I am a bit confused as to why 1/2 would be an answer to consider at all. Isn't the SBP equivalent to Monty Hall? In both cases the experimenter knows something extra and acts upon it, and the subjects know that.
In MH, it is the opened door that is guaranteed not to have a car behind it; in the SBP, it is the fact that only tails induces another day of sleep.

---
MiraMax (2013-01-10):

Sorry for my inapt summary. Dieks uses a time-based formulation, but I think his argument runs equally well on birth ranks. He proposes that the mistake "doomsayers" make rests on their confidence that the probabilities we currently devise for a doomsday event should be taken as our Bayesian priors for this event before knowing which birthrank we actually have.

But this seems flawed on its face, since if I knew my birthrank to be higher than the 60 billionth, I would know for sure that "DoomSoon" is incorrect. In other words, in our "common sense" probabilities about doomsday at rank X, we have already smuggled in our knowledge that our birthrank is actually lower than X. In order to get our priors before we know our birthrank, we need to consider the possibility that knowledge of our birthrank alone invalidates DoomSoon.

Hmmm ... not sure whether this was any clearer. Anyway, the article on the site I linked is a preprint TeX file, so you should be able to read it with any text editor.

---
ianpollock (2013-01-10):
Thanks! I had not heard of this argument.

> In a nutshell he argues that the hypothesis "Doomsday will be soon" is ill-posed in a Bayesian analysis since "soon" already implies the current position of the observer in time.

I'm not sure I follow, just based on your comment (the paper is not in a format I can read here; I'll try to access it later). "DoomSoon" is just a nickname for the hypothesis; its actual content is "humanity will last until the 60 billionth birth rank," which is not contaminated with anything indexical.

> If that is the case then the priors of this hypothesis and its negation are much different than our actual priors now, since before we know our position in time (or our birth rank) we could live at any time (even after date X if the hypothesis is false). The upshot is that once we take our position in time (or our birthrank) into account, the priors are adjusted such that they fall back to the same probabilities we would devise for these hypotheses on the basis of our scientific insights up to now.

I wasn't able to follow this. If you're in the mood, spelling it out would be great; if not, I'll try to read that paper soon.

---
Sean (quantheory) (2013-01-10):

As Richard suggested, I believe that Sleeping Beauty *should* precommit to answering 1/3. This is not to say that she believes the chance of heads is 1/3. Rather, when she is woken up, the chance that it is after heads is 1/3, so finding herself awake is evidence against heads (even though she will always experience it at least once).
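The precommitment point can be checked by scoring the two constant guessing strategies per awakening (a sketch; "accuracy" here means the fraction of awakenings at which the guess matches the coin):

```python
import random

def score_constant_strategies(trials=100_000, seed=2):
    """Per-awakening accuracy of 'always heads' vs 'always tails'."""
    rng = random.Random(seed)
    heads_right = tails_right = awakenings = 0
    for _ in range(trials):
        coin = rng.choice(["heads", "tails"])
        n = 1 if coin == "heads" else 2   # tails means two awakenings
        awakenings += n
        if coin == "heads":
            heads_right += n
        else:
            tails_right += n
    return heads_right / awakenings, tails_right / awakenings
```

Always guessing tails is right at about 2/3 of awakenings, which is why a Beauty graded per awakening should report 1/3 for heads even though the coin itself is fair.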
Being woken up doesn't give her new information, but she knows beforehand that the experiment is biased to make "tails" the right answer more often than "heads".

Think of the Monty Hall problem. You know that switching is the best strategy before the game starts, because doing so allows you to systematically exploit information that the *host* has but you don't.

As for my opinion of the SIA and DA, I've grown a bit more comfortable with them, but it has taken me a while to think through the very large number of variants and analogies.

The Shooting Room is interesting. I'm tempted to think that the answer depends on whether you magically know ahead of time that the experiment will last long enough to use you. It's very close to the DA.

---
Sean (quantheory) (2013-01-10):

I would argue that one distinction has to do with the fact that all cells are used, whereas in the DA not all possible people are born. If the very fact that you are being used in the experiment changes the odds, a simple count of the various outcomes is no longer the best you can do.

Similarly, I could argue that the observation that I exist with birth rank X is not a matter of "finding" myself in arbitrary position X, but rather the observation that I exist. That is, I could not have had a very different birth rank and still existed, so the fact of my existence is an observation that this "me" exists, rather than the human race having died out years ago, and thus *increases* the expected lifespan of humanity.
This is a way of sneaking off to appeal to the SIA.

---
MiraMax (2013-01-10):

I think Dieks's refutation is on the money regarding the doomsday problem: http://philsci-archive.pitt.edu/2144/

In a nutshell, he argues that the hypothesis "Doomsday will be soon" is ill-posed in a Bayesian analysis, since "soon" already implies the current position of the observer in time. A reasonable hypothesis would instead be "Doomsday will be before date X". If that is the case, then the priors of this hypothesis and its negation are much different from our actual priors now, since before we know our position in time (or our birth rank) we could live at any time (even after date X, if the hypothesis is false). The upshot is that once we take our position in time (or our birthrank) into account, the priors are adjusted such that they fall back to the same probabilities we would devise for these hypotheses on the basis of our scientific insights up to now.

This also neatly dissolves the Sleeping Beauty problem, so I am a bit puzzled that his argument is not mentioned here. Pisaturo argues along similar lines (Pisaturo, R. 2009. Past longevity as evidence for the future. Philosophy of Science 76: 73-100.)