by Ian Pollock
[Related (and much more in-depth): Fifteen Arguments Against Finite Frequentism; Fifteen Arguments Against Hypothetical Frequentism; Frequentist versus Subjective View of Uncertainty.]
Stop me if you’ve heard this before: suppose I flip a coin, right now. I am not giving you any other information. What odds (or probability, if you prefer) do you assign that it will come up heads?
If you would happily say “Even” or “1 to 1” or “Fifty-fifty” or “probability 50%” — and you’re clear on WHY you would say this — then this post is not aimed at you, although it may pleasantly confirm your preexisting opinions as a Bayesian on probability. Bayesians, broadly, consider probability to be a measure of their state of knowledge about some proposition, so that different people with different knowledge may correctly quote different probabilities for the same proposition.
If you would say something along the lines of “The question is meaningless; probability only has meaning as the many-trials limit of frequency in a random experiment,” or perhaps “50%, but only given that a fair coin and fair flipping procedure is being used,” this post is aimed at you. I intend to try to talk you out of your Frequentist view: the view that probability exists out there and is an objective property of certain physical systems, which we humans, merely fallibly, measure.
My broader aim is therefore to argue that “chance” is always and everywhere subjective — a result of the limitations of minds — rather than objective in the sense of actually existing in the outside world.
Random Experiments
What, exactly, is a random experiment?
The canonical example from every textbook is a coin flip that uses a fair coin and has a fair flipping procedure. “Fair coin” means, in effect, that the coin is not weighted or tampered with in such a way as to make it tend to land, say, tails. In this particular case, we can say a coin is fair if it is approximately cylindrical and has approximately uniform density.
How about a fair flipping procedure? Well, suppose that I were to flip a coin such that it made only one rotation, then landed in my hand again. That would be an unfair flipping procedure. A fair flipping procedure is not like that, in the sense that it’s … unpredictable? Sure, let’s go with that. (Feel free to try to formalize that idea in a non question-begging way, if you wish.)
Given these conditions, frequentists are usually comfortable talking about the probability of heads as being synonymous with the long-run frequency of heads, or sometimes the limit, as the number of trials approaches infinity, of the ratio of trials that come up heads to all trials. They are definitely not comfortable with talking about the probability of a single event — for example, the probability that Eugene will be late for work today. William Feller said: “There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines ‘out of infinitely many worlds one is selected at random...’ Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.”
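As a rough illustration of that limiting-ratio definition, here is a minimal simulation sketch (the function name, seed and 50% bias are illustrative choices of mine, not anything canonical in the frequentist literature):

```python
import random

def running_frequency(n_trials, p_heads=0.5, seed=0):
    """Simulate n_trials flips and return the running ratio of heads to flips."""
    rng = random.Random(seed)
    heads = 0
    ratios = []
    for i in range(1, n_trials + 1):
        if rng.random() < p_heads:  # one flip of a coin with bias p_heads
            heads += 1
        ratios.append(heads / i)
    return ratios

# The frequentist identifies P(heads) with whatever this ratio settles toward.
for n in (10, 100, 10_000, 1_000_000):
    print(n, round(running_frequency(n)[-1], 4))
```

Even after a million flips the ratio only hovers near 0.5; the definition appeals to a limit that no finite run of trials ever reaches.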
The first, rather practical problem with this is that it excludes altogether many interesting questions to which the word “probability” would seem prima facie to apply. For example, I might wish to know the likelihood of a certain accident’s occurrence in an industrial process — an accident that has not occurred before. It seems that we are asking a real question when we ask how likely this is, and it seems we can reason about this likelihood mathematically. Why refuse to countenance that as a question of probability?
The second, much deeper problem is as follows (going back to coin flipping as an example): the fairness (i.e., unpredictability) of the flipping procedure is subjective — it depends on the state of knowledge of the person assigning probabilities. Some magicians, for example, are able to exert pretty good control over the outcome of a coin toss with a fairly large number of rotations, if they so choose. Let us suppose, for the sake of argument, that the substance of their trick has something to do with whether the coin starts out heads or tails before the flip. If so, then somebody who knows the magicians’ trick may be able to predict the outcome of a coin flip I am performing with decent accuracy — perhaps not 100%, but maybe 55 or 60%. Suppose that a person versed in such tricks is watching me perform what I think is a fair flipping procedure. That person actually knows, with better than chance accuracy, the outcome of each flip. Is it still a “fair flipping procedure?”
This problem is made even clearer by indulging in a little bit of thought experimentation. In principle, no matter how complicated I make the flipping procedure, a godlike Laplacian Calculator who sees every particle in the universe and can compute their past, present and future trajectories will always be able to predict the outcome of every coin flip with probability ~1. To such an entity, a “fair flipping procedure” is ridiculous — just compute the trajectories and you know the outcome!
Generalizing away from the coin flipping example, we can see that so-called “random experiments” are always less random for some agents than for others (and at a bare minimum, they are not random at all for the Laplacian Calculator), which undermines the supposedly objective basis of frequentism.
The apparent saviour: Argumentum ad quantum
“Ah!” you say, laughing now. “But you, my friend, are assuming determinism is true, whereas Quantum Mechanics has proven that determinism is false — objective chance exists! So your Laplacian Calculator objection fails altogether, being impossible, and the coin example fails if quantum randomness is involved, which it might be.”
Let’s review the reasons why Quantum Mechanics is held to imply the existence of objective chance (i.e., chance that does not depend on states of knowledge).
There are a variety of experiments that can be performed in order to bring out this intuition, but the simplest by far is simply to take a piece of radioactive material — say, uranium — and point a Geiger counter at it. The Geiger counter works because passing radiation makes an ionized (hence, electrically conductive) path in a gas between two electrodes that are at different voltages from each other. Electrical current flows through that ionized path, into a speaker, and makes a clicking sound, signaling the passage of an ionizing particle.
The ionizing radiation itself is caused by the radioactive decay of elements with unstable nuclei — for example, uranium. Uranium-238 has a half-life of 4.5 billion years, which means that in a given sample of U-238, about half of the atoms will have decayed after 4.5 billion years. However — and here is where the supposed objective randomness comes in — one can never say for any given atom when it will decay: now, or next Tuesday, or in 10 billion years. All one can say is that one would give odds of 1:1 (50% probability) that it will decay in the next 4.5 billion years (unfortunately, collecting on that bet is somewhat impractical).
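For concreteness, the arithmetic behind that 1:1 assessment is ordinary exponential decay; a minimal sketch using the post's round 4.5-billion-year figure:

```python
import math

T_HALF = 4.5e9  # half-life of U-238 in years (the post's round figure)

def p_decay_within(t_years, t_half=T_HALF):
    """Probability that a single atom decays within t_years, assuming
    exponential decay with rate lambda = ln(2) / t_half."""
    return 1 - math.exp(-math.log(2) / t_half * t_years)

print(p_decay_within(T_HALF))  # 0.5: one half-life gives exactly even odds
print(p_decay_within(100.0))   # ~1.5e-8: a single atom almost surely survives a century
```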
For reasons that I do not have space to get into, the idea that the U-238 atom has some sort of physical “hidden variable” that would determine the time of its decay, if only we could measure this variable, is pretty much ruled out (by something called Bell’s theorem). So prima facie, it appears that nature itself is characterized by objective, irreducible randomness — therefore, chance is objective, therefore frequentism is saved!
There are at least two problems with this argument.
One is that the interpretation of the quantum mechanical experiments outlined above as exhibiting “objective chance” is highly controversial, although it is true that the above is the orthodox interpretation of QM (mainly due to historical accident, it must be said). Yudkowsky has done an excellent job of arguing for the fully deterministic Many Worlds Interpretation of QM, and so has Gary Drescher in “Good and Real,” so I am not going to try to recapitulate it. In essence, all you need in order to reject the standard interpretation above (usually called the Copenhagen interpretation) is to properly apply Ockham’s razor and guard against mind-brain dualism.
The most important problem with the argument that QM rescues frequentism, however, is that even given the existence of objective chance for the reasons outlined above, experiments that are actually characterized by such quantum randomness in anything higher than the tenth decimal place are incredibly rare.* In other words, even if we accept the quantum argument, this only rescues a very few experiments — essentially the experiments done by quantum physicists themselves — as being objectively random.
It’s worth clarifying what this means. It does not mean that QM doesn’t apply to everyday situations; on the contrary, QM (or whatever complete theory physicists eventually come up with) is supposed to apply without exception, always & everywhere. No, the issue is rather that for macro-scale systems, Quantum Mechanics is well-approximated by classical physics — for example, fully deterministic Newtonian mechanics and fully deterministic electromagnetic theory (that’s why those theories were developed, after all!). Macro-scale systems, in other words, are almost all effectively deterministic, even given the existence and (small) influence of quantum indeterminacy. This definitely applies to something as flatly Newtonian as a coin toss, or a roulette wheel, or a spinning die — let alone statistics in the social sciences.**
So, unless frequentists wish to have their interpretation of probability apply ONLY to the experiments of quantum mechanics — and even that only arguably — they had better revise their philosophical views of the nature of probability.
Chaos theory?
Another common way to try to sidestep the problems of frequentism is to say that many physical systems are truly unpredictable, according to chaos theory. Take, for example, the weather — surely, this is a chaotic system! If a butterfly flaps its wings there, why, we might have a tornado here.
The trouble is that chaos theory openly acknowledges its status as a phenomenon of states of knowledge rather than ontology. Strictly speaking, chaos refers to the fact that many in-principle deterministic systems are, in practice, unpredictable, because a tiny change in initial conditions — perhaps too small a change to measure — makes the difference between two or more highly distinct outcomes. The important point is that higher resolving power (better knowledge) reduces the chaotic nature of these phenomena (i.e., reduces the number of possibilities), while never eliminating chaos completely. Hence, although chaos theory chastens us by revealing that the prediction of many deterministic physical systems is a fool’s errand, our view of chance remains essentially the same — chance is still subjective in the sense of depending sensitively on our state of knowledge. The weather is a chaotic system, but it is less chaotic to a good weatherman than to an accountant!
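A minimal sketch of that sensitivity to initial conditions, using the logistic map as a stand-in chaotic system (the map and the starting values are illustrative choices, not anything specific to weather models):

```python
def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)  # initial conditions differ by one part in a million

for step in (0, 5, 10, 20, 30):
    print(step, round(a[step], 4), round(b[step], 4), round(abs(a[step] - b[step]), 4))
```

The rule is fully deterministic, yet two starting points that differ by one part in a million disagree wildly within a few dozen steps; an observer who knows the initial value to more digits simply stays accurate for longer.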
Some additional thoughts
Let’s return to the concept of objective chance. What does this phrase even MEAN, exactly?
Going back to the example of radioactive decay, how does the radioactive atom “decide” to decay at one time as opposed to another? We have already agreed that it does not have any physical guts which determine that it will decay at 15:34 on May 13, 2012. So are we saying that its decay is uncaused? Very well, but why is it un-caused to decay on May 13 as opposed to May 14? Your insistence that the decay is “objectively random” doesn’t seem to remove my confusion on this score.
The basic problem is that there is just no way of defining or thinking about “randomness” without reference to some entity that is trying to predict things. Please go ahead and attempt to do so! Imagine a universe with no conscious beings, then try to define “randomness” in terms that do not reference “knowing” or “predicting” or “calculating” or whatever.
Don’t get me wrong, I’m okay with the idea of unpredictability even in principle. Goodness knows, humans have epistemic limits, and some are doubtless insuperable forever. The problem is that unpredictability, randomness, chaos — whatever you want to call it, however unavoidable it is — is a map-level phenomenon, not a territory-level phenomenon.
So sure, you can tell me that there are some things that exist, but are un-mappable even in principle. But please don’t tell me that un-mappability itself is out there in the territory — that’s just flat out insane! You’re turning your own ignorance into ontology!
... And yet, that is exactly what the standard interpretations of QM say. But one person’s modus ponens is another’s modus tollens. Some would say “Standard interpretations of QM imply objective chance, therefore objective chance exists.” But it’s also possible to say “Standard interpretations of QM imply objective chance, but objective chance is gibberish, therefore the standard QM interpretations are incorrect.” ***
Hey, Einstein agreed with me, so I MUST be right!
To summarize:
> Frequentism would seem to be completely undermined by the fact that uncertainty is always subjective (i.e., less uncertain for agents with more knowledge).
> Quantum mechanics appears to offer an ‘out,’ by apparently endorsing objective chance against determinism.
> However, such an interpretation of QM is highly controversial.
> Also, even if that interpretation were accepted, the universe would still be deterministic enough to eliminate most of the classic questions of probability from a frequentist framework, making frequentism correct but almost useless.
> When considered carefully, the entire concept of ‘objective chance’ is highly suspicious, since (from a philosophical point of view) it turns epistemic limits into novel ontology, and (from a scientific point of view) it makes basic physics (e.g., radioactive decay) dependent on terminology (“unpredictable,” etc.) that is unavoidably mental in character.
_______________________
Footnotes:
* Okay, I admit it. I don’t actually know which decimal place. But it’s definitely not the first, second, or third.
** Likewise for free will. Even if objective chance would rescue contracausal free will (it wouldn’t), and even if objective chance actually existed (don’t take that bet!), the universe just would not be objectively chancy enough to make everyday decisions non-deterministic. Good thing free will isn’t undermined by determinism after all!
*** I wish I could give you the answer to the radioactive decay riddle now, but it’s not going to fit in one post — not even close.
Does defining a random event as follows meet your challenge?
Random event: An event that if repeated would produce a sequence with no possible pattern
Massimo
>"Good thing free will isn’t undermined by determinism after all!"
The only way some philosophers have been able to maintain that is to change the discussion....by changing the meaning of words (free will). For some it now means simply "you can act on your desires"...This is simply an "escape" from the original problem of free will.
DJD: It's not Massimo; the post's author is Ian Pollock (though the text was posted by this site's owner, Massimo).
ReplyDeleteHector M.
Thanks. I saw the "posted by Massimo" at the bottom and concluded that he wrote those comments just below the dividing line. Thanks again.
Ian
>"Good thing free will isn’t undermined by determinism after all!"
The only way some philosophers have been able to maintain that is to change the discussion....by changing the meaning of words (free will). For some it now means simply "you can act on your desires"...This is simply an "escape" from the original problem of free will.
Thus probability is subjective, based on states of knowledge. But how are these subjective states of knowledge attained, I ask. It is true that with increased knowledge I can make a better assessment of the odds of an event, and in that sense my assessment of probability is a subjective phenomenon. Two different persons, with different degrees of knowledge, may assess the probability differently. But how do I come to have more or less knowledge, say, on the likelihood that tomorrow will rain, or that "most coins" give tails 50% of the time? That idea may be in my head, essentially, for three (not mutually exclusive) reasons:
1. Frequency. Id est, repeated observations of the same phenomenon leading to a frequentist assessment of probability. For instance, flipping many coins has yielded the conclusion that coins fall on heads (or tails) about 50% of the time. Frequencies observed in the past, with this coin or other coins, or many other similar objects like dishes or sheets of paper, are the basis for affirming a probability of 0.50. Of course, my statement to that effect is subjective in the sense of reflecting my state of mind, but it is also grounded on objective facts just like my judgment that the US has been a Republic for the last 230-odd years, or that nights tend to be warm in summer, or that droughts in Somalia happen on average every 3-4 years.
2. Theory. This basis for probability relies on some internal mechanism, captured by the theory, accounting for the variability of results over time and their expected distribution. For instance, a physical theory of the probable trajectories of flat cylindrical bodies, like coins, may predict that they are likely to fall on each side 50% of the time, the exact outcome depending on small differences in initial conditions (e.g. regarding the person or machine doing the flipping, or very small variations in each coin's smoothness or density). A more sophisticated physicist may predict the chances of catching a neutrino from outer space in an observatory established at some deep underground lake. Of course, a sane scientist will not be satisfied with theory alone, and will go for empirical confirmation, but the theory told him already what to expect on average, although unable to tell the outcome of every individual instance of the phenomenon in question (such as the exact time the next neutrino will be observed).
3. Whim. An idea about the probability of something may be in my head with no rational basis altogether. I just made up the probabilities in my own head, for no (rational) reason. E.g., I have this feeling that on Thursday mornings I am luckier, or that coins obey my desires if I desire really strongly; so I bet that this coin has a great chance of falling heads because it is a Thursday morning, or because I have wished it very strongly. This would be subjective probability in its most extreme version. Perhaps my apparently whimsical idea has actually some grounds: perhaps my brain is synthesizing a number of unconnected pieces of information and yielding a folk-expert judgment even if I am unaware of the mental process involved. But this type of "abduction" (to use Peirce's term in a slightly different context) has just a heuristic validity unless reduced somewhat to the other two categories. Forensic analysis of folk expertise usually explains the reasons for those judgments, or for their long-term accuracy.
(to be continued)
(continued from previous comment)
As implied by the above enumeration, scientific probabilities belong to the first and/or second classes. Mere whimsical subjectivity is no serious basis for any probabilistic assessment.
One has also to distinguish statements of probability concerning single events from those concerning populations of events. I may assign a high likelihood to an event (such as that tomorrow will be rainy), but no actual outcome (raining or sunny) may legitimately refute my statement: rain may have been extremely likely even if tomorrow turns out to be sunny after all: what one ACTUALLY means with a meteorological forecast is that over a great many days like today, in MOST cases (i.e. with high frequency) the next day was rainy. The meteorologist may also introduce theory, in the shape of detailed satellite and ground-based observations in recent days, coupled with some complex atmospheric model yielding such prediction. The prediction cannot be falsified by observing tomorrow's weather, but it can be falsified by collecting a relevant dataset (covering many occasions and places) and testing the hypothesis empirically.
Another useful distinction is between the statement of a probability and the adoption of a certain decision or course of action on the basis of such statement. For instance, long and abundant experience may suggest that a student with low SAT scores is not likely to do very well in college, and this judgment may be used by a college administrator to refuse admission. However, the student in question may be another Einstein, unbeknownst (for the time being) to the administrator and even to the student herself. In the meteorological example above, one may have to DECIDE what to wear tomorrow, e.g. taking or not an umbrella, and a prudent person would (albeit skeptically) look up the meteorological forecast and decide to carry the umbrella if the forecast says the chances of rain are high. The decision may ultimately be wrong as regards tomorrow (if the day turns out to be sunny after all) but IN THE LONG RUN (i.e. over very many similar days) it would probably be right to take the umbrella on such kind of occasions, to minimize the number of occasions in which one gets drenched to the bone.
>"Good thing free will isn’t undermined by determinism after all!"
I wonder: what would happen if some cherished notion of ours were someday destroyed by some scientific discovery? It might happen that some such discovery shows conclusively that the notion of free will is nonsensical and free will illusory. Would it be "a bad thing" if such an event occurred? Or a good thing just to ignore it, or perhaps to burn the guilty scientist at the stake?
It has happened, you know. Copernicus and Galileo did shake the whole edifice of the Christian vision of the universe (as very well explained by Thomas S. Kuhn in his --in my opinion-- best book, The Copernican Revolution, much superior to his later epistemological musings on incommensurable paradigms and suchlike). Were Copernicus and Galileo bad news? Was Darwin "bad" for the cherished notion that God made us in his image?
It may be comforting to learn that a particular piece of scientific knowledge does not imply the demise of some of our cherished beliefs, but that has nothing to do with the validity of such a piece of scientific knowledge. I would be very glad the day I see a scientist following his discoveries even against his most cherished beliefs, and I strongly advise ruthlessly criticizing and double-checking any conclusion that does coincide with the scientist's values and beliefs.
End of off-topic digression.
@DJD: I disagree. Extensively, when people say they have free will, they are referring to the fact that they can make meaningful choices - which is perfectly true given determinism. However, it is also true that many people are confused about what kinds of entities can make meaningful choices. In other words, people's intuition of freedom is correct and deserves to be called free will, even if they're confused philosophically about the issue.
As an analogy, Aristotle did not fail to be meaningful when he spoke of water, even though he erred in thinking it was an element.
"In principle, no matter how complicated I make the flipping procedure, a godlike Laplacian Calculator who sees every particle in the universe and can compute their past, present and future trajectories will always be able to predict the outcome of every coin flip with probability ~1."
It seems like you're assuming your conclusion here. A Laplacian Calculator could only exist in a world where chance does not exist in the territory, and to imagine the existence of a Laplacian Calculator is to create a map, not to define the territory. One could just as easily imagine that a Laplacian calculator could not work, because chance exists within the territory.
Ian
>"I disagree. Extensively, when people say they have free will, they are referring to the fact that they can make meaningful choices - which is perfectly true given determinism."
What "people"?? Using what "people" say or mean is a way of avoiding or changing the discussion and debate that has existed in the field of philosophy for centuries. What if most "people" meant something different than you regarding QM? Would that shake up the scientific or philosophic world regarding questions about QM and its relationship to determinism? Or better yet...what if most "persons" think of 'determinism' as meaning something different than you, or different from the field of physics or philosophy? Would you then say that determinism is not deterministic?
1/ Ok, so Quantum theory is only about very small events (so no macroscopic randomness), and chaos theory is all about a lack of knowledge of very small events (so not a genuine randomness) that is amplified to the macroscopic world.
But now combine both... And you have a genuine macroscopic randomness.
2/ The many-worlds interpretation is *very* controversial indeed. One could argue it is a phenomenally empty theory (no clear derivation of the probabilities of events). It seems that Ockham's razor has been cutting too well...
3/ Do not turn your own ignorance into ontology? It appears that epistemology and ontology are all mixed together in QM and cannot be clearly disentangled. After all, "waves of probability" (that's what they are from a phenomenological point of view) "really" do interfere with each other. That deep question cannot be simply dismissed.
All these remarks are not meant to promote frequentism, but I'm sure we can find a better defense of Bayesianism...
Ian, thanks for your interesting article. As a physicist who specializes in foundations of quantum mechanics, I feel obligated to comment on some of your remarks about determinism and chance in QM:
<< For reasons that I do not have space to get into, the idea that the U-238 atom has some sort of physical "hidden variable" that would determine the time of its decay, if only we could measure this variable, is pretty much ruled out (by something called Bell's theorem). >>
Bell's theorem does not say that. Bell's theorem says that no theory of *locally causal* hidden variables can reproduce all the predictions of QM. However, there exist a number of nonlocal-causal (and local-retrocausal) hidden variable theories which have been shown to reproduce the predictions of standard QM (to the extent that the predictions of standard QM are well-defined) for all experiments. Perhaps the most well-known of such nonlocal-causal hidden variable theories is the pilot-wave theory of de Broglie and Bohm. In fact, it is well-established (see Speakable and Unspeakable in Quantum Mechanics) that Bell himself was led to discover his theorem as a result of first noticing that pilot-wave theory is a manifestly nonlocal theory. If you're interested, I can send you some references.
<< So prima facie, it appears that nature itself is characterized by objective, irreducible randomness — therefore, chance is objective, therefore frequentism is saved! >>
This sounds to me like a non sequitur. Frequentism does not imply that chance has to be 'objective' in the sense of being fundamental. Indeed, even in deterministic hidden variable theories such as pilot-wave theory, a frequentist interpretation of probability is used to prove that pilot-wave theory reproduces the predictions of standard QM (e.g. such as in the case of radioactive decay) for all experiments.
<< Yudkowsky has done an excellent job of arguing for the fully deterministic Many Worlds Interpretation of QM, and so has Gary Drescher in “Good and Real,” so I am not going to try to recapitulate it. In essence, all you need in order to reject the standard interpretation above (usually called the Copenhagen interpretation) is to properly apply Ockham’s razor and guard against mind-brain dualism. >>
Yudkowsky seems to be unaware that there is considerable controversy in the foundations of physics about whether a Many Worlds interpretation of QM can even make sense of probabilities (whether in a frequentist or Bayesian framework). It's not nearly as simple as 'properly applying Ockham's razor and guarding against mind-brain dualism'. For a broad introduction to the controversies regarding the interpretation of probability in Many Worlds QM, I refer you to the videos of this conference:
http://users.ox.ac.uk/~everett/
Cheers,
Maaneli Derakhshani
New question from a philosophy angle rather than a physics one: is determinism as a proposition governing any system, including the 'total' system of the universe, a falsifiable one? I say it is not; any inexplicable variability can be explained away as the result of a deterministic phenomenon one has not yet identified. As well it should be; this is good motivation from the point of view of science, which operates under the aegis of a mandate of physicalism, or causal closure of the physical, as it is also known. I find very tenuous the leap from there to the statement that determinism has so far withstood scrutiny as a scientific theory rather than being an indispensable item in the framework of the human activity called science that can never itself be part of a scientific theory (i.e. join the inside of that which it bolsters).
This is not meant as a criticism of Bayesianism or Mr. Pollock; I consider myself a Bayesian, and Mr. Pollock is certainly right to discount claims of QM as a falsification of ontological determinism, and on exactly these same grounds. What it is meant to argue is that Bayesianism's truer virtue is that epistemological uncertainty includes any ontological uncertainty as a subset; what is finally uncertain is certainly part of what I don't know for certain. It is a necessary condition for actually doing statistical analysis of a scientific experiment that the difference between variability due to uncontrollable and unobservable variability in initial conditions, that due to measurement error, and that due to true indeterminism, if it be real, be counted together in the great blue yonder of the stochastic epsilon (or u, in other notation), and the focus be on the obtainable range of knowledge, not the source of the unknowable.
Thanks to all for the excellent object level responses! Suffice it to say I am somewhat less confident of the QM part of the argument pending some of the suggested reading; thanks to Maaneli for the recommendations.
@ABH: I don't think that works, for one thing because "possible" seems to be used here to mean "discernible," for another because "no pattern" is not really well defined; even a very ugly sequence can still be classified as fitting a pattern - just a very heavily specified pattern.
@Hector M: "But how do I come to have more or less knowledge, say, on the likelihood that tomorrow will rain, or that "most coins" give tails 50% of the time? "
Well, if you're asking where the 50% prior comes from, it's usually just a maximum entropy prior that divides the 100% probability mass equally between however many possible outcomes there are (in this case 2, ceteris paribus). No physical model or frequencies need be involved YET. Then one usually speaks of evidence changing that prior in one direction or another.
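A minimal sketch of that prior-plus-evidence picture (the likelihood numbers below are invented purely for illustration, not taken from anyone's example in this thread):

```python
from fractions import Fraction

# Maximum-entropy prior over two outcomes: split the probability mass evenly.
prior = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# Hypothetical evidence: suppose seeing the coin's starting position makes
# "heads" three times as likely as "tails" to produce what we observed.
likelihood = {"heads": Fraction(3, 4), "tails": Fraction(1, 4)}

def update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(update(prior, likelihood))  # {'heads': Fraction(3, 4), 'tails': Fraction(1, 4)}
```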
"Frequency. Id est repeated observations of the same phenomenon leading to a frequentist assessment of probability."
Well, strictly, repeated observations of the SAME phenomenon are going to lead to frequency 1 or 0, except *arguably* if we're talking about quantum randomness. To get a 50% (or whatever) frequency, you're going to have to have a subtly different phenomenon each time (which is the case for e.g., the coin flip). The more you try to systematize the experiment to actually give you the SAME phenomenon (e.g., by using a mechanical coin-flipping machine, or making the same hand movement each time), the more the phenomenon will do the EXACT same thing every time and render your talk of probability unnecessary.
Don't get me wrong, I'm perfectly okay with using frequencies in lots of situations, I just deny the identity with probabilities. Maybe you have a long term frequency of 50% for heads, but if for example I know the magician's trick about starting positions for coin flips, my probability for this toss is going to deviate from the observed 50% frequency, whereas yours may not.
"For instance, a physical theory of the probable trajectories of flat cylindrical bodies, like coins, may predict that they are likely to fall on each side 50% of the time."
Right, that would mean that such a theory was an essentially ignorant theory of coin flipping - i.e., it doesn't improve on the ignorance prior. There may be good reasons for that ignorance - indeed there are by hypothesis (bla bla small diffs in initial conditions). But it would be wrong to take that theory as evidence that the "true" probability is "really" about 50%. The theory is just telling you that it doesn't have any resolving power to help you predict coin flips, so you might as well stick with your ignorance prior.
"I may assign a high likelihood to an event (such as that tomorrow will be rainy), but no actual outcome (raining or sunny) may legitimately refute my statement: rain may have been extremely likely even if tomorrow turns out to be sunny after all: what one ACTUALLY means with a meteorological forecast is that over a great many days like today, in MOST cases (i.e. with high frequency) the next day was rainy."
Certainly; this is usually referred to in the context of "calibration." However, a single contrary outcome may not outright refute a probability assignment, but it will certainly give evidence that it was a bad one. To see this, take an extreme case: if I say "a million to one in favour of rain tomorrow" and it doesn't rain, my assessment is effectively falsified, even though I *could* just be very unlucky.
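One hedged way to make "effectively falsified" quantitative is a logarithmic score, which charges a forecaster by the improbability they assigned to what actually happened (the 60% comparison forecast below is an arbitrary illustrative number):

```python
import math

def surprisal_bits(p_assigned_to_what_happened):
    """Log-score penalty, in bits, when the outcome you gave probability p occurs."""
    return -math.log2(p_assigned_to_what_happened)

# It did not rain. Probability each forecaster had left for "no rain":
overconfident = 1 / 1_000_001  # had offered a million to one on rain
modest = 0.4                   # had said 60% rain (illustrative number)

print(round(surprisal_bits(overconfident), 1))  # ~19.9 bits: a crushing penalty
print(round(surprisal_bits(modest), 2))         # ~1.32 bits: mildly surprised
```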
"It may be comforting to learn that a particular piece of scientific knowledge does not imply the demise of some of our cherished beliefs, but that has nothing to do with the validity of such piece of scientific knowledge."
While agreeing with you, I tend to see this in different terms. If science or philosophy apparently undermines a cherished belief or intuition, I tend to first look at whether our naive ideas about what was Philosophically Absolutely Necessary for said belief/intuition are correct, rather than rushing to "disprove" the belief/intuition.
@contrarianmoderate: I'm not assuming my conclusion per se, although it looks that way because the argument has a slightly unusual logical structure. First I talk about frequentism GIVEN that determinism is true (which is what you read there), then I question whether & to what extent determinism is true.
@DJD: I admit there is a certain tactical aspect to my expressed opinion on free will, but I think it's well founded. Contracausal free will is incompatible with determinism, indeed, but it's also internally contradictory and metaphysically conceited. And yet people DO make meaningful choices. So yes, I'm okay with making the argument "free will exists but can't be contracausal," rather than flatly stating "you have no free will" and then accepting the misunderstandings as people take me to mean "you're an automaton that can't make choices."
Okay, my cat has now jumped onto the computer 30 times (I counted) and it's time for work, so I think I better reply to the other comments later. Cheers!
Ian, it is a pity I chose the coin example to develop my argument. Since the chances in the coin case are fifty-fifty, this allows for the "entropy" and "ignorance" arguments to be introduced. But suppose the prior is something different from that particular case. For instance, in the case of a biological variable (such as the length of monkey tails or the diameter of human heads) the probability of a given size (or the probability of sizes lower than a given size) will probably be Gauss-normal. It could be expressed a priori in terms of z-scores (in units of standard deviation above or below the mean) or could be anticipated in absolute values once we know (from frequency distributions) the mean and SD of the relevant variable. Thus my particular prior could be 2.68% or suchlike. In other cases, theory or experience dictate that the distribution is not normal, but for instance log-normal (as with many income effects), and the values are even less intuitive. You can only arrive to those values by theory corroborated by experience, or by experience alone (you may anticipate them from theory alone, but you'd better also seek empirical corroboration).
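For concreteness, a sketch of that kind of tail-probability calculation (the mean of 200 mm and SD of 10 mm for head diameters are invented for illustration, not measured values):

```python
import math

def normal_tail(x, mean, sd):
    """P(X > x) for a Gaussian with the given mean and standard deviation."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Illustrative numbers only: suppose head diameters were N(200 mm, 10 mm).
print(round(normal_tail(220, mean=200, sd=10), 4))  # ~0.0228, i.e. about 2.3%
```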
As for repeating "the same" phenomenon, I think this is a case of angels dancing on a pinhead. I do not know of ANY measurement, however precise, that has absolutely no variability. Try to measure the angles of a triangle with the most precise measuring rod you can get, with nanometers of precision if you can, and you'll never get "exactly" 180° on all occasions: you'll always get an "error distribution", albeit of a very diminutive scale. Even at magnitudes well above QM. Whether a "totally precise" measurement can be done is a rather otiose question (what is the "precise value" of the transcendental number pi?). Whether there would be any variability among measurements made with such a perfect measuring rod is another question I think otiose (just because such a rod does not exist). I am frankly not worried much by the map-territory distinction if it is carried to these extremes.
If one renounces this folly of exactness, then measures are always random variables.
Besides, think of what we mean by a "map" as distinct from a "territory". In the end, our perceptive and cognitive apparatus is also part of physical reality, and the interaction of our retinas with incoming light in the presence of light-reflecting objects is another physical phenomenon. The territory, in other terms, includes the map (and the mapper). And the mapper is, like the rest of us, a biological entity fumbling its way around with imperfect sensing organs.
Once upon a time scientists were subjected to the judgment of philosophers, who were supposed to issue verdicts on the validity or otherwise of scientific statements. Not only on their logical coherence, which is alright, but on their substantive validity.
The time has come, I modestly surmise (following in fact an idea of John Dewey), to turn the tables. Science should judge whether a philosophical question makes any (substantive) sense. And science (like ancient philosophers thought about philosophy) needs no external justification; its proof, like that of the proverbial pudding, is in practice: it just works. It has evolved techniques and procedures that do work, at least in the imperfect and perfectible way any natural process works. And it has worked wonderfully for centuries now.
It would be nice if philosophical musings start from that stark fact (science does usually work), and subject themselves to the verdict of science as for the substantive worth of their speculations.
Hector M.
Beautifully stated. I wish I had the ability to express my criticisms as diplomatically as you.
DJD, I'm surprised you find that I express my criticisms diplomatically. I thought I was being not exactly diplomatic, but extraordinarily crass, provocative and politically incorrect. But it's all in the eye of the beholder, as our amiable Ian would put it.
I'm a little concerned that Rationally Speaking is being colonized with Yudkowsky-speak. For example, I wonder if there is a more straightforward way of expressing the following:
>>The problem is that unpredictability, randomness, chaos — whatever you want to call it, however unavoidable it is — is a map-level phenomenon, not a territory-level phenomenon.
So sure, you can tell me that there are some things that exist, but are un-mappable even in principle. But please don’t tell me that un-mappability itself is out there in the territory — that’s just flat out insane! You’re turning your own ignorance into ontology!<<
Hector M
See....You did it again...with grace and diplomacy.
Nick
I'm curious...What type of knowledge is capable of determining if something is either mappable or un-mappable. Empirically generated knowledge? Logical (tautological) knowledge? Inferential? What method can be used to differentiate one from the other?
The comment that you quote from Feller is consistent with the view that probability is a mathematical term, and can only be applied to the world within the context of an appropriate mathematical model. Maybe Feller was a frequentist, but that isn't obvious from your quote.
I would expect many mathematicians, self included, to take the mathematical model approach. So I agree with you that probability is not an objective property of the universe - it is something that exists only in mathematical models.
The coin toss case fits within a well understood model (ignoring possible quibbles about the fairness of the coin). People following the mathematical model approach would normally assume that standard model for the coin toss case.
One could model belief and uncertainty, which allows your subjective account of probability to also be considered. And, for some purposes, that's a reasonable thing to do.
DJD,
I was quoting Ian. I'm afraid I don't know the answers to your questions.
@Hector:
"You can only arrive to those values by theory corroborated by experience, or by experience alone (you may anticipate them from theory alone, but you'd better also seek empirical corroboration)."
I entirely agree, but this just goes to show that the ignorance prior mentioned above should only be applied when you're, well, ignorant. In the case of head diameters, you're not entirely ignorant, and you may indeed guess that such a phenomenon follows a gaussian distribution centered around 200 mm or so, then go out and start measuring heads.
"As for repeating "the same" phenomenon, I think this is a case of angels dancing on a pinhead. I do not know of ANY measurement, however precise, that has absolutely no variability."
Indeed, but it seems to me that this misses the point a little bit, which is not about measurement accuracy per se but about the randomness that is supposed to be in the experiment itself. The point is that in these "random experiments" you seem to have two desiderata working at cross purposes. On the one hand, you want the different instances of the phenomenon to be sufficiently similar that you are actually talking (in a colloquial sense) about one phenomenon, not many (e.g., in the case of the coin flip, it's better not to do a coin drop as well, since perhaps that has a different dynamic). On the other hand, in order to have variability at all, the different instances of the phenomenon must be sufficiently distinct from each other to give a fairly broad distribution rather than to give one event, repeated over and over.
Just to make this clear, the problem is that (to stick with the coin toss case) it's actually undesirable to systematize the experiment (by, say, making a mechanical coin flipper) - holding onto ignorance is incentivized. And yet going in the opposite direction, by introducing extra variability you risk broadening the experiment to the point that its different trials are not even about the same thing.
"The time has come, I modestly surmise (following in fact an idea of John Dewey), to turn the tables. Science should judge whether a philosophical question makes any (substantive) sense."
Are we still talking about free will here? Anyway, I'd say it's important not so much to turn the tables, but to have the judgments go both ways. There are many cases where philosophers ignore scientific results to their detriment, and there are many cases where scientists fail to notice the philosophical assumptions of their work. One thinks both of the naive things that neuroscientists say, and the naive things that philosophers of mind say.
As for diplomacy, I'll pile on too - you're a pleasure to talk to.
@Nick Barrowman:
"I'm a little concerned that Rationally Speaking is being colonized with Yudkowsky-speak."
That's a fair criticism. I hummed and hawed about using the map/territory metaphor for epistemology & ontology for a long time, but finally decided that it was too useful to resist. But perhaps there is another, better metaphor that we could use in its place?
Perhaps I am not understanding this and someone can enlighten me.
There was talk here of Macro systems being (almost?) always deterministic, dismissing rare quantum experiments involving the tenth decimal place.
But doesn't the micro generalize up to the macro ala a "Schrodinger's Cat" mechanism?
That is, I can open a casino with a mechanical coin flipper that flips a "head" when the tenth decimal of some quantum experiment is even and flips a "tail" when the decimal is odd. I can invite the Laplacian Calculator to play, give it a free cocktail, and watch while it loses its transistors.
Ian, the piece I quoted was:
>>The problem is that unpredictability, randomness, chaos — whatever you want to call it, however unavoidable it is — is a map-level phenomenon, not a territory-level phenomenon.
So sure, you can tell me that there are some things that exist, but are un-mappable even in principle. But please don’t tell me that un-mappability itself is out there in the territory — that’s just flat out insane! You’re turning your own ignorance into ontology!<<
I think what you're saying is that randomness is an aspect of the tools we're using (perhaps unavoidably), not the phenomena we're trying to understand. Is that right?
Nick
>"But please don’t tell me that un-mappability itself is out there in the territory"
How do you know that it's not "out there"...if you don't know what method is used to differentiate between mappable and un-mappable?
"Maybe you have a long term frequency of 50% for heads, but if for example I know the magician's trick about starting positions for coin flips, my probability for this toss is going to deviate from the observed 50% frequency, whereas yours may not.”
The observed 50% frequency? A long term frequency of 50% can’t be observed since it’s a limit.
If the magician or a machine can repeatedly flip the coin with the same starting position, we can obtain the limit probability. So a frequentist’s interpretation of the probability is still possible.
What is the probability that Obama will be re-elected in 2012? The probability cannot be interpreted in a frequentist's way because the election simply can’t be repeated under the same conditions.
@jrhs: "What is the probability that Obama will be re-elected in 2012? The probability cannot be interpreted in a frequentist's way because the election simply can’t be repeated under the same conditions."
Yes, I'm aware that frequentism dismisses this kind of question as the proper meat of probability because it's a one-off. However: (a) every event is a one-off to a greater or lesser degree, which raises the question of where to draw the line; (b) there are facts of the matter which bear on the question of whether Obama is less or more likely to win, which a rational person should take into account if they need to plan for either eventuality. It turns out that there are ways of reasoning about that likelihood numerically, the correctness of which can be demonstrated (or not) in a given predictor's track record. Why refuse to call this a probability?
@Nick Barrowman:
"I think what you're saying is that randomness is an aspect of the tools we're using (perhaps unavoidably), not the phenomena we're trying to understand. Is that right?
Yes, that's right, though I might add after "tools" something about our limited metaphysical position (as creatures within physics as opposed to outside of it, looking in).
The quick version: essentially every instance of apparent randomness that we have dealt with historically has turned out to be explained or at least explainABLE as a result of our epistemic limits alone, not as a result of some sort of strange exception to causal physical laws. So I am betting against any theories/interpretations that make randomness fundamental to physics.
@Tom:
"But doesn't the micro generalize up to the macro ala a "Schrodinger's Cat" mechanism?
That is, I can open a casino with a mechanical coin flipper that flips a "head" when the tenth decimal of some quantum experiment is even and flips a "tail" when the decimal is odd. I can invite the Laplacian Calculator to play, give it a free cocktail, and watch while it loses its transistors."
I like the image! =)
Yes, I have no argument with this. If you wish, you can set things up so that quantum indeterminacy has macroscopic effects; the point I was trying to make is that these situations are very unusual unless you deliberately set them up that way, and so quantum indeterminacy has little bearing on the vast majority of the classic problems of probability.
@DJD:
"I'm curious...What type of knowledge is capable of determining if something is either mappable or un-mappable. Empirical generated knowledge? Logical Tautological) knowledge? Inferential? What method can be used to differentiate one from the other?"
Well, "mappable" stands for "predictable in principle" and "un-mappable" for "objectively random." So it's obvious how to show that something is predictable - just predict it. As for showing that something is objectively random, I don't think there's any way to prove that without stepping outside of physics. But as I have said before, I don't even think "objectively random" is a meaningful term, any more than "objectively delicious."
"Why refuse to call this a probability?"
Of course, it's a probability. We simply can't apply the frequentist's interpretation to the election case.
Random = Unknown. A random toss means we don't know what the result will be. In the magician example, the magician knows what it will be. However, I can predict that it'll be a tail with a probability of 0.5 (a fair coin). Just being picky. :)
Good post.
We can split physical behaviors into three groups:
1) those where purely deterministic models are highly accurate (e.g. the flightpath of a cannonball)
2) those where purely deterministic models are not accurate (e.g. quantum behavior)
3) those where we don't yet have strong deterministic models (e.g. human social behavior)
Ian argues that behaviors in category (3) will likely move into category (1), because historically this has occurred many times. Ian grants that the existence of category (2) casts some doubt on this hypothesis, but argues that it's limited because category (1) is larger than category (2).
However, behaviors that remain in category (3) are presumably harder to model effectively than those in category (1) or (2). Given that these behaviors haven't yet been modeled effectively, they probably require models that are relatively complex. Since models that require objective randomness are inherently more complex than models that are purely deterministic, it seems likely that behaviors in category (3) will end up in category (2) more often than has historically been the trend.
To be clear, I'm not suggesting that human behavior is governed by quantum mechanics itself. Rather, I'm suggesting that human behavior may be governed by some as-yet-undiscovered mechanism that can't be accurately modeled purely deterministically.
Ian cites my words: "As for repeating "the same" phenomenon, I think this is a case of angels dancing on a pinhead. I do not know of ANY measurement, however precise, that has absolutely no variability."
And then comments upon them:
"Indeed, but it seems to me that this misses the point a little bit, which is not about measurement accuracy per se but about the randomness that is supposed to be in the experiment itself."
In fact, for some specific sets of phenomena, e.g. those covered by quantum theory, we have a theory stating that there is randomness in the objective process itself, irrespective of random variability in measurements, instruments and the like. In fact, what we have with QM is not exactly a THEORY (we cannot exactly explain how and why the weird things apparently going on at subatomic level can occur at all; we only have a set of equations that work, in fact work wonderfully well). In a sense, we are in this regard in the same predicament in which Newton (and Newtonians) were before the arrival of Einstein's General Relativity: gravitation theory worked wonderfully well (with some minor quirks like the orbit of Mercury) but they did not have a clue about what gravitation was, or how it could act at a distance.
In other cases, we do not have such a theory. We simply observe phenomena that appear to vary randomly. The variability may be "in the mapping" or "in the territory", to follow Ian's jargon, but in most cases we are unable to tell: measurements are too rough, and theory too poor, to tell which is the right answer. Some disciplines have attempted to develop a theory; for instance Economics models economic behavior as a mixture of perfectly rational behavior plus random error caused by unnamed and unidentified individual causes. But even within Economics such theory (neoclassical economics) has been hotly disputed, with different schools proposing various possible alternatives. Just as QM and GR have not been reconciled in Physics, micro and macroeconomics are still not fully integrated, and both are poorly corroborated by experimental or observational evidence (behavioral and experimental economics are novelties not yet digested by mainstream economics).
Excellent discussion, Ian.
Ian, you write that, "unpredictability, randomness, chaos . . . is a map-level phenomenon, not a territory-level phenomenon". How do you know that? If we do indeed have "epistemic limits" that are forever insuperable, then on what basis can you make your claim?
If we don't know then we don't know. Wouldn't it be similar to the fabled "black box" into which no one can ever look (or get any information from the inside)? It would be just as absurd to claim that the box contains a chicken as to say it doesn't. In either case, you do not know and never will, so why speculate? It seems to me that you are saying in effect, "yes, I agree we are ignorant of "x" (the unmappable territory), but let me tell you right now -- I know it contains a chicken!"
I don't see how what the frequency-results (or Bayesian or whatever) would be IF you repeated this thing many times has any relevance whatever to what the result will be if you do it only once. E.g., the Monty Hall case: it's both theoretically and empirically clear that you will win more often if you switch your choice; but if you play the game only ONCE, why does it matter whether you switch or not? Seems to me it doesn't matter at all.
ReplyDeleteI want to thank Ian for his post. Just a couple of thoughts on this provocative discussion:
1. Ian's post doesn't mention what is arguably the most common objection to frequentism among probability theorists who do reject it as a general interpretation of the probability concept -- namely, that the ratios that define the frequencies can only be specified relative to some reference class, and if you vary the reference class you'll vary the frequencies. But frequency interpretations of the probability calculus don't specify an algorithm or decision procedure for picking a unique reference class. The result is that for a given question, like "How likely is it that a 44-year-old man will live to 80?", frequency approaches don't yield a unique answer, they yield different answers depending on the reference class you pick (relative to males, relative to males who earn more than 40,000 dollars a year, relative to white males, relative to smokers, relative to non-smokers, etc.)
I suppose you could spin this objection into a version of Ian's where we're talking about different reference classes related to types of coin tosses and how much information we have have about the initial conditions and forces acting on the coin, the mass distribution within the coin, etc.
2. My understanding is that the strongest arguments for some kind of objective chance in quantum mechanics have to do with the fact that so-called "ignorance" interpretations of the quantum probabilities (the natural attitude of someone assuming a classical worldview) are actually incompatible with the standard quantum formalism. (Michael Redhead's book Incompleteness, Nonlocality and Realism gives one version of the argument, I think; it has to do with how the algebra of statistical mixtures differs from the algebra of "pure" quantum states).
I once asked a professor of mine whether Bohm's deterministic interpretation undermined these sorts of algebraic arguments against ignorance interpretations of the probabilities. He wasn't sure, and I've never really followed it up so I can't say.
3. I think the prevailing view among philosophers of physics these days is that even the strict truth of classical physical theories wouldn't guarantee predictability, much less metaphysical determinism. John Earman is the expert on this, see his book A Primer on Determinism. Newtonian mechanics, for example, is only deterministic if certain mathematical conditions are satisfied, and the theorems that establish the existence and uniqueness of solutions (the mathematical correlate of deterministic evolution) are only valid over finite time scales. There are also results from computability theory showing how certain types of classical dynamical systems can have non-computable solutions, which is a much stronger form of unpredictability than chaotic indeterminacy (even Laplace's demon couldn't predict the behaviors of these kinds of systems).
For these reasons, the metaphysical view of a deterministic world, insofar as it was motivated by the perceived determinism of classical physics, is generally regarded as a kind of modern myth by contemporary philosophers of physics. Even if Newton's or Maxwell's laws were strictly true in our world, neither metaphysical determinism nor epistemological predictability would follow. This is a result that few outside of the philosophy of physics seem to appreciate, I wish it were more widely known, since it bears on so many philosophical discussions.
Kevin, I think, makes a good comment: "...the metaphysical view of a deterministic world, insofar as it was motivated by the perceived determinism of classical physics, is generally regarded as a kind of modern myth..."
I was about to post something in a similar vein: for determinism, you need to make predictions based upon the principles of physics. But our physical laws are themselves based upon assumptions (e.g. Euclid's axioms).
Gödel's theorem shows that any sufficiently powerful logical system is inconsistent or incomplete in the sense that there are undecidable propositions. So, if we had perfect knowledge of the position and trajectory of each particle, our system for evaluating the result may not be consistent or comprehensive. Moreover, the laws were built upon a frequentist's argument. Will they be the same tomorrow? I can only say that light will move away from me at 186,000 mi/sec because it has in the past. Some speculate that even the constants may change.
If I understand the many worlds interpretation fully, it doesn't provide any mechanism for predicting in which world-branch we are - i.e. from our point of view randomness is maintained, in spite of a deterministic ensemble of worlds.
Even that divine calculator would either be "in" our quantum world and resort to probabilities (derived from a frequentist interpretation of previous observations?) or outside it and forced to consider all world branches as an ensemble, unable to posit chains of discrete (or what we would call discrete) events and reduced to descriptive statistics...
Also remember that, while quantum randomness (if it exists) is usually effectively unimportant for classic "real-life" systems, it can have direct impacts in the macroscopic world - mutations are a good example. Add to that the fact that chaos theory is studied as a possible direct link between quantum effects and macroscopic effects (since it provides well-characterized mechanisms to amplify fluctuations instead of damping them).