Friday, May 06, 2011

Razoring Ockham’s razor


by Massimo Pigliucci
Scientists, philosophers and skeptics alike are familiar with the idea of Ockham’s razor, an epistemological principle formulated in a number of ways by the English Franciscan friar and scholastic philosopher William of Ockham (1288-1348). Here is one version of it, from the pen of its originator:
Frustra fit per plura quod potest fieri per pauciora. [It is futile to do with more things that which can be done with fewer] (Summa Totius Logicae)
Philosophers often refer to this as the principle of economy, while scientists tend to call it parsimony. Skeptics invoke it every time they wish to dismiss out of hand claims of unusual phenomena (after all, to invoke the “unusual” is by definition unparsimonious, so there).
There is a problem with all of this, however, of which I was reminded recently while reading an old paper by my colleague Elliott Sober, one of the most prominent contemporary philosophers of biology. Sober’s article is provocatively entitled “Let’s razor Ockham’s razor,” and it is available for download from his web site.
Let me begin by reassuring you that Sober didn’t throw the razor in the trash. However, he cut it down to size, so to speak. The obvious question to ask about Ockham’s razor is: why? On what basis are we justified in thinking that, as a matter of general practice, the simplest hypothesis is the most likely one to be true? Setting aside the surprisingly difficult task of operationally defining “simpler” in the context of scientific hypotheses (it can be done, but only in certain domains, and it ain’t straightforward), there doesn’t seem to be any particular logical or metaphysical reason to believe that the universe is as simple as it could be.
Indeed, we know it’s not. The history of science is replete with examples of simpler (“more elegant,” if you are aesthetically inclined) hypotheses that had to yield to more clumsy and complicated ones. The Keplerian idea of elliptical planetary orbits is demonstrably more complicated than the Copernican one of circular orbits (because it takes more parameters to define an ellipse than a circle), and yet, planets do in fact run around the gravitational center of the solar system in ellipses, not circles.
Lee Smolin (in his delightful The Trouble with Physics) gives us a good history of 20th century physics, replete with a veritable cemetery of hypotheses that people thought “must” have been right because they were so simple and beautiful, and yet turned out to be wrong because the data stubbornly contradicted them.
In Sober’s paper you will find a discussion of two uses of Ockham’s razor in biology, George Williams’ famous critique of group selection, and “cladistic” phylogenetic analyses. In the first case, Williams argued that individual- or gene-level selective explanations are preferable to group-selective explanations because they are more parsimonious. In the second case, modern systematists use parsimony to reconstruct the most likely phylogenetic relationships among species, assuming that a smaller number of independent evolutionary changes is more likely than a larger number.
Part of the problem is that we do have examples of both group selection (not many, but they are there), and of non-parsimonious evolutionary paths, which means that at best Ockham’s razor can be used as a first approximation heuristic, not as a sound principle of scientific inference.
And it gets worse before it gets better. Sober cites Aristotle, who chided Plato for hypostatizing The Good. You see, Plato was always running around asking what makes for a Good Musician, or a Good General. By using the word Good in all these inquiries, he came to believe that all these activities have something fundamental in common, that there is a general concept of Good that gets instantiated in being a good musician, general, etc. But that, of course, is nonsense on stilts, since what makes for a good musician has nothing whatsoever to do with what makes for a good general.
Analogously, suggests Sober, the various uses of Ockham’s razor have no metaphysical or logical universal principle in common — despite what many scientists, skeptics and even philosophers seem to think. Williams was correct: group selection is less likely than individual selection (though not impossible), and the cladists are correct too that parsimony is usually a good way to evaluate competing phylogenetic hypotheses. But the two cases (and many others) do not share any universal property in common.
What’s going on, then? Sober’s solution is to invoke the famous Duhem thesis.** Pierre Duhem suggested in 1908 that, as Sober puts it: “it is wrong to think that hypothesis H makes predictions about observation O; it is the conjunction of H&A [where A is a set of auxiliary hypotheses] that issues in testable consequences.”
This means that, for instance, when astronomer Arthur Eddington “tested” Einstein’s General Theory of Relativity during a famous 1919 total eclipse of the Sun — by showing that the Sun’s gravitational mass was indeed deflecting starlight by exactly the amount predicted by Einstein — he was not, strictly speaking, doing any such thing. Eddington was testing Einstein’s theory given a set of auxiliary hypotheses, a set that included independent estimates of the mass of the Sun, the laws of optics that allowed the telescopes to work, the precision of measurement of stellar positions, and even the technical processing of the resulting photographs. Had Eddington failed to confirm the prediction, this would not (necessarily) have spelled the death of Einstein’s theory (which has since been confirmed in many other ways). The failure could instead have resulted from the failure of any of the auxiliary hypotheses.
This is both why there is no such thing as a “crucial” experiment in science (you always need to repeat them under a variety of conditions), and why naive Popperian falsificationism is wrong (you can never falsify a hypothesis directly, only the H&A complex can be falsified).
What does this have to do with Ockham’s razor? The Duhem thesis explains why Sober is right, I think, in maintaining that the razor works (when it does) given certain background assumptions that are bound to be discipline- and problem-specific. So, for instance, Williams’ reasoning about group selection isn’t correct because of some generic logical property of parsimony (as Williams himself apparently thought), but because — given the sorts of things that living organisms and populations are, how natural selection works, and a host of other biological details — it is indeed much more likely than not that individual and not group selective explanations will do the work in most specific instances. But that set of biological reasons is quite different from the set that cladists use in justifying their use of parsimony to reconstruct organismal phylogenies. And needless to say, neither of these two sets of auxiliary assumptions has anything to do with the instances of successful deployment of the razor by physicists, for example.
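To make the Bayesian flavor of this point concrete, here is a toy calculation (the numbers are invented purely for illustration, and the sketch is mine, not Sober’s own formalism): suppose we start out quite confident in a hypothesis H and somewhat less confident in the auxiliary assumptions A, and the predicted observation then fails to materialize.
# Toy Bayesian illustration of the Duhem thesis: only the conjunction H&A
# is tested, so a failed prediction can be blamed on A rather than on H.
p_H, p_A = 0.9, 0.7  # invented prior confidence in H and in the auxiliaries A
def p_failure(h, a):
    # The prediction almost never fails if H and A both hold; otherwise
    # treat the outcome as a coin flip (a simplifying assumption).
    return 0.01 if (h and a) else 0.5
prior = {(h, a): (p_H if h else 1 - p_H) * (p_A if a else 1 - p_A)
         for h in (True, False) for a in (True, False)}
evidence = sum(prior[s] * p_failure(*s) for s in prior)
posterior = {s: prior[s] * p_failure(*s) / evidence for s in prior}
print(sum(p for (h, _), p in posterior.items() if h))  # P(H | failure) ~ 0.74
print(sum(p for (_, a), p in posterior.items() if a))  # P(A | failure) ~ 0.22
The failed prediction barely dents H (from 0.9 to roughly 0.74), while confidence in the auxiliaries collapses (from 0.7 to roughly 0.22): the H&A complex, not H alone, takes the hit.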
So, Ockham’s razor is a sharp but not universal tool, and needs to be wielded with the proper care due to the specific circumstances. For skeptics, this means that one cannot eliminate flying saucers a priori just because they are an explanation less likely to be correct than, say, a meteor passing by (indeed, I go into some detail about precisely this sort of embarrassing armchair skepticism in Chapter 3 of Nonsense on Stilts). There is no shortcut for a serious investigation of the world, including the spelling out of our auxiliary, and often unexplored, hypotheses and assumptions.
—-
** Contra popular opinion, even among philosophers, this is not the same as the Duhem-Quine thesis, which is a conflation of two separate but related theses, one advanced by Duhem (discussed here) and one — later on — by Quine (to be set aside for another discussion).

44 comments:

  1. Sober's Bayesian construal of parsimony in terms of prior probabilities is interesting, but if you're talking about priors, then it isn't really strictly parsimony anymore, is it?

    The way I read Williams is that he's providing two distinct (although maybe related) arguments against group selection: (1) the conditions that lead to it are relatively rare, and (2) it is less parsimonious than individual or genic selection.

    On Sober's analysis these two arguments would collapse into one. But I think Williams was definitely committed to a strong ontological parsimony claim: "Economy and efficiency are universal characteristics of biological mechanisms" (Adaptation and Natural Selection, p. 41).

    Not to say that he should have been, of course.

  2. Great article!

    I don't think many people would interpret Ockham's razor as saying the universe is "as simple as it could be," which is probably false or meaningless, and certainly would need to be argued for independently.

    The way I interpret it is as an intuitive extension of the conjunction law of probability that P(A&B)≤P(A). Ockham's razor takes that and extends it to say something more like P(A&B)≤P(C); i.e., that even if we know nothing about the likelihood of individual hypotheses A, B and C, a conjunction of two of them is usually less probable than a single one, all other things being equal.

    So it's essentially a heuristic that says "Every AND operation inside a hypothesis decreases the probability of that hypothesis." But you can see how messy a heuristic it is by noting that a clever person can artificially insert and remove ANDs, as long as the argument stays qualitative. And lots of ideas that seem simple to humans (e.g., extraphysical souls) have a whole LOT of hidden ANDs.
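
    In code form, the heuristic is just repeated multiplication (a toy sketch with invented clause probabilities, assuming the conjuncts are independent):

    # Each extra independent conjunct ("AND") multiplies in a factor <= 1,
    # so the probability of the whole hypothesis can only go down.
    p = 1.0
    for i, p_clause in enumerate([0.9, 0.8, 0.95, 0.7], start=1):
        p *= p_clause
        print(f"hypothesis with {i} conjunct(s): P = {p:.3f}")
    # prints 0.900, 0.720, 0.684, 0.479 -- every hidden AND costs probability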

  3. iampollack: "I don't think many people would interpret Ockham's razor as saying the universe is 'as simple as it could be,' which is probably false or meaningless, and certainly would need to be argued for independently."

    Exactly my thought.

    As George Hrab says in "Think for yourself"

    "All things being equal, the simplest answer's worth most".

  4. Good read! I listened to Theodore Schick, author of "How to Think About Weird Things: Critical Thinking in a New Age," who set forth a set of "criteria of adequacy" for any given hypothesis that considers scope (does the hypothesis have wide explanatory power and not raise more questions than it answers?), simplicity (does the theory avoid requiring unsubstantiated "extra things"?), testability, fruitfulness (does the hypothesis make novel predictions, and do these predictions go beyond what the hypothesis/theory already suggests?), and conservatism (does it go against what we already know?).

    Occam's Razor alone (simplicity) often isn't enough to measure competing hypotheses/theories.

  5. In medicine, Occam's razor is often invoked during the diagnostic process; however, there is a corollary known as Hickam's dictum, which states: "the patient can have as many diseases as he damn well pleases!"

  6. Good points about Ockham. I can't resist pointing out, nitpicky though it is, that Kepler's ellipses actually WERE more parsimonious than Copernicus's model. Copernicus had the Earth moving on a circle whose center was moving on another circle, and that center was also in circular motion (about the Sun). So Copernicus had a total of nine parameters he could jiggle (three radii, three angular velocities, and three "initial" angles), while Kepler only had five. (By my count: semi-major axis, eccentricity, one angle to orient the ellipse, one initial angular velocity, and one initial angle.)
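
    In code-ish form, the bookkeeping is trivial (just tallying the counts above):

    # Free-parameter tally for one planet under each model (counts as above).
    copernicus = {"radii": 3, "angular velocities": 3, "initial angles": 3}
    kepler = {"semi-major axis": 1, "eccentricity": 1, "orientation angle": 1,
              "initial angular velocity": 1, "initial angle": 1}
    print("Copernicus:", sum(copernicus.values()))  # 9 free parameters
    print("Kepler:    ", sum(kepler.values()))      # 5 free parameters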

  7. Indeed, in my opinion, Kepler's theory is more parsimonious. It is more general than Copernicus's, and in this sense it is more parsimonious.

    On the other hand, it is good to recall what Einstein said:

    "It must be simple but not simpler."

    And I agree completely. I think these statements are quite general, and they should be taken with a little grain of salt, and with a lot of honesty.

    On the other hand, the idea behind the parsimony principle is to avoid infinite loops. In other words, one can make things as complicated as one wants. Nothing prevents doing that. That's what I think.

  8. tdxdave, but the issue is precisely that ceteris paribus (all other things being equal) pretty much never applies. So Sober's point stands: the razor needs to be applied given a certain background, it is not a universal logical or even epistemological rule.

    Robert, good point, thanks. But wasn't Copernicus' system that complex precisely because a simple circular orbit wouldn't suffice? In some sense he was forced to reintroduce the much despised epicycles.

  9. I don't think it's fair to compare Copernicus's theory with Kepler's in that regard, since Kepler's theory explains the observed data better than Copernicus's does. Simply put, all things are not equal.

    Also, one could say that Copernicus's theory is more complex (or at least has a lower prior probability) since it demands that the eccentricities be equal to zero, while Kepler's makes no such demand (or assertion).

  10. Which makes the point that what counts as "more complex" or "simpler" depends on how one slices the theoretical edifice...

  11. The version I'm more familiar with, "Entia non sunt multiplicanda praeter necessitatem" (Don't multiply entities unnecessarily), seems less subject to this criticism.

  12. Perhaps, but it's also somewhat trivial. A lot rides on that "unnecessarily."

  13. The importance of Ockham's razor is not "the simplest explanation is best". Its utility rests in shearing away unnecessary details. When we construct theories with a huge number of details, each time we add something, it's another portal for error to enter. But, of course, complex explanations are necessary to explain phenomena; if we stopped at F=ma because it was simple, then relativity would never have developed. Simple explanations are not always the best (or most accurate) ones. The fundamental principle behind Ockham's razor is to eliminate unnecessary details whenever we can, not to extol the virtue of "simplicity".

  14. Massimo, the link to your friend's website isn't working.

  15. Massimo,

    With Sober, I hold no brief for the view that simplicity is justified on metaphysical grounds; I am agnostic here. However, contra Sober, I do hold that simplicity is a necessary element in the process of discriminating among competing hypotheses, scientific or otherwise, and for testing empirical models. In nuce, limiting the number of independent entities, principles, causes, equational coefficients, etc. is, I think, justifiable a priori.

    First, a primary goal of empirical models is to predict patterns of future sensory stimulation accurately, and when they fail (as they frequently do), given the existence of auxiliary hypotheses, it is often difficult to determine exactly where the error(s) occurred. We ought not necessarily to reject the theory en bloc; rather, we ought only to reject the defective bits or limit the theory's range of application. All else equal, if the theory is parsimonious, locating the defective bits and determining the model's range of utility may, on balance, take less effort, consume fewer resources, and prove easier methodologically. (As an analogy, think of troubleshooting a mechanical system with numerous intricate working parts.)

    Second, with respect to hypotheses in general, I find Ockham's razor to accord with a general eliminative inductive framework, which possesses some similarity to Platt's strong inference: devise alternative hypotheses, perform incisive tests, eliminate those hypotheses which fail the tests, repeat. In order to eliminate alternative hypotheses efficiently, we must be able to disconfirm the hypotheses' empirical predictions efficiently, which we cannot do if the hypotheses are not parsimonious.
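
    As a schematic sketch of that loop (a toy example of my own, not Platt's), where each observation serves as an incisive test:

    # Schematic eliminative induction: each observation is a test that
    # culls the hypotheses inconsistent with it.
    hypotheses = {
        "all even":           lambda x: x % 2 == 0,
        "all less than 5":    lambda x: x < 5,
        "all multiples of 4": lambda x: x % 4 == 0,
    }
    for observation in [4, 8, 6]:
        hypotheses = {name: test for name, test in hypotheses.items()
                      if test(observation)}
        print(observation, "->", sorted(hypotheses))
    # 4 keeps all three; 8 eliminates "all less than 5";
    # 6 eliminates "all multiples of 4", leaving only "all even"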

  16. Para, I doubt that either Sober or I would disagree with you in any specific case, but his point - and obviously I agree - is that there is no such thing as Ockham's razor tout court. It depends, precisely because other things are never equal.

  17. Massimo,

    I suspect our differences here are superficial. As I have presented matters, do you not find parsimony to be an acceptable general epistemic rule? I understand the exigencies of particular scientific disciplines, but, understood as a very general maxim of simplicity, it seems to me that Ockham's razor ought to inform our empirical methods.

  18. I guess I don't. I am persuaded by Sober's reasoning based on Duhem's thesis. One cannot consider just the focal theory, one needs to consider the web of auxiliary assumptions. There is nothing *intrinsically* more complex about group selection, it is a more complex explanation only because of everything else we know about biological organisms.

  19. I suspect that because we find simplicity in accuracy, we tend to look for accuracy in simplicity.

  20. Massimo,

    A primary precondition for the initial plausibility of an empirical model or hypothesis is that it accords well with established knowledge; on this point we do not disagree. My contention is rather this: I take Ockham's razor to counsel that we proceed by postulating simple hypotheses (which of course accord with established knowledge) so as to deduce observations more efficiently, devise tests which confirm or disconfirm those observations, and proceed with interpretations.

    If you object to Ockham's razor as a metaphysical principle, I share in your objection. But I do not see why you do not follow me in my adherence to the methodological principle of simplicity.

  21. Para,

    because simplicity is context-dependent, there cannot be such a thing as a general principle to prefer simplicity. Take Sober's second example: cladistic reconstructions of phylogenies. We *know* that in plenty of cases parsimony fails, precisely because evolution doesn't take the simpler path. It's a historical process; shit happens.

  22. Massimo,

    Yes, our most reliable knowledge indicates evolution certainly does not take the simplest path and, as we proceed with hypothesis formation in the biological sciences, we must take this into account. However, I see this as a problem for ontological simplicity, not for methodological simplicity. Even in the course of evaluating alternative hypotheses in the biological sciences, we ought to keep our explanatory models simple so as to aid in prediction, confirmation & disconfirmation, and interpretation. Of course, this is to recognize completely that our hypotheses must take into account the established features of the subject of inquiry, which often require complex explanations (Williams failed at this crucial step).

    That aside, kudos for broaching this subject, Massimo; all too frequently skeptics misuse Ockham's razor and treat it as if it were a panacea for the ills of superstition.

  23. "There is no shortcut for a serious investigation of the world, including the spelling out of our auxiliary, and often unexplored, hypotheses and assumptions."

    This is all very nicely argued, but seems to be based on a straw man version of scientists and skeptics. Sure, there are idiots everywhere. But everybody who can align three brain cells in a row will know that parsimony applies under the assumption of "all else being equal".

    As long as we don't know that sock gnomes exist, it makes sense to assume that we simply sometimes misplace our socks. Once we find evidence for the existence of the gnomes, we revise our inference about the possible fate of those socks. Well, duh. If you find anybody who actually thinks that the non-existence of something for whose existence we have clear evidence is still the preferable assumption (because it makes things simpler???), please point them out to me so that I can marvel at their insanity.

  24. Massimo (and Sober, and a number of others who have made criticisms along these lines) is, of course, correct that the Razor does not in general allow us to select the true theory out of a set of alternatives. But it is probably too hasty to conclude from this that no general justification for the Razor can be given, independent from specific theoretical contexts. For example, over the last ten years, Kevin Kelly, Clark Glymour, and Conor Mayo-Wilson have developed a candidate general justification of the Razor, based on a number of theorems that show it to be the uniquely maximally efficient method of theory selection that converges on the truth in the long run (cf. e.g. Kelly, 2007, Phil. Sci. 74:5).

    In the forthcoming Statistics volume of the Handbook of the Philosophy of Science, Kelly describes the upshot of these theorems as follows: "Ockham’s razor does not point at the truth, even with high probability, but it does help one arrive at the truth with uniquely optimal efficiency, where efficiency is measured in terms of such epistemically pertinent considerations as the total number of errors and retractions of prior opinions incurred before converging to the truth and the elapsed times by which the retractions occur. Thus, in a definite sense, Ockham’s razor is demonstrably the uniquely most truth-conducive method for inferring general theories from particular facts — even though no possible method can be guaranteed to point toward the truth with high probability in the short run." These are very interesting results, and deserve a mention in this discussion.

  25. Jura, thanks for the reference, I'll look into it.

    Alex, your skepticism on these matters keeps being amusing. But this isn't a question of misplaced socks, it's about issues like individual vs. group selection, or phylogenetic reconstruction, which are neither simple nor obvious.

  26. Massimo:

    Interesting article – and discussion – as always, introduced with an amusing “book” subtitle (“a parsimonious shave”) that provides a bit of an echo of "Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light without Overbyte".

    But one thing I found particularly interesting was the information from Jura indicating some “proof” that the Razor, “based on a number of theorems,” is “the uniquely maximally efficient method of theory selection that converges on the truth in the long run”. Reminds me of your discussion of “multiple adaptive peaks in genotypic space” in the “Limits of Reasonable Discourse” blog: one might suggest – though I’m not at all familiar with Jura’s reference (Kelly et al.) – that the Razor won’t necessarily converge in all cases.

    Seems that would be an example of the sort of algorithmic procedure which, any number of sources argue, is generally not guaranteed to complete or halt, or to do any better than a random search in all cases: local peaks can certainly be found with one algorithm or another, but there are no guarantees on always finding the highest.

    Reminds me of an aphorism, paraphrased, of Aristotle (I think): it is the mark of a civilized mind to rest satisfied with the degree of precision that a subject admits, and not to seek exactitude where only an approximation to the truth is possible. As someone else put it, “any theory can be at best an approximation to the truth”; job security for theoreticians. Although that does raise the interesting question, I think, of how one can know how close one is to the truth unless one has it for reference …

    Also interesting were your comments on Plato’s “hypostatizing The Good”. I wonder whether or not that would qualify as a logical fallacy – reification. And / or converting or using “Good” as a noun instead of an adjective. Quite easy, I think, to see some commonality in a “good musician” and a “good general” in terms of closer approaches to the respective ideals, but not so easy to see meaning or value in “The Good” [being a “good thief” is something to aspire to?]. Although maybe that is a case of ellipsis, like talking of “trade goods” where they imply the ability of products to meet a particular purpose.

  27. Massimo,

    I do phylogenetic reconstruction for my living (among other things), and I am not skeptical about what you wrote, I just cannot figure out how it isn't self-evident. To rational people at least.

    There was actually a PhD student at an institution where I was earlier who did not understand the concept of parsimony at all. When discussing why phylogenies are accepted as the best inference we have despite our not knowing how it happened ('cause we weren't there!), she would straight-facedly say: well, why couldn't a daisy-like ancestor have evolved into a cabbage and then back into a daisy? We just don't know, so it is all bunk.

    No surprise, she ultimately dropped out of science. (No surprise, she is also a theist. No problems with just "knowing" things there.)

    You cannot reason with the unreasonable. But I have yet to see a phylogeneticist who, when faced with incontrovertible evidence that the daisies are polyphyletic, would still insist that they are monophyletic because that would be more parsimonious or something. (Quite apart from the fact that, by definition, it would not be more parsimonious any more if such evidence surfaced, and quite apart from the fact that likelihood-based approaches are used for molecular phylogenies anyway.)

  28. Alex, I know that's your field, but once again your example is an extreme situation that has nothing to do with what Sober is talking about. First off, technically, parsimony as understood in cladistics isn't exactly the same thing as Ockham's razor (assuming that the latter can be given a coherent definition). Second, cladistic parsimony is simply not a good tool when it comes to complex speciation processes, such as the hybridization and polyploidization typical of plants and fungi. But more to the point, it is simply a different thing from parsimony applied, say, to the levels of selection. All of this may sound obvious to you, but if you read Sober's entire paper you should appreciate that even smart people like George Williams were somewhat confused about what exactly Ockham's razor is.

  29. Hm... The way I learned the principle was a bit different, maybe that's the "all other things equal" you guys are talking about? The way I learned the principle of parsimony was:

    If two theories explain the data equally well, prefer the simpler theory.

    So, does that qualify as "ceteris paribus"?

    I don't see much of a problem with this formulation, although of course choosing the simpler for the sake of being simpler does not "prove" it correct or otherwise. And obviously deciding whether the data is equally explained by the different theories might not always be easy.

    Or am I way off my depth here? :-)

  30. J, you are not far off the mark. Here is the problem: it is often not easy to tell which of two theories explaining the data well is "simpler," because simplicity is not just a function of the theory itself, but also of the ancillary assumptions that accompany each theory. Sober's point is that those ancillary conditions need to be evaluated case by case, so that the principle is not a general logical rule and needs to be applied carefully.

  31. J,

    I see it like this: you do not need more elements to explain the same thing, as Laplace said. And I think it makes a lot of sense.

    It is not about the correctness of the theory. The theory has correctly explained the data, and in this sense it is correct. And you do not need to add anything else.

    On top of that, adding something else could produce extra effects that are not reflected in the data. Once new data come in, then you add more elements.

    I am not an expert in advanced biology, but I guess you can apply it there as well.

  32. Massimo,

    but the assumptions already made have in principle already been checked, so you should be on the safe road, as in the example you gave of Einstein's general relativity. On top of that, the problem of testing everything together is quite hard, I think. Again, in the example of Einstein's relativity, how do you test the theory of light you are assuming, and all the other ones, together? To me, it seems nonsensical, or at least awkward.

    In Bayesian statistics, the results of previous experiments/observations become the priors, but still, it is not at all easy to implement this in one's analysis. On the other hand, it is not clear to me from papers how this is done.

  33. Oscar, of course it's hard, but actually scientists do proceed that way, and as Sober shows, this is quite compatible with a Bayesian framework. In Nonsense on Stilts I discuss the case of the discovery of Neptune, when astronomers retained Newton's mechanics despite the fact that it was consistently producing the wrong predictions about Uranus' orbit. But several years later, when the same orbital anomaly problem arose with Mercury, they actually jettisoned Newton in favor of general relativity. The background conditions had changed.

  34. A few others have already made this point (e.g., J) – I never teach the razor as "...as a matter of general practice, the simplest hypothesis is the most likely one to be true."

    Instead, I tell students that parsimony doesn't correlate with truth; it is merely a rule of thumb to choose among competing hypotheses that explain the data equally well ("all things being equal," as has been pointed out).

    Ironically, the Parsimony model in phylogenetics is actually less parsimonious a model than all of the commonly used 'common-mechanism' likelihood models (used in maximum likelihood and Bayesian analyses). This is because Parsimony is statistically pathological due to its behavior of adding new parameters to its model with every new datum added. The analogy in regression is to fit a line to a scattering of points by running the line through every datum. This yields an amazing fit to the data, but the equation is incredibly complex and has no predictive value.
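
    A toy numerical version of that analogy (invented data; numpy assumed):

    import numpy as np

    # A polynomial with one parameter per datum fits the sample perfectly
    # but predicts terribly; the two-parameter line does fine.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 8)
    y = 2 * x + rng.normal(0, 0.1, x.size)    # truly linear data plus noise
    line = np.polyfit(x, y, 1)                # 2 parameters
    saturated = np.polyfit(x, y, x.size - 1)  # 8 parameters (numpy may warn
                                              # that the fit is ill-conditioned)
    print(np.polyval(line, 1.25))       # extrapolates near the true 2.5
    print(np.polyval(saturated, 1.25))  # typically lands far off the mark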

  35. Massimo, and is it possible to implement this in a mathematical way? I still cannot imagine how to do that in the post's example of general relativity together with the optics, the stellar positions, and the photographic techniques.

    So, how would one put the three aforementioned auxiliary hypotheses and general relativity together in a mathematical way?

    That's what I don't see clearly. I don't get it from the literature either. Maybe they are doing something similar, or I am missing the point completely.

  36. Massimo ... this has me thinking more about group selection and the amount of denigration it has often faced.

    Maybe there are more examples of group selection than we know of, in part because an overapplication of Occam's Razor has led many evolutionary biologists to be too dismissive.

  37. Oscar,

    two points: first, the concept can be framed in a Bayesian framework, so it *can* be put mathematically. Second, what if it couldn't? Why would that be the litmus test? Plenty of good scientific ideas are not framed in mathematical terms, so what?

  38. Massimo,

    I am interested in this aspect; that's why I asked. But you are right.

    To me, it is interesting because, in the example of Eddington, you would like to put things mathematically in order to quantify and make a decision.

  39. "So, Ockham’s razor is a sharp but not universal tool, and needs to be wielded with the proper care due to the specific circumstances."

    One should always be careful when wielding sharp objects. :)

  40. When you use elliptical orbits as an example of the "simple" answer being wrong, you're implying that "simple" means "conveniently fitting our habits of thought." Which sounds a little like Plato!

    Nothing in the universe is a perfect geometric form such as a circle. Orbits aren't even pure geometric ellipses, as they wobble in response to all kinds of influences. Even something as crude as Newtonian mechanics shows us that the real wobbling universe is a more powerfully simple idea than any Platonic/Ptolemaic expectations of geometric simplicity. If orbits were perfectly circular, this would raise a whole lot of other questions, but their predictably distorted and wobbling nature shows that they're following the same rules as baseballs and boulders. That's simplicity!

    Jarrett Walker, urbanist.typepad.com, humantransit.org

  41. Seems like Occam's Razor at the very least must mean:

    "All other things being equal, the simpler explanation is preferable to the more complicated one."

    It seems to me hard to argue with that formulation of Occam's Razor. We may find that all other things aren't equal (e.g., the simpler theory makes less accurate predictions than the more complex one), and in that case adopting the more complex option may be fully consistent with Occam's Razor.

  42. I just have to say, as a practicing scientist, this just feels like a restatement of the obvious. Ockham's razor, as I and everyone else I know apply it, dictates that you favor the simplest explanation/mechanism that is consistent with the available evidence. It can only prove by subtraction. It does not "prove" a more complicated explanation wrong, but the simpler explanation renders it superfluous until a contradictory piece of evidence becomes apparent.

    Am I missing something?

  43. I wonder if it's fruitful to go down information-theoretic paths. The Ockham answer perhaps is the one which can be minimally encoded. (It is known that there is no algorithm guaranteed to always find the minimal encoding...) The auxiliary hypotheses might help determine the representation of the encoding. It's been a while since I last looked at algorithmic info theory... perhaps Chaitin or Vitanyi or Rissanen have answered this already.
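
    One cheap way to play with this (a sketch only: Kolmogorov complexity is uncomputable, so an off-the-shelf compressor merely gives an upper bound, and the bound depends on the chosen representation):

    import random, zlib

    # Compressed length as a crude stand-in for "minimal encoding" length.
    def description_length(s: str) -> int:
        return len(zlib.compress(s.encode("utf-8"), 9))

    random.seed(0)
    regular = "ab" * 500                                  # lawlike "data"
    noisy = "".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                    for _ in range(1000))                 # patternless
    print(description_length(regular))  # tiny: the pattern is squeezed out
    print(description_length(noisy))    # much larger: nothing to squeeze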

