About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, September 04, 2009

The logic of skepticism

Being a skeptic is a rather lonely art. People often mistake you for a cynic, and I’m not using either term in the classical philosophical sense, of course. In ancient Greece, the cynics were people who wished to live in harmony with nature, rejecting material goods (the root of the word means “dog-like,” and there are various interpretations as to its origin). The Western equivalent of Buddhist monks, if you will. The skeptics, on the other hand, were philosophers who claimed that since nothing can be known for certain the only rational thing to do is to suspend judgment on everything. That’s not what I’m talking about.

A skeptic in the modern sense of the term, let’s say from Hume forward, is someone who thinks that belief in X ought to be proportional to the amount of evidence supporting X. Or, in Carl Sagan’s famous popularization of the same principle, extraordinary claims require extraordinary evidence. In that sense, then, what I will call positive skeptics do not automatically reject new claims, they weigh them according to the evidence. And of course we aren’t cynics in the modern sense of the term either, i.e. we don’t follow Groucho Marx when he famously said “Whatever it is, I’m against it!” (Of course, he was joking, though that seems to be the motto of the current Republican party.)

Now, you would think that few people would object to the pretty straightforward idea (which can actually be formalized using a Bayesian statistical framework) that one’s beliefs should be adjusted to the available evidence. You would also think it hard to disapprove of the corollary that — since the evidence keeps changing and our assessment of it is perennially imperfect — one ought not to subscribe to absolute beliefs of any sort (except in logic and mathematics: 2+2=4 regardless of any “evidence”). Boy, would you be wrong!
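That Bayesian adjustment can be sketched in a few lines of code. All the numbers below are invented purely for illustration; the point is just the shape of the update rule:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Update a prior belief P(claim) given one piece of evidence,
    via Bayes' theorem: returns P(claim | evidence)."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# An extraordinary claim starts with a very low prior...
belief = 0.001
# ...so ordinary evidence (merely twice as likely if the claim is true
# than if it is false) barely moves it:
belief = posterior(belief, 0.8, 0.4)        # still well under 1%
# ...whereas extraordinary evidence (100 times more likely if the claim
# is true) moves it substantially:
belief = posterior(0.001, 0.99, 0.0099)     # roughly 9%
```

This is Sagan’s slogan in arithmetic form: the lower the prior, the stronger the evidence has to be before the posterior climbs to anything worth believing.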

For one thing, the positive skeptic finds herself more often (in fact much more often) than not in a position to (provisionally) reject a given claim rather than (provisionally) accept it. Why, you might ask? Shouldn’t the a priori likelihood of the truth of a claim be something like 50-50, in which case the skeptic should accept and reject beliefs in roughly equal measure? No, as it happens, things aren’t quite that nicely symmetrical.

One way to understand this is to think about a simple concept that everyone learns in statistics 101 (everyone who takes statistics 101, that is): the difference between type I and type II error. A type I error is the one you make if you reject a null hypothesis when it is in fact true. In medicine this is called a false positive: for instance, you are tested for HIV and your doctor, based on the results of the test, rejects the default (null) hypothesis that you are healthy; if you are in fact healthy, the good doctor has committed a type I error. It happens (and you will spend many sleepless nights as a consequence).

A type II error is the converse: it takes place when one accepts a null hypothesis which is in fact not true. In our example above, the doctor concludes that you are healthy, but in reality you do have the disease. You can imagine the dire consequences of making a type II error, also known as a false negative, in that sort of situation. (The smart asses among us usually add that there is also a type III error: not remembering which one is type I and which type II...)
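In code, the two error rates fall directly out of a test’s sensitivity and specificity. The figures below are made up for illustration, not real HIV-test characteristics:

```python
# Hypothetical test characteristics (invented numbers):
sensitivity = 0.99   # P(test says "sick" | person is actually sick)
specificity = 0.95   # P(test says "healthy" | person is actually healthy)

# Type I error (false positive): rejecting the null hypothesis
# "this person is healthy" when it is in fact true.
type_I_rate = 1 - specificity    # 0.05

# Type II error (false negative): failing to reject "healthy"
# when the person is in fact sick.
type_II_rate = 1 - sensitivity   # 0.01
```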

What’s that got to do with skepticism? Whenever confronted with a new claim, it’s reasonable to think that the null hypothesis is that the claim is not true. That is, the default position is one of skepticism. Now the tricky part is that type I and type II error rates are inversely related: if you lower the rate of one, you automatically raise the rate of the other (there is only one way out of this trade-off, and that’s to do the hard work of collecting more data). So if you decide to be conservative (statistically, not politically), you will raise the bar for evidence, thereby lowering the chances of rejecting the null hypothesis and accepting the new belief when it is not in fact true. Unfortunately, you are also simultaneously increasing your chances of accepting the null and rejecting the new belief when in fact the latter is true.
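The trade-off, and the data-collection escape hatch, can be seen in a toy one-sided z-test. Everything here (effect size, noise level, sample sizes) is an invented illustration:

```python
from statistics import NormalDist

def error_rates(threshold_z, effect_size=0.5, n=25):
    """One-sided z-test with unit noise and n observations (numbers
    illustrative). Reject the null ("no effect") when the observed
    z-score exceeds threshold_z; return (type I rate, type II rate)."""
    std_error = 1 / n ** 0.5
    alpha = 1 - NormalDist().cdf(threshold_z)                        # type I
    beta = NormalDist(effect_size / std_error, 1).cdf(threshold_z)   # type II
    return alpha, beta

# Raising the bar for evidence lowers type I errors but raises type II:
lenient = error_rates(1.645)    # alpha ~ 0.05
strict = error_rates(2.326)     # alpha ~ 0.01, but a much higher beta
# The only way out of the trade-off is more data:
strict_big_n = error_rates(2.326, n=100)   # same alpha, far lower beta
```

Moving the rejection threshold just slides probability mass from one error type to the other; only a larger sample shrinks both at once.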

Human beings are thus bound to navigate the treacherous waters between Scylla and Charybdis, between being too skeptical and too gullible. And yet, the two monsters are not of equal strength: if we accept the assumption that there is only one reality out there, then the number of false hypotheses must be inordinately higher than the number of correct ones. In other words, there must be many more ways of being wrong than right. Take the discovery that DNA is a double helix (the true answer, as far as we know). It could have been a single helix (like RNA), or a triple one (as Linus Pauling suggested before Watson and Crick got it right). Or it could have been a much more complicated molecule, with 20 helices, or 50. Or it may have not been a helicoidal structure at all. And so on.

So when trying to steer the course between skepticism and gullibility, it makes sense to stay much closer to the Scylla of skepticism than to bring our ship of beliefs within reach of the much larger and more menacing Charybdis of gullibility. The net result of this prudent policy, however, is that even positive skeptics are bound to reject a lot of beliefs, with the side effect that their popularity plunges. As I said, it’s a lonely art, but you can take comfort in the psychological satisfaction of being right much more often than not. This will not get you many girls and drinking buddies, though.

(Caveat: I have actually argued in a technical paper that we should abandon the whole idea of null hypotheses and embrace more sophisticated approaches to the comparisons of competing explanations. But that’s another story, and it doesn’t change the basic reasoning of this post.)


  1. You might like to look at the work of Fiona Fidler on null hypotheses, if you don't already know her work.

  2. Massimo

    The null hypothesis paper sounds really interesting. Has it appeared? If so where? If not can I get you to send me a copy, or is it posted anywhere on the net?

  3. Tony,

    the paper I was referring to is a chapter in my book with Jonathan Kaplan:


  4. I just started teaching a 9th grade biology course this year. I started off the class with the questions “What is science?” and “What is biology?” and discussed skepticism, along with this article from Michael Shermer:


    Bringing up the next generation of skeptics!

  5. One thing you don't address explicitly here is the relative risks inherent in type I and type II errors and how those should change our approach to belief acquisition (or at least our actions). Sometimes it is important not to miss a "true positive," and sometimes it is important not to accept a "false positive." The technical details don't change, but our approach to the evidence should (although here the difference between *acting as if* and *believing that* might be important). How should our actions be related to our beliefs?

  6. This is an exceptionally important point, and it's vital even for trained scientists to be reminded of it, especially in medical fields, where the number of claims is even higher, and the claims harder to prove, than in other fields.

    A couple of my favorite papers on this subject:



  7. Let's take a particular case: the safety of drugs or chemical products.

    "A skeptic in the modern sense of the term, let’s say from Hume forward, is someone who thinks that belief in X ought to be proportional to the amount of evidence supporting X."

    So what is X in this case? Is it "The chemical is safe" (X1)? Or is it "The chemical is harmful" (X2)? The manufacturer is likely to claim X1, while the health activist is likely to claim X2.

    "Whenever confronted with a new claim, it’s reasonable to think that the null hypothesis is that the claim is not true. That is, the default position is one of skepticism."

    But in this case, it's not clear which claim should be treated with skepticism.

  8. Nick,

    right. That's precisely why I added the caveat at the end on the fact that there are better ways to think about this than null hypotheses.

    We should consider all competing claims as separate hypotheses, each to be individually weighed against the available information. Both Bayesian and likelihood analyses do this very well.

    Still, in the specific case you mention, if one wishes to remain within the "frequentist" approach (using a null hypothesis, p-values, etc.) one would probably say that the absence of the chemical is the default/safe state and should be the null hypothesis, while the alternative is that adding the chemical has potentially negative effects. Analogously, in the case of a new drug, the null would have to be that the drug has no beneficial effects, and so on.

  9. Massimo,

    It may not be necessary to abandon null hypothesis testing altogether. There's lots of methodology for equivalence (and non-inferiority) testing within the frequentist framework. When testing the efficacy of a new drug versus an existing drug for the same condition, the null hypothesis is generally taken to be that the new drug is inferior to the old one. This is a choice that our society makes to put the burden of proof on the new drug.

    When it comes to safety (of drugs, chemicals, products ...), society also has choices to make. Are new products taken to be "innocent until proven guilty," or must they be demonstrated to be safe? Which of course raises the question, "How safe?" (The "margin of equivalence" is a specification of this.)

  10. "This will not get you many girls and drinking buddies, though."

    Ergo, all skeptics are hetero men or lesbians...

