Wednesday, April 27, 2011

An experimental study of rationalization in politics

by Massimo Pigliucci
[This is a partial excerpt from my forthcoming book: The Intelligent Person’s Guide to the Meaning of Life, BasicBooks, New York]
Every skeptic worth her salt knows about the basic logical fallacies, both of the “formal” kind (e.g., affirming the consequent) and of the “informal” variety (e.g., straw man). It rarely happens, however, that we get empirical studies on how many types of bad reasoning people actually deploy in everyday life, and how often. In preparation for a chapter of my forthcoming book on the meaning of life (as you know, I usually tackle small and highly focused subject matters), I came across a fascinating study by Monica Prasad and colleagues at Northwestern University. The subjects in Prasad’s study were people who believed — all evidence to the contrary notwithstanding — that there was a link between the former Iraqi dictator Saddam Hussein and the terrorist attacks of September 11, 2001 on American soil.
The researchers focused on this particular politically based belief because, as they put it, “unlike many political issues, there is a correct answer,” and because that belief was still held by about 50% of Americans as late as 2003 — despite the fact that President Bush himself had at one point declared that “this administration never said that the 9/11 attacks were orchestrated between Saddam and Al Qaeda.” This isn’t a question of picking on Republicans, by the way: Prasad and her colleagues wrote that they would have expected to find similar results had they conducted the study a decade earlier, targeting Democratic voters’ beliefs about the Clinton-Lewinsky scandal.
The hypotheses tested by Prasad’s group were two alternative explanations for why people hold on to demonstrably false beliefs. The “Bayesian updater” theory says that people change their beliefs in accordance with the available evidence, and therefore that a large number of people held onto the false belief of a connection between Hussein and 9/11 because of a subtle, concerted campaign of misinformation by the Bush administration (despite President Bush’s statement above).
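To get a feel for what the first hypothesis actually demands of people, here is a minimal sketch of Bayesian updating, written in Python. The prior and the likelihoods are numbers I invented purely for illustration; they are not figures from the Prasad et al. study.

    # A toy Bayesian updater. All numbers are invented for illustration only.

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence) via Bayes' rule."""
        numerator = p_evidence_if_true * prior
        marginal = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / marginal

    # Hypothetical starting confidence in a Hussein-9/11 link.
    prior = 0.70

    # The new evidence: the Bush administration itself disavows the link.
    # Such a disavowal is far more probable if there is no link than if there is one.
    posterior = bayes_update(prior, p_evidence_if_true=0.05, p_evidence_if_false=0.60)

    print(f"belief before the disavowal: {prior:.2f}")      # 0.70
    print(f"belief after the disavowal:  {posterior:.2f}")  # about 0.16

A Bayesian updater, in other words, is simply someone whose confidence moves in the direction the evidence points, by an amount that reflects how strongly the evidence discriminates between the competing hypotheses.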
The alternative theory tested by Prasad and collaborators was what they called “motivated reasoning,” a situation in which people deploy a battery of cognitive strategies to avoid having to face the fact that one of their important beliefs turns out to be factually wrong. The results of the study are illuminating well beyond the specific issue of Hussein and 9/11, as the same “strategies” are deployed even by well informed and educated people in a variety of circumstances, from the political arena to the defense of pseudoscientific notions such as the alleged (and non-existent) link between vaccines and autism.
The first thing that Prasad et al. found was that, not surprisingly, belief does translate into voting patterns: interviewees who answered the question correctly (i.e., who knew that there was no demonstrated connection between Saddam Hussein and the 9/11 attacks) were significantly less likely to vote for Bush and more likely to vote for Kerry in the 2004 presidential election. I will leave it up to you to consider whether the a priori likelihood of this fact may in any way have influenced the Bush campaign’s “equivocal” statements concerning said link.
Of the people who did believe in the connection, how many behaved as Bayesian updaters, changing their opinion on the matter once presented with President Bush’s own statement that there was, in fact, no connection? A dismal 2%. The rest stuck with their original opinion, evidence to the contrary be damned, and deployed a whopping six different defensive strategies, which Prasad and her colleagues characterized in detail. Here they are, in decreasing order of frequency:
Attitude bolstering (33%): or, as Groucho Marx famously put it, “these are my principles; if you don’t like them, I’ve got others.” This group simply switched to other reasons for why the US invaded Iraq, implicitly granting the lack of a Hussein-9/11 connection, yet without being moved to change their position on the underlying issue, the Iraq war.
Disputing rationality (16%): as one of the interviewees put it, “I still have my opinions,” meaning that opinions can be held even without or against evidence, simply because it is one’s right to do so. (Indeed, one does have the legal right to hold onto wrong opinions under American law, as it should be; whether this is a good idea is an entirely different matter, of course.)
Inferred justification (16%): “If Bush thinks he did it, then he did it,” i.e. the reasoning here is that there simply must have been a rationale for the good guys (the US) to engage in something so wasteful of human life and resources as a war. The fact that they couldn’t come up with what exactly that reason might have been did not seem to bother these people very much.
Denying that they believed in the link (14%): these are subjects who had said they believed in the link between Iraq and 9/11, but when challenged they changed their story, often attempting to modify their original statement, as in “oh, I meant there was a link between Afghanistan [instead of Iraq] and 9/11.”
Counter-arguing (12%): this group admitted that there was no direct evidence connecting Saddam Hussein and the terrorist attacks, but nevertheless thought that it was “reasonable” to believe in a connection, based on other issues, such as Hussein’s general antipathy for the US, or his “well known” support of terrorism in general.
Selective exposure (6%): finally, these are people who simply refused to engage the debate (while still not changing their mind), adducing excuses along the lines of “I don’t know enough about it” (which may very well be true, but of course would be more consistent with agnosticism on the issue).
How is any of the above possible? Are people really so obtuse that they largely fail to behave as “Bayesian updaters,” i.e. to take the rational approach to assessing evidence and belief? There is no need to be too hard on our fellow humans — or indeed ourselves, since we likely behave in a very similar fashion in a variety of circumstances. What is going on here is that most of us, most of the time, use what cognitive scientists call “heuristics,” i.e. convenient shortcuts or rules of thumb, to quickly assess a situation or a claim. There is good reason to do this, since on most occasions we simply do not have the time and resources to devote to serious research on a particular subject, even in the internet and smartphone era of information constantly at our fingertips. Besides, sometimes we are simply not motivated enough to do the research even if we do have the time — the issue might not seem important enough compared with our need to do the grocery shopping or get the car cleaned.
Unfortunately, it is also heuristically efficient to stand by your original conclusion once it is reached, no matter how flimsy the evidence you considered before reaching it. Again, this is simply a matter of saving time and energy. As a result, we use politicians we trust, political parties, or even celebrities as proxies to make up our minds about everything from the war in Iraq to climate change science — and once we adopt a position on any of these subjects, if challenged we deploy our cognitive faculties toward deflecting criticism rather than engaging it seriously.
This has been demonstrated on a variety of occasions well before the Prasad study. For instance, following the heuristic “if someone who seems to know what he is talking about asks me about X, then X likely exists, and I should have an opinion about it,” people volunteer “opinions” (i.e., they make up stuff out of thin air) concerning legislation that does not exist, politicians that do not exist, and even places that do not exist! Which is what happened in the Prasad study: many people apparently used the heuristic “if we went to war against country X, then country X must have done something really bad to us,” i.e. there must be a reason (even if I can’t think of one)!
There is serious doubt whether humans are — as Aristotle maintained — the rational animal, and we may not be the only political animals either (another of Aristotle’s characterizations of humanity), considering the mounting evidence on the politics of chimpanzees and other social primates. Nonetheless, both politics and rationality play a very important part in defining what it means to be human, and hence in giving meaning to our existence. The next time we find ourselves vehemently defending a political position, though, we may want to seriously ponder whether we are behaving as Bayesian updaters or whether — possibly more likely — we are deploying one of the six rationalizing strategies highlighted above. If so, our internal Bayesian calculator may require some tuning up. It would be good for us, and it would be good for our society.

53 comments:

  1. I always thought the reason was the U.N. resolutions, set up after the Gulf War, that Hussein broke. If not, then what would be the purpose of U.N. resolutions if they aren't enforced?

  2. U.N. Resolutions are another "Authority of Convenience" used when they suit your purposes and ignored otherwise.

    Tom

  3. Unfortunately most people have an opinion first and then look for justification second, instead of the other way round. I wrote about this phenomenon myself (click here) recently.

    BTW, 34a61e92... what do your friends call you? just "34a"? :p

  4. Sounds like we need a heuristic evaluation heuristic.

  5. RegCogitans: I recommend calling him "guid".

    As a web developer, I see those on a daily basis. Of course, they're not meant to be used in conversation.

  6. While I find these studies fascinating, and it's good that they are getting publicized (I've heard of this one from at least three other skepticism-related sources), I think that there's a troubling problem with the way we approach them, which is that we don't consider research about how to actually combat these biases. Simply telling people about such biases has an inconsistent effect. While self-criticism may make people more rational, people often take an attitude of assuming that everyone else is more biased than they themselves are.

    I also suspect that this is the case with respect to the Dunning-Kruger effect and considering ourselves well-informed; we can all think of things that other people are bad at, but think that they are good at. But naturally (tautologically?) we can't think of things that we think we are good at but are actually bad at.

    Or, to let Feynman say it faster: "The first principle is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that."

    I think that the world might be a better place if we valued skepticism a bit more as a form of personal virtue or skill, as opposed to treating it primarily as a social/political tool, which is what often seems to happen.

    I think it's also worth pointing out that some of these biases seem to be strongly linked to defensiveness and protection of one's personal/cultural identity. I think that a big part of persuading people away from irrational beliefs has to come from making it appear safer to change their minds, than to stick with a bad proposition.

    Which is perhaps something to consider about oneself as well. Do you support propositions because of their personal moral and emotional value to yourself, or do you attempt to restrain your self-identification with an idea, in order to approach it more objectively?

  7. We do our Bayesian updating subjectively rather than objectively. We don't adjust our subconscious premises to fit new facts; we simply distrust the accuracy or sufficiency of facts that don't fit.
    We have no good way to monitor this process consciously, so unless we are among those whom I'd call "subjectively astute," to make a long story short, we're screwed.

  8. Baron, except that even Bayesian subjectivism is supposed to converge to true posteriors (quick numerical sketch at the end of this comment). These people are simply not doing the updating at all.

    Sean, yes, good points indeed, particularly the one about skepticism as a personal value (as opposed to, say, faith). Still, knowing thyself is the first step into doing something to change thyself...
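    On the convergence point, here is a quick numerical sketch (all numbers made up, purely to illustrate): two subjective Bayesians start from very different priors, but because they keep updating on the same stream of evidence, their posteriors end up in nearly the same place.

        def update(prior, p_e_if_true, p_e_if_false):
            num = p_e_if_true * prior
            return num / (num + p_e_if_false * (1.0 - prior))

        optimist, pessimist = 0.90, 0.10   # hypothetical starting beliefs

        # Ten pieces of evidence, each twice as likely if the hypothesis is false.
        for _ in range(10):
            optimist = update(optimist, 0.3, 0.6)
            pessimist = update(pessimist, 0.3, 0.6)

        print(round(optimist, 3), round(pessimist, 3))   # both end up near zero

    The people in the study, by contrast, aren't even running the loop.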

  9. The most rational course of action is for non-experts to believe in a proposition to the degree that there is expert consensus. This is how I will be voting in 2012.

  10. Massimo, you list the 33% using "Attitude Bolstering" as people who stuck to their original opinion on the Iraq/911 link. However, your description of the Attitude Bolstering is that these people did in fact change their opinion on the Iraq/911 link (at least implicitly) but stuck to a different opinion (justification for invasion).
    Is this really relevant to the original question? Or is there some assumption that they sort of accepted the evidence but not really?

  11. Massimo, they are updating their defenses against what they suspect are duplicitous attacks against the strength of their convictions.
    In other words they're not astute enough to turn such facts to their advantage.
    Something not to be trusted will not converge to true up the posterior.

  12. Baron, that's another way to say that they are not Bayesian updaters.

    Eric, good point. I guess the authors felt that those subjects didn't change their mind on the underlying question, which was the justification for the Iraq war.

  13. No, it's another way of saying that either we are all Bayesian updaters, and some better at it than others, or the updater theory itself is to that extent wrong.
    And in fact where the trustworthiness quotient for the source of data has not been made a determinative factor, the theory will be wrong.

  14. Baron, no, there is no such thing as a better or worse Bayesian updater. It's an algorithm, not a subjective choice.

  15. Then the algorithm doesn't work as advertised. Even those who do the type of updating that fits your specifications will be required to factor in the degree to which they trust the source.

  16. And Massimo, who was it that called it the Bayesian updater theory, if that wasn't you? A theory about an algorithm that in and of itself was not theoretical?

  17. Baron,

    the term Bayesian updater is not mine, it's found in the paper. The algorithm works fine, the people don't.

  18. I see. The algorithm that's supposed to work on all people has been subverted of its purpose by the people that it doesn't work on.

  19. Bayesian updating is just a way to decide if you should change your mind when you come across new evidence. When you rationalize, you're doing something else.

  20. "If so, our internal Bayesian calculator may require some tuning up. It would be good for us, and it would be good for our society."

    That may be true, but how likely do you think it is? I'd generously ballpark the odds at somewhere near 1%.

    I am getting the impression that you are looking at our race's impending (yet agonizingly slow) self-inflicted doom with some regret rather than the proper degree of Malthusiasm.

    Eat, drink and be merry for tomorrow brings air choked with the heat of carbon dioxide, oceans choked with plastic and soil choked with poison.

  21. Thameron, I gotta give it to you: you are an incurable optimist.

    Baron, your fixation with purpose has a tendency to get the best of you. Bayesian theory has to do with probabilities; it doesn't have the purpose of making human beings more rational. But it is one way to measure just how irrational we are. And the answer seems to be: a lot.

  22. Hm, interesting.

    Not sure if the division of people into Bayesian updaters and motivated "reasoners" makes so much sense. Intuitively, I'd suspect everybody does both, the former with beliefs they are less attached to, and the latter with beliefs that are important for their tribal identity.

    The question would be how to have more people consider rationalism to be the core of their tribal identity instead of affiliation to a party, religion or country, because then they would use the updating approach for considerably more topics.

  23. Massimo, I was making reference to what you seem to think was the algorithm's predictive purpose, in the sense that Bayesian theories of probability have to do with the accuracy of their predictive functions. All algorithms serve a predictive purpose, some more accurately or effectively than others.

    Of course you don't seem to know that you think in terms of purpose, so of course it's point not taken.
    And what I'm "fixated" with is the ways you as a professional philosopher have found to get around its presence as the motivating principle of behavior. Maybe it's the problem with purpose that you had as a biologist in relation to the natural selection process. You could look into the purpose of instructive theories of adaptation as an antidote.
    Or not.

  24. Alex, I'm sure the authors wouldn't argue that some people *always* behave as Bayesian updaters while others *always* don't. You are correct, it will be context-dependent. And yes, the big question is how to change people's values. Making critical thinking courses mandatory at the high school level would be a way to start (together, perhaps, with a course on cognitive biases).

    Baron, just for fun, try not to write, utter or think the word "purpose" for, say, a week. I bet you can't do it.

  25. Massimo,
    Sure I can - just use the synonyms as you must do:
    motive, motivation, grounds, cause, occasion, reason, point, basis, justification; intention, aim, object, objective, goal, end, plan, scheme, target; ambition, aspiration; advantage, benefit, good, use, value, merit, worth, profit, function, role.

  26. As I said, it's an obsession.

  27. But I say it's a special obligation.

  28. Massimo:
    I'll be looking forward to your new book. Let us know when it's available for preorder.

  29. No worries, I will! BasicBooks should get the manuscript in May or early June; I hope that - not being an academic publisher - they'll be able to hit the bookstores by the end of the year.

  30. After all, what would life be like without its meaning?

  31. It would be sleep, eat, reproduce. Which ain't bad, actually...

  32. Its meaning being that it could be worse?

  33. It sounds like they probably did it right in the study you mention, but I get worried by studies that are based on factual questions that are more subtle than they may appear.

    If they ask whether there were demonstrated links between Saddam and 9/11, then the answer is no. But if they ask whether there were demonstrated links between Saddam and al Qaeda, then the answer is yes. (They may be weak links, but the correct answer is still yes.) So this makes me worry about how well respondents understood the question.

    There was a similar issue with studies asking Fox News viewers if we found weapons of mass destruction in Iraq. The correct answer is yes. But if they had asked whether we had found any recently produced WMDs, then the answer would be no. So in this case too, we're left without a solid interpretation of the results.

    One thing I wonder is whether people are biased, based on their political leanings, toward which question they think they are being asked. One way that could be tested is to ask some people the question with the other answer, i.e., ask them whether there were links between Saddam and al Qaeda rather than 9/11. It would be interesting to see if they respond in the same manner as you describe when corrected in this case also.

  34. I think in general that faith is powerful, if not quite powerful. I don't know how to make a study of this, because you can focus on the data you want and construct the evidence based on that.

    Even in politics: if, let's say, the party you like has done something wrong, you will try to minimize it, finding your party's successes, which you would count as evidence, so as to finish in the end where you wanted to end up.

  35. The human brain makes many errors when processing the data it's given. I think there is a similarity between rationalization in politics and how the brain deals with optical illusions. Even after you know what the truth is, you still see it incorrectly.

    Or consider what happens with anosognosics or schizophrenics. All you need is some bad "wiring" in the brain (which everyone probably has to some degree) and just about anything can seem reasonable.

  36. Although I do find the above useful and interesting, there is something about this that nags me. I do not subscribe to the opinion that Saddam was in league with al Qaeda, but what if there was a connection and we just have no evidence to back it up? I guess we take it on faith, but if it is true then is it not reasonable? Saddam was our enemy. Bin Laden was our enemy. They both financed terrorism and were both in relative proximity to each other, that is, geographically and in their hate for the US. Is it that unreasonable to think that these two forces joined together to bring down the US? Of course, this sounds like a conspiracy theory, and those who pride themselves on being "rational" and "right" always think they are right not to accept a claim because of the "craziness" of it. But it could be true nonetheless. So I think people on both sides of the rational/irrational divide are not so different.

    Also, being prior military, I find it odd that people who have never served in the military or been to war think they are the ones best placed to criticize war. War is destructive and people get hurt. But war happens, and it happens for various reasons. I have met a lot of people who love what they do and do not mind doing it. A lot of people I served with would love the opportunity to go to war, and many do multiple tours overseas. Are these people not rational or reasonable? War is like mankind's pastime. The point is, we have an all-volunteer military. These people should know what they are signing up for, and if they are not prepared, then do not sign up. Am I defending war? Yes I am. Not everyone chooses to sit in an office or a lab and criticize the actions of others because we have a problem with it and it does not fit our own lifestyle. The military is a lifestyle and its own subculture. We thrive on action and the fight. Come to think of it, much of what we do outside of war is like war, for example sports, politics, etc. When you are prepared to fight and die for your convictions, then I think you have grounds to criticize; but then you would say that that is unreasonable or terroristic thinking, or use the excuse given by Bertrand Russell, that is, not to die for one's beliefs because they could be wrong.

  37. See "Why Do Humans Reason? Arguments for an Argumentative Theory." As it says at Edge:
    "Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments."

  38. Reasoning was probably not designed by evolution to win arguments, because we didn't have arguments until very recently in biological history. More likely, it was designed to come up with heuristics for quick decision making, which is why it fails if untrained and unchecked in modern society.

  39. Reasoning no doubt developed for a variety of reasons, many of which involved manipulation and persuasion, and began evolving together with communication. Which would have happened long before we had the conscious or subconscious need to find heuristics. (If in fact some elements of heuristics did not precede the development of reason.)

  40. Hmm, thought I posted this yesterday; don't know why it wouldn't get through moderation.

    Massimo, this would have been a good place to link to Chris Mooney's article in Mother Jones.

  41. Professor, I believe you hit the nail square on the head with this one. Though I've doubted the foundational logic of Bayesian analysis from the moment I learned of it, I see nothing of significance I can add to or take away from your post.

  42. Baron, my point was that it couldn't possibly have to do with "arguments" because that requires language, which evolved late.

    Gadfly, good point. Here is the article by Chris: http://goo.gl/QxPgR

    Justin, not sure what problem you have with Bayesianism, but Bayesian theory is indisputably correct in terms of probability theory. As a form of epistemology, of course, things are more complex, but even philosophers who don't consider themselves card-carrying Bayesians agree that it has much to contribute to epistemology and philosophy of science.
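    For what it's worth, the sense in which Bayes' theorem is "indisputably correct" is just that it falls out of the definition of conditional probability in two lines (assuming the conditioning events have nonzero probability):

        P(A \mid B)\,P(B) \;=\; P(A \cap B) \;=\; P(B \mid A)\,P(A)
        \quad\Longrightarrow\quad
        P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)}

    How much epistemological weight that theorem can carry is, as I said, a more complicated question.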

  43. >"but even philosophers who don't consider themselves card carrying Bayesians agree that it has much to contribute to epistemology and philosophy of science."

    I see your point, and it is also possible that my childhood idolization of Einstein contributes to my skepticism of the uncertainty principle and subsequently [perhaps ironically] quantum mechanics. This could also be an example of me being “so obtuse” as to not become a Bayesian updater. It just seems to me that there is some dynamic that we simply haven’t discovered yet.

  44. See also: The Argumentative Theory
    A Conversation with Hugo Mercier [4.27.11]

    http://www.edge.org/?q=con-detail&cid=451

  45. So Massimo, why did you let this post come through but not the one I made much earlier today about the same theory?

  46. Massimo said: ...the big question is how to change people's values. Making critical thinking courses mandatory at the high school level would be a way to start (together, perhaps, with a course on cognitive biases).

    Having just read Chris Mooney's essay (thanks for the link), I'm wondering how your suggestion fits with his conclusion that: "If you want someone to accept new evidence, make sure to present it to them in a context that doesn't trigger a defensive, emotional reaction."

    In other words, if you choose the right frame, it's amazing how reasonable people become! But where's the evidence that simply teaching about cognitive biases and how to think in order to think "critically" makes any difference? Based on the Kahan study that Mooney cites (as well as other sources that I recall), it seems reasonable to only expect that people will use that knowledge "to generate more and better reasons to explain why they're right" and to recognize fallacies only in their opponents.

    OK, maybe that's a little too pessimistic. But cultural-value frameworks (conservative/progressive, or hierarchical/egalitarian and individualist/communitarian) seem especially "sticky", and less like products of pure reason (if such an idealized process existed) and more like products of repetition and trauma; i.e. two (not particularly rational or intellectual) conditions that physically change brains.

    So, while I would certainly agree that teaching critical thinking skills is valuable in its own right (and helps to explain why progressives like us reject science denialism and conspiracy theories - even when it comes from the left), it seems likely to me that something more is needed in order to change people's values. And that "something more" is probably at least as emotional (or non-intellectual) as it is rational in basis.

  47. Teach the Socratic method of argument. Instead of justifying their conclusions, have the divergent parties specify and justify their premises.

  48. I've often thought that Socrates would have made a good lawyer, which is perhaps why the Socratic method is still used in law schools. But that only reminds me of Chris Mooney's borrowed analogy from psychologist Jonathan Haidt:

    We may think we're being scientists [when we're reasoning -jcm], but we're actually being lawyers. Our "reasoning" is a means to a predetermined end—winning our "case"—and is shot through with biases.

    Not that anyone (as far as I know) would argue that the Socratic method is a scientific method. And I seem to recall learning something about both back in my public school days (1970's & 80's), although I'm sure that the curricula (humanities & natural sciences, respectively) could have withstood improvement.

  49. jcm
    For a look at how the Socratic method compares to the scientific, check this:

    http://www.niu.edu/~jdye/method.html

  50. And at the law school I attended, the Socratic method was to be used as an adjunct to advocacy, and not as a justification for the adversarial process. And if I recall my history correctly, in Socrates' time advocacy was a part of the inquisitorial system then in vogue.

  51. Massimo - I was not impressed with this study. While I agree with the basic premises that these strategies are often used in motivated reasoning, the methods of the study are odd. The response rate was very low, they specifically chose a poorly educated segment of the population, and a lot of the discrepancies (reading their examples) seem to stem from general ignorance, or from ambiguity in interpreting the survey. This makes the 2% figure all but worthless, in my opinion, and the authors admit their numbers do not generalize, even to their survey population, let alone the general population. I don't really think the data they collected support their hypothesis, and the percentages (except maybe relative to each other) are worthless.

    I also found that the authors engaged in a lot of simplistic reasoning, false dichotomies, simplistic assumptions, etc.

  52. Steve, yes, the study has problems, which as you say are acknowledged by the authors. And I certainly wouldn't rely heavily on the resulting percentages. But it does fit with other literature in political science and cognitive science, and I think it's a step in the right direction. Hopefully there'll be more.

  53. As I said, I agree with the premises, largely because they are supported by other research. I just found the authors' reasoning to be wanting. They made a lot of simplistic assumptions about how their subjects came to their conclusions, and also about cause and effect.

    As just one example - they question whether voting for Bush caused belief in a link, or belief in a link caused voting for Bush. False dichotomy - how about the far more likely possibility that a third factor, conservative politics, led to both? And there are other causal arrows as well, such as conservative leanings being reinforced by conservative media, defending the tribe, and just saving face. It seems that some people did update their beliefs, but then used rationalization to save face, and the authors don't address this distinction.

    Again - I am not doubting the underlying assumptions. But I think this study was not up to the task it set out for itself.

