About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, April 29, 2011

Lena's Picks

by Lena Groeger
* Carl Zimmer explores mental time travel, and why we must remember the past to envision the future. 
* A multinational project that attempts to explain religion, still in the “stamp-collecting” phase. They’ll probably be there for a little while.
* All about Christopher Hitchens and his riveting rhetoric. 
* “The lives of artists are more fragile than their creations.” The peril and necessity of artists who stand up against authoritarianism. 
* Our stereotypes of Mac vs PC people may have some truth…(and if not, they're still fun to look at!)
* “The human mind is simply terrible at politics.” Jonah Lehrer on why education might lead to less accurate beliefs. (It's also terrible at reasoning – Chris Mooney on the science of self-delusion.)
* How science literate are we? A panel discussion on the public understanding of science, communication efforts, and implications for policy and policy makers.
* How could we tell if animals had emotions?
* A long New Yorker piece on our perception of time, and why…it… slows… down… in moments of fear.
* It may not be rational, but could the way athletes speak predict their future performance? Achievement Metrics seems to think so.

Wednesday, April 27, 2011

An experimental study of rationalization in politics

by Massimo Pigliucci
[This is a partial excerpt from my forthcoming book: The Intelligent Person’s Guide to the Meaning of Life, BasicBooks, New York]
Every skeptic worth her salt knows about the basic logical fallacies, both of the “formal” kind (e.g., affirming the consequent) and of the “informal” variety (e.g., straw man). It rarely happens, however, that we get empirical studies on how many types of bad reasoning people actually deploy in everyday life, and how often. In preparation for a chapter of my forthcoming book on the meaning of life (as you know, I have usually tackled small and highly focused subject matters), I came across a fascinating study by Monica Prasad and colleagues at Northwestern University. The subjects in Prasad’s study were people who believed — against all evidence to the contrary — that there was a link between the former Iraqi dictator Saddam Hussein and the terrorist attacks of September 11, 2001 on American soil.
The researchers focused on this particular politically based belief because, as they put it, “unlike many political issues, there is a correct answer,” and because that belief was still held by about 50% of Americans as late as 2003 — despite the fact that President Bush himself at one point had in fact declared that “this administration never said that the 9/11 attacks were orchestrated between Saddam and Al Qaeda.” This isn’t a question of picking on Republicans, by the way, as Prasad and her colleagues wrote that they fully expected to find similar results had they conducted the study a decade earlier, targeting Democratic voters’ beliefs about the Clinton-Lewinsky scandal.
The hypotheses tested by Prasad’s group were two alternative explanations for why people hold on to demonstrably false beliefs. The “Bayesian updater” theory says that people change their beliefs in accordance with the available evidence, and therefore that a large number of people held onto the false belief of a connection between Hussein and 9/11 because of a subtle concerted campaign of misinformation by the Bush administration (despite President Bush’s statement above).
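For readers unfamiliar with the jargon: a “Bayesian updater” revises her confidence in a hypothesis according to Bayes’ theorem whenever new evidence comes in. Here is a toy sketch of what that revision looks like, applied to the Hussein–9/11 belief; every number in it is a made-up assumption for illustration, not a figure from the Prasad study.

```python
# Toy illustration (not from the study): Bayes' rule applied to the
# belief in a Hussein-9/11 link. All numbers are made-up assumptions.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose someone starts out 80% confident in the link.
prior = 0.80

# The evidence: the Bush administration's own denial. Assume an
# administration is unlikely to deny a link that really exists (5%),
# but very likely to deny one that doesn't (90%).
posterior = bayes_update(prior, p_evidence_if_true=0.05,
                         p_evidence_if_false=0.90)
print(round(posterior, 3))  # prints 0.182 -- confidence collapses
```

Under these (hypothetical) numbers, a genuine Bayesian updater’s confidence drops from 80% to roughly 18% after hearing the denial. As we will see, almost nobody in the study behaved this way.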
The alternative theory tested by Prasad and collaborators was what they called “motivated reasoning,” a situation in which people deploy a battery of cognitive strategies to avoid having to face the fact that one of their important beliefs turns out to be factually wrong. The results of the study are illuminating well beyond the specific issue of Hussein and 9/11, as the same “strategies” are deployed even by well informed and educated people in a variety of circumstances, from the political arena to the defense of pseudoscientific notions such as the alleged (and non-existent) link between vaccines and autism.
The first thing that Prasad et al. found was that, not surprisingly, belief does translate into voting patterns: interviewees who answered the question correctly (i.e., who knew that there was no demonstrated connection between Saddam Hussein and the 9/11 attacks) were significantly less likely to vote for Bush and more likely to vote for Kerry during the 2004 Presidential elections. I will leave it up to you to consider whether the a priori likelihood of this fact may in any way have influenced the Bush campaign’s “equivocal” statements concerning said link.
Of the people who did believe in the connection, how many behaved as Bayesian updaters, changing their opinion on the matter once presented with President Bush’s own statement that there was, in fact, no connection? A dismal 2%. The rest stuck with their original opinion, evidence to the contrary be damned, and deployed a whopping six different defensive strategies, which Prasad and her colleagues characterized in detail. Here they are, in decreasing order of importance:
Attitude bolstering (33%): or, as Groucho Marx famously put it, these are my principles; if you don’t like them, I’ve got others. This group simply switched to other reasons for why the US invaded Iraq, implicitly granting the lack of a Hussein-9/11 connection, yet not being moved to change their position on the actual issue, the Iraq war.
Disputing rationality (16%): as one of the interviewees put it, “I still have my opinions,” meaning that opinions can be held even without or against evidence, simply because it is one’s right to do so. (Indeed, one does have the legal right to hold onto wrong opinions under American law, as it should be; whether this is a good idea is an entirely different matter, of course.)
Inferred justification (16%): “If Bush thinks he did it, then he did it,” i.e. the reasoning here is that there simply must have been a rationale for the good guys (the US) to engage in something so wasteful of human life and resources as a war. The fact that they couldn’t come up with what exactly that reason might have been did not seem to bother these people very much.
Denying that they believed in the link (14%): these are subjects who had said they believed in the link between Iraq and 9/11, but when challenged they changed their story, often attempting to modify their original statement, as in “oh, I meant there was a link between Afghanistan [instead of Iraq] and 9/11.”
Counter-arguing (12%): this group admitted that there was no direct evidence connecting Saddam Hussein and the terrorist attacks, but nevertheless thought that it was “reasonable” to believe in a connection, based on other issues, such as Hussein’s general antipathy for the US, or his “well known” support of terrorism in general.
Selective exposure (6%): finally, these are people who simply refused to engage the debate (while still not changing their mind), adducing excuses along the lines of “I don’t know enough about it” (which may very well be true, but of course would be more consistent with agnosticism on the issue).
How is any of the above possible? Are people really so obtuse that they will largely fail to behave as “Bayesian updaters,” i.e. to take the rational approach to assessing evidence and belief? There is no need to be too hard on our fellow humans — or indeed ourselves, since we likely behave in a very similar fashion in a variety of circumstances. What is going on here is that most of us, most of the time, use what cognitive scientists call “heuristics,” i.e. convenient shortcuts or rules of thumb, to quickly assess a situation or a claim. There is good reason to do this, since on most occasions we simply do not have the time and resources to devote to serious research on a particular subject matter, even in the internet and smartphone era of information constantly at your fingertips. Besides, sometimes we are simply not motivated enough to do the research even if we do have the time — the issue might not be important enough compared to our need to get the groceries done or the car cleaned.
Unfortunately, it is also heuristically efficient to stand by your original conclusion once reached, no matter how flimsy the evidence you considered before reaching it. Again, this is simply a matter of saving time and energy. As a result, we use politicians we trust, political parties, or even celebrities as proxies to make up our mind about everything from the war in Iraq to climate change science — and once we adopt a position on any of these subjects, if challenged we deploy our cognitive faculties toward deflecting criticism rather than engaging it seriously.
This has been demonstrated on a variety of occasions well before the Prasad study. For instance, following the heuristic “if someone who seems to know what he is talking about asks me about X, then X likely exists, and I should have an opinion about it,” people volunteer “opinions” (i.e., they make up stuff out of thin air) concerning legislation that does not exist, politicians that do not exist, and even places that do not exist! Which is what happened in the Prasad study: many people apparently used the heuristic “if we went to war against country X, then country X must have done something really bad to us,” i.e. there must be a reason (even if I can’t think of one)!
There is serious doubt whether humans are — as Aristotle maintained — the rational animal, and we may not be the only political animals either (another of Aristotle’s characterizations of humanity), considering the mounting evidence on the politics of chimpanzees and other social primates. Nonetheless, both politics and rationality play a very important part in defining what it means to be human, and hence in giving meaning to our existence. The next time we find ourselves vehemently defending a political position, though, we may want to seriously ponder whether we are behaving as Bayesian updaters or whether — possibly more likely — we are deploying one of the six rationalizing strategies highlighted above. If so, our internal Bayesian calculator may require some tuning up. It would be good for us, and it would be good for our society.

Tuesday, April 26, 2011

Michael’s Picks

by Michael De Dora
* An Illinois circuit judge has ruled that pharmacists with religious objections to the “morning-after” pill cannot be compelled to sell the product. 
* Here’s video from my recent public dialogue with Father Jonathan Morris on secular and religious morality.
* The New York Times discusses why none of the high-profile participants in the financial crash have faced legal punishment. 
* A new study in Nature casts doubt on the idea that human languages share universal features that are dictated by human brain structure.
* A long lost letter written by one of Abraham Lincoln’s best friends, William Herndon, sheds some light on the former president’s position on religion.
* Rep. Michele Bachmann (R-Minn.) says God told her to introduce a constitutional amendment barring same-sex marriage. 
* Andrew Revkin on when rationalization masquerades as reason. 
* An agnostic woman details her trouble finding love in the Bible Belt (Nashville, Tennessee). 
* Unrelated to speaking rationally, but very cool: 100 incredible views out airplane windows.

Sunday, April 24, 2011

NPR on miracles

by Massimo Pigliucci
“A miracle is a violation of the laws of nature; and as a firm and unalterable experience has established these laws, the proof against a miracle, from the very nature of the fact, is as entire as any argument from experience can possibly be imagined.” (David Hume, Of Miracles)


Well, I guess it was bound to happen: it is Easter weekend, and even National Public Radio had to broadcast some cheesy story about religion. Even so, I was not prepared for the amount of sheer nonsense that I heard from Barbara Bradley Hagerty over at Morning Edition.
The basic story was in fact heart wrenching: an 11-year-old boy, Jake Finkbonner of Ferndale, WA, back in 2006 experienced a terrible infection from flesh-eating bacteria that his doctors thought would spell certain and painful death. Dr. Richard Hopper, Jake’s physician at Seattle Children's Hospital, told Hagerty that he had never seen such a serious case, that no matter how fast the surgeons were removing the boy’s skin to try to stop the infection, the bacteria kept spreading, at the alarming speed of up to half an inch in an hour.
The doctor did say some strange things for a medical practitioner during the interview, like “The infection was like it had a life of its own.” Well, yes, it literally did! And understandably — if irrationally — the parents resigned themselves to the fate of their son by seeking comfort in their religion. As Jake’s mother put it, after having called a Catholic priest for the last rites, “Donny [the father] and I went off to the chapel and just surrendered Jake back to God. We just said, ‘God, he is yours. Thy will be done, and if it is your will to take him home, then so be it.’”
As I said, so far this is a tragic story which, remarkably, had a quasi-happy ending. The infection stopped as suddenly as it began, probably because of a combination of running its course and of the excellent medical treatment that Jake had received for two weeks (and a dozen surgeries).
The boy’s mother's comment, however, was “There's no question in my mind that it was in fact a miracle.” Why? Because in the meantime she had enlisted the prayers of parishioners at St. Joseph’s Catholic Church, who in turn had appealed to “Blessed Kateri,” a woman who lived 350 years ago and had apparently died of a disease that disfigured her face (my guess: Seattle and its excellent medical facilities hadn’t been built yet). In the hospital, Jake was visited by a representative of the Society of Blessed Kateri (!) who promptly gave Jake’s mother a pendant with the image of Kateri on it. That, apparently, is all it took (forget the two weeks of surgeries): a miracle was accomplished.
Here is where the story turns bizarre, casting a serious shadow on NPR and Barbara Bradley Hagerty’s professionalism as a journalist. She asked whether the Catholic Church will accept the miracle “explanation” as genuine, and said: “To qualify as an authentic miracle, the Vatican has to determine that Jake’s recovery was unexplainable and that it occurred because people prayed to Kateri to intercede with God on Jake’s behalf.”
Well, I’m sure you are all curious to see how exactly the Vatican might go about accomplishing such a, ahem, miraculous feat! No worries, Father Paul Pluth — who is conducting the “investigation” of Jake’s case on behalf of that international nutcase, the Pope — elaborates: you see, according to the good Father, Kateri has special access to God, “[which] means we have received assurances that this person now stands in heaven before the throne of God. One of the evidences of that has been miracles of healing.” Clearly, the good Father didn’t take Logical Fallacies 101, or he would have recognized this as what philosophers call begging the question. But never mind, let us proceed with this increasingly ridiculous story that has brought NPR, temporarily one hopes, down to the level of Faux News.
Father Peter Gumpel, a Jesuit, explains that “these days, the bar is pretty high” (I swear, I’m not making this up!), as demonstrated by the fact that the Church dismisses 95% of the cases of miracles it receives. Such rigor is necessary because, you see, “the Vatican does not want to approve miracles lightly, thus misleading people or looking foolish if the miracle turns out to have a logical explanation.” You don’t say, Sherlock.
Gumpel assured NPR that “we do not want to submit to the Pope a statement unless we are absolutely, morally certain that this case merits to be approved by him as a miracle by God.” I’m sorry, did you say morally certain? One may wonder what morality has to do with ascertaining matters of fact, but the Jesuit was probably making a rather obscure reference to the Aristotelian concept of probabilistic certainty, which has little to do with morality as we understand it today. Regarding the latter, though, it is worth noting that surgeons are still working on reconstructing the boy’s face; let us therefore thank God the infinitely merciful and moral.
Enter Eusebio Elizondoas, the Vatican’s “Devil’s Advocate” (these days his role is known by the more positive title of “Promoter of Justice”). This guy, in Hagerty’s story, is — again, I kid you not — a “skeptic”! A well deserved title, considering the scientific approach used by Elizondoas: “I’m trying to really push every single witness [and asking] ‘Really, are you sure? Are you positive that there’s no other way to explain this, a logical explanation or a scientific explanation or it was a pure coincidence?’”
Now just imagine for a minute that you were investigating a UFO appearance, likely caused by a bright satellite in orbit around the earth. You ask the witness: “Really, are you sure? Are you positive there is no other way to explain this?” And he says “No, I’m sure, it was Martians,” and you smile and go home, secure that your job as a skeptic had been carried out and that once more truth, evidence and logic had triumphed. Holy cannoli!
Even Dr. Rubens, Jake’s physician, despite his obvious accomplishments as a surgeon, displayed an incredible amount of faulty logic. He was impressed by the Devil’s Advocate and his assistants: “They took a very hard look at whether this really was something beyond what they described as the wonders of modern medicine.” And how on earth would they know, Doctor?
This is the only time in the story where a real skeptic, CSI’s Joe Nickell, finally makes a brief appearance (we are near the end of the segment, but now we have Fairness and Balance!). Joe gets to say what everybody else ought to have figured out from the beginning: “When the Catholic Church confirms miracles, it's using what is called an argument from ignorance.” Yup, once again, Logical Fallacies 101, which apparently they don’t teach in journalism school.
But Hagerty couldn’t possibly leave it at that, it wouldn’t make for a good religion-friendly story. So she ends instead by again quoting Dr. Rubens: “I can't explain why he would survive over someone else,” and of course the boy’s mother: “It would be disappointing if [Kateri] didn't get to be a saint.” Well, that will be for the Pope to decide. He has a solid background in these matters, considering that before becoming Pope Benedict XVI, Joseph Aloisius Ratzinger was the Prefect of the Vatican’s Sacred Congregation for the Doctrine of the Faith — what used to be known as the Inquisition.

Thursday, April 21, 2011

Massimo's Picks

by Massimo Pigliucci
* The definition of insanity: the US accounts for 43% of the entire world's "defense" budget. Defense from what?
* Old but good: Fodor may be right, Clark and Chalmers need to take Phil Mind 101.
* Academic journal caves in to Intelligent Design lobby. Boycott campaign started.
* The Philadelphia Inquirer quotes yours truly on Intelligent Design and other nonsense on stilts.
* Yet another damning review of The Moral Landscape. Why do some people take Sam Harris so seriously, one wonders?
* Atlas Shrugged: a bad novel, now a bad movie (not to mention a horrible philosophy).
* More on Rand: her early hero was a child murderer, whom she admired because for him others did not exist...
* What exactly is philosophy of science? Here's one take.

Wednesday, April 20, 2011

Michael’s Picks

by Michael De Dora
* For the last three years, Gallup has called 1,000 randomly selected American adults each day and asked them about several different indicators of their quality of life. Responses were then converted to the Gallup-Healthways Well-Being Index. Here are the results, visually displayed on a map of the U.S. 
* In the New York Times, Timothy Egan defends President Obama’s so-called “reflective dithering.” 
* Dahlia Lithwick argues that Justice Clarence Thomas just issued “one of the cruelest Supreme Court decisions ever.”
* Scientists claim that chickens have empathy because mother chickens reportedly show distress when their offspring experience pain in their presence.
* A new study suggests religious people are not happy because of their religion, but because of their social network.
* James Warren discusses research that explores why experts get it wrong. 
* Are federal employees overpaid? Republicans say yes, but The Associated Press finds that doesn’t necessarily hold up.  
* I’m not sure how this essay on the federal budget got published on FOX News’ opinion page, but I’m not complaining.

Monday, April 18, 2011

Only the platform: Patricia Churchland’s science of morality

by Lena Groeger
A few weeks ago I went to a talk by philosopher-turned-neuroscientist Patricia Churchland about her new book Braintrust. The talk began with the moderator turning to a packed audience in Columbia’s Havemeyer Hall and asking quite pointedly: “With a show of hands, can science tell us right from wrong?”
Only about four hands went up.
“All right,” he said, beckoning Churchland to the stage, “let’s see what you all think afterwards.”
Presumably Churchland was about to change a few hundred minds on the science of morality. But as she proceeded through her lecture, it became increasingly clear that even she wouldn’t answer the moderator’s question wholeheartedly in the affirmative. She was providing the “yes” to another question, something more like “Can science tell us about right and wrong?” While the question is slightly less interesting (because it seems so obvious) her answer is fascinating.
It all begins with me. Ok, not me, but the self. Each one of us is equipped with a neural circuitry that ensures our own self-caring and well-being — values in the most fundamental sense. As Churchland likes to say “we’re all born with systems that are very deep in the values business.” Neurons in the brainstem and hypothalamus monitor the inner state of our bodies to keep us alive; they also cause us to run from predators or eat when we’re hungry. Without these life-relevant feelings we wouldn't survive very long, let alone reproduce.
The next step is to move from self-caring to other-caring. In mammals, this shift occurs not by some radical new engineering plan, but by slight adjustments to the neural mechanisms that are already in place. Modifications to the emotional, endocrine, stress and reward/punishment systems motivate new values, namely, the well-being of certain others. It’s as if the “golden circle of me” expands to include offspring, mates, friends and eventually even strangers.
At the heart of all these modifications and changes to the brain is a relatively simple hormone called oxytocin. Oxytocin is thought to play an important role in mammalian bonding, evoking feelings of contentment and trust, reducing defensive behaviors like fleeing or fighting, and increasing the sense of calmness and security. Churchland describes the importance of oxytocin by telling her favorite story of all time — it involves voles.
Actually, two types of voles: prairie voles and montane voles. Prairie voles bond for life; montane voles are promiscuous. Male prairie voles protect their pups from harm, provide them with food, and fight off other males. Male montane voles take no role in guarding the nest, the female, or the pups. Scatter them across a room, and prairie voles will collect back together in a huddle. Montane voles are content to be left alone.
What makes these furry little rodents behave so differently? In the 1970s neuroscientist Sue Carter decided to look for the answer in the brain. She found that in a very specific place, the density of oxytocin receptors was much higher in prairie voles than in montane voles. Subsequent studies have shown that blocking the receptors for oxytocin in prairie voles changes their social behavior dramatically, and they no longer bond with their mates. For Churchland, this story was the clue that oxytocin was the neural mechanism for attachment, or what “Hume might accept as the germ of the moral sentiment.”
So if oxytocin is something like the building block of morality, then attachment and trust are the platform. These basic dispositions — to extend care to others, to want to belong, to be distressed by separation — constitute the motivation for animals to find solutions to social problems. They shape what Churchland dubs “the moral problem space,” so that only certain kinds of problems arise and only certain kinds of solutions are ever really entertained.
Churchland is adamant in pointing out that this neural platform for morality is only the platform. The rest of the scaffolding — the culture in which these brains live — is still very much being worked out. As she writes in her book: “There is no simple set of steps to take us from ‘I care, I value’ to the best solution to specific moral problems, especially those problems that arise within complex cultures. It’s a messy practical business.”
Which leads me to two final points (they actually came up in the Q&A after the main talk, but I think they are absolutely central to her thesis). They have to do with what Churchland does not claim, and can be seen as direct replies to other voices in the science of morality discussion.
First, in response to a Sam Harris-esque type of “science can give us answers to moral questions because values are a kind of fact,” Churchland explicitly states that neuroscience cannot answer questions such as “When can organs be donated?” or “When is an inheritance tax fair?” or “When is a war a just war?” On questions of this sort, neuroscience has nothing to say directly. To get some answers, we have to talk to each other, consider other points of view, see what kinds of consequences follow what sorts of actions, etc. Morality is a practical problem-solving endeavor, and to solve problems we must balance various considerations against each other to produce suitable — albeit not perfect — solutions.
Second, in response to a tendency to attribute universal moral intuitions to an innate moral sense or biological foundation (à la Marc Hauser or Jonathan Haidt), Churchland warns us to proceed with caution. Just because something is very common doesn’t mean it’s innate. Making boats out of wood is common in all sorts of cultures across the world — does its universality make it fundamental to our biology? Of course not; people make boats out of wood because wood floats, is available in many places, and is pretty easy to work with. It’s a good solution to a common problem. Moral problems may be solved in a similar fashion, without the necessity for an innate grammar or specific moral foundation.
If you want a much more detailed (and slightly more coherent) explanation of Churchland’s thoughts on the matter, I would definitely take a look at Braintrust. It makes abundantly clear just how much science has to say about the roots of morality, but also, just how much more work needs to be done on that scaffolding.

Friday, April 15, 2011

Evolution as pseudoscience?

by Massimo Pigliucci
Don't worry, despite the title of this post, I have not suddenly gone creationist. Rather, I have been intrigued by an essay by my colleague Michael Ruse, entitled “Evolution and the idea of social progress,” published in a collection that I am reviewing, Biology and Ideology from Descartes to Dawkins (gotta love the title!), edited by Denis Alexander and Ronald Numbers.
Ruse, who has arguably written more than anyone else on the planet about the history and philosophy of Darwinism, is also contributing to a large collection of new essays on the demarcation problem (the distinction between science and pseudoscience), that Maarten Boudry and I are putting together for the University of Chicago Press (provisional title: The Philosophy of Pseudoscience: Reconsidering the Demarcation Problem — stay tuned).
Ruse's essay in the Alexander-Numbers collection questions the received story about the early evolution of evolutionary theory, which sees the stuff that immediately preceded Darwin — from Lamarck to Erasmus Darwin — as protoscience, the immature version of the full-fledged science that biology became after Chuck's publication of the Origin of Species. Instead, Ruse thinks that pre-Darwinian evolutionists really engaged in pseudoscience, and that it took a very conscious and precise effort on Darwin’s part to sweep away all the garbage and establish a discipline with empirical and theoretical content analogous to that of the chemistry and physics of the time.
Ruse asserts that many serious intellectuals of the late 18th and early 19th century actually thought of evolution as pseudoscience, and he is careful to point out that the term “pseudoscience” had been used at least since 1843 (by the physiologist Francois Magendie), while the concept was prominently on display during the historical investigation of mesmerism ordered in 1784 by King Louis XVI of France and jointly carried out by Antoine Lavoisier and Benjamin Franklin.
Ruse’s somewhat surprising yet intriguing claim is that “before Charles Darwin, evolution was an epiphenomenon of the ideology of [social] progress, a pseudoscience and seen as such. Liked by some for that very reason, despised by others for that very reason.”
Indeed, the link between evolution and the idea of human social-cultural progress was very strong before Darwin, and was one of the main things Darwin got rid of. The encyclopedist Denis Diderot was typical in this respect: “The Tahitian is at a primary stage in the development of the world, the European is at its old age. The interval separating us is greater than that between the new-born child and the decrepit old man.” Similar nonsensical views can be found in Lamarck, Erasmus, and Chambers, the anonymous author of The Vestiges of the Natural History of Creation, usually considered the last protoscientific book on evolution to precede the Origin.
On the other side of the divide were social conservatives like the great anatomist Georges Cuvier, who rejected the idea of evolution — according to Ruse — not as much on scientific grounds as on political and ideological ones. Indeed, books like Erasmus’ Zoonomia and Chambers’ Vestiges were simply not much better than pseudoscientific treatises on, say, alchemy before the advent of modern chemistry.
One of Ruse’s interesting points is that people were well aware of this sorry situation, so much so that astronomer John Herschel referred to the question of the history of life as “the mystery of mysteries,” a phrase consciously adopted by Darwin in the Origin. Darwin set out to solve that mystery under the influence of three great thinkers: Newton, the above mentioned Herschel, and the philosopher William Whewell (whom Darwin knew and assiduously frequented in his youth). Let us take them briefly one by one to see exactly what Ruse means.
Darwin was a graduate of the University of Cambridge, which had also been Newton’s home. Early in his Cambridge education, Chuck was drilled with the idea that good science is about finding mechanisms (vera causa), something like the idea of gravitational attraction underpinning Newtonian mechanics. He reflected that all the talk of evolution up to then — including his grandfather’s — was empty, without a mechanism that could turn the idea into a scientific research program.
The second important influence was Herschel’s Preliminary Discourse on the Study of Natural Philosophy, published in 1831 and read by Darwin shortly thereafter, in which Herschel sets out to give his own take on what today we would call the demarcation problem, i.e. what methodology is distinctive of good science. One of Herschel’s points was to stress the usefulness of analogical reasoning (more on this in a moment).
Finally, and perhaps most crucially, Darwin also read (twice!) Whewell’s History of the Inductive Sciences, which appeared in 1837. In it, Whewell sets out his notion that good scientific inductive reasoning proceeds by a consilience of inductions, a situation in which multiple independent lines of evidence point to the same conclusion.
Here is the cool thing: the first part of the Origin, where Darwin introduces the concept of natural selection by way of analogy with artificial selection, can be read as the result of Herschel’s influence (natural selection is the vera causa of evolution), while the second part of the book, constituting Darwin's famous “long argument,” applies Whewell’s method of consilience by bringing in evidence from a number of disparate fields, from embryology to paleontology to biogeography.
What, then, happened to the strict coupling of the ideas of social and biological progress that had preceded Darwin? While he still believed in the former, the latter was no longer an integral part of evolution, because natural selection makes things “better” only in a relative fashion. There is no meaningful sense in which, say, a large brain is better than very fast legs or sharp claws, as long as you still manage to have dinner and avoid being dinner by the end of the day (or, more precisely, by the time you reproduce).
Ruse’s claim that evolution transitioned not from protoscience to science, but from pseudoscience, makes sense to me given the historical and philosophical developments. It wasn’t the first time either. Just think about the already mentioned shift from alchemy to chemistry. Of course, the distinction between pseudoscience and protoscience is itself fuzzy, but we do have what I think are clear examples of the latter that cannot reasonably be confused with the former, SETI for one, and arguably Ptolemaic astronomy. We also have pretty obvious instances of pseudoscience (the usual suspects: astrology, ufology, etc.), so the distinction — as long as it is not stretched beyond usefulness — is interesting and defensible.
It is amusing to speculate which, if any, of the modern pseudosciences (cryonics, singularitarianism) might manage to transition in one form or another to actual sciences. To do so, they may need to find their philosophically and scientifically savvy Darwin, and a likely bet — if history teaches us anything — is that, should they succeed in this transition, their mature form will look as different from the original as chemistry does from alchemy, or as Darwinism does from pre-Darwinian evolutionism.

Wednesday, April 13, 2011

Michael’s Picks

by Michael De Dora
* The Supreme Court rules unanimously that corporations have no right to personal privacy when it comes to government records requested under the Freedom of Information Act.
* Robert Gagnon, responding to Jennifer Wright Knust, says the Bible really does condemn homosexuality.
* Eugenie Scott, Lawrence Krauss, Frans de Waal and more answer the question, “What should Obama and Congress do for science?”
* Why are humans so interested in watching video footage of horrific natural disasters? 
* Another classic from The Onion: local skeptic pitied for his “tragic reluctance to embrace the unverifiable.”
* An absolutely incredible video constructed from photographs taken by the Cassini spacecraft as it approached Saturn. 
* Joshua Knobe argues that evidence suggests humans practice morality in a relative way. Which, of course, doesn’t suggest it ought to be that way.  
* Farhad Manjoo reviews a couple of smartphone applications that could help you speak better in public.

Monday, April 11, 2011

Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank?

by Massimo Pigliucci
Everyone keeps telling me that Ray Kurzweil is a genius. Kurzweil has certainly racked up a number of impressive accomplishments. Kurzweil Computer Products produced the first optical character recognition system (developed by designer-engineer Richard Brown) in the mid-‘70s, and another of his companies, Kurzweil Music Systems, put out a music synthesizer in 1984. A few years later Kurzweil Applied Intelligence (the guy really likes to see his name in print) designed a computerized speech recognition system further developed by Kurzweil Educational Systems (see what I mean?) for assistance to the disabled. Other ventures include Kurzweil's Cybernetic Poet, Kurzweil Adaptive Technologies, and Kurzweil National Federation of the Blind Reader. In short, the man has a good sense of business and self-promotion — and there is nothing wrong with either (within limits).
However, I’m writing about him because of his more recent, and far more widely publicized, role as a futurist, and in particular as a major mover behind the Singularitarian movement, a strange creature that has been hailed both as a visionary view of the future and as a secular religion. Another major supporter of Singularitarianism is philosopher David Chalmers, by whom I am equally underwhelmed, and whom I will take on directly in the near future in the more rarefied realm of academic publications.
Back to Kurzweil. I’m not the only one to have significant problems with him (and with the idea of futurism in general, particularly considering the uncanny ability of futurists to get things spectacularly and consistently wrong). John Rennie, for instance, published a scathing analysis of Kurzweil’s much touted “prophecies,” debunking the notion and showing that the guy has alternately been wrong or trivial in his earlier visions of the future.
(Here is a taste of what Rennie documented, from a “prediction” Kurzweil made in 2005: “By 2010 computers will disappear. They'll be so small, they'll be embedded in our clothing, in our environment. Images will be written directly to our retina, providing full-immersion virtual reality, augmented real reality. We'll be interacting with virtual personalities.” Oops. Apparently the guy doesn’t know that one should never, ever make predictions that might turn out to be incorrect in one’s own lifetime. Even the Seventh Day Adventists have learned that lesson!)
It is pretty much impossible to take on the full Kurzweil literary production; the guy writes faster than even I can, and I fully expect a book-length rebuttal to this post, if he comes across it in cyberspace (the man leaves no rebuttal unrebutted). Instead, I will focus on a single detailed essay he wrote entitled “Superintelligence and Singularity,” which was originally published as chapter 1 of his The Singularity is Near (Viking 2005), and has been reprinted in an otherwise insightful collection edited by Susan Schneider, Science Fiction and Philosophy.
In the essay in question, Kurzweil begins by telling us that he gradually became aware of the coming Singularity, in a process that, somewhat peculiarly, he describes as a “progressive awakening” — a phrase with decidedly religious overtones. He defines the Singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Well, by that definition, we have been through several “singularities” already, as technology has often rapidly and irreversibly transformed our lives.
The major piece of evidence for Singularitarianism is what “I [Kurzweil] have called the law of accelerating returns (the inherent acceleration of the rate of evolution, with technological evolution as a continuation of biological evolution).” He continues by acknowledging that he understands how so many people don’t get it, because, you see, “after all, it took me forty years to be able to see what was right in front of me.” Thank goodness he didn’t call it the Kurzweil law of accelerating returns, though I’m sure the temptation was strong.
Irritating pomposity aside, the first obvious serious objection is that technological “evolution” is in no logical way a continuation of biological evolution — the word “evolution” here being applied with completely different meanings. And besides, there is no scientifically sensible way in which biological evolution has been accelerating over the several billion years of its operation on our planet. So much for scientific accuracy and logical consistency.
Kurzweil proceeds with a simple lesson meant to impress on us the real power of exponential growth, which he claims characterizes technological “evolution.” If you check out the original essay, however, you will notice that all of the diagrams he presents to make his case (Figs. 1-6) are simply made up by Kurzweil, either because they do not show any actual data (Fig. 1) or because they are an arbitrary assemblage of “canonical milestones” lined up so as to show that there has been progress in the history of the universe (Figs. 2-6).
For instance, in Fig. 6 we have a single temporal line where “Milky Way,” “first flowering plants,” “differentiation of human DNA type” (whatever that means), and “rock art” are nicely lined up to fill the gaps between the origin of life on earth and the invention of the transistor. In Figs. 3 and 4, a “countdown to Singularity” is illustrated by a patchwork of evolutionary and cultural events, from the origin of reptiles to the invention of art, again to give the impression that — what? There was a PLAN? That the Singularity was inherent in the very fabric of the universe?
Now, here is a bit that will give you an idea of why some people think of Singularitarianism as a secular religion: “The Singularity will allow us to transcend [the] limitations of our biological bodies and brains. We will gain power over our fates. Our mortality will be in our own hands. We will be able to live as long as we want.” Indeed, Fig. 2 of that essay shows a progression through (again, entirely arbitrary) six “epochs,” with the next one (#5) occurring when there will be a merger between technological and human intelligence (somehow, a good thing), and the last one (#6) labeled as nothing less than “the universe wakes up” — a nonsensical outcome further described as “patterns of matter and energy in the universe becom[ing] saturated with intelligence processes and knowledge.” This isn’t just science fiction, it is bad science fiction.
There are several unintentionally delightfully ironic sentences scattered throughout Kurzweil’s essay: “The future is widely misunderstood ... The future will be far more surprising than most people realize,” etc. Of course, “most” people doesn’t include our futurist genius, despite the fact that he has already been wrong about the future, and spectacularly so.
And then there is pure nonsense on stilts: “we are doubling the paradigm-shift rate every decade.” What does that even mean? Paradigm shifts in philosophy of science (à la Thomas Kuhn) are a fairly well understood — if controversial — concept. But outside of it, the phrase has simply come to mean any major change, arbitrarily defined. Again, not the stuff of rigorous analysis, much less of serious prediction.
And there is more, much more: “a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process.” First, it is highly questionable that one can even measure “technological change” on a coherent uniform scale. Yes, we can plot the rate of, say, increase in microprocessor speed, but that is but one aspect of “technological change.” As for the idea that any evolutionary process features exponential growth, I don’t know where Kurzweil got it, but it is simply wrong, for one thing because biological evolution does not have any such feature — as any student of Biology 101 ought to know.
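To make the point concrete, here is a toy numerical sketch (all parameters made up purely for illustration) contrasting unbounded exponential growth with the logistic growth that standard population biology uses for populations facing finite resources — the latter levels off at a carrying capacity instead of exploding without bound:

```python
# Toy comparison: exponential vs. logistic growth (illustrative parameters only).
# Exponential: dN/dt = r*N. Logistic: dN/dt = r*N*(1 - N/K), which levels
# off at the carrying capacity K instead of growing without bound.

def simulate(r=0.5, K=1000.0, n0=1.0, steps=60, dt=1.0):
    """Crude Euler integration of both models from the same starting point."""
    exp_n, log_n = n0, n0
    for _ in range(steps):
        exp_n += r * exp_n * dt
        log_n += r * log_n * (1 - log_n / K) * dt
    return exp_n, log_n

exp_final, log_final = simulate()
print(f"exponential after 60 steps: {exp_final:.3g}")  # tens of billions
print(f"logistic after 60 steps:    {log_final:.3g}")  # saturates near K = 1000
```

The exponential trajectory blows past ten billion while the logistic one flattens out near its carrying capacity; nothing in a resource-limited process licenses extrapolating an early exponential phase indefinitely, which is precisely the move Kurzweil makes.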
Kurzweil’s ignorance of evolution is manifested again a bit later, when he claims — without argument, as usual — that “Evolution is a process of creating patterns of increasing order. ... It’s the evolution of patterns that constitutes the ultimate story of the world. ... Each stage or epoch uses the information-processing methods of the previous epoch to create the next.” I swear, I was fully expecting a scholarly reference to Deepak Chopra at the end of that sentence. Again, “evolution” is a highly heterogeneous term that picks out completely different concepts, such as cosmic “evolution” (actually just change over time), biological evolution (which does have to do with the creation of order, but not in Kurzweil’s blatantly teleological sense), and technological “evolution” (which is certainly yet another type of beast altogether, since it requires intelligent design). And what on earth does it mean that each epoch uses the “methods” of the previous one to “create” the next one? Techno-mystical babble this is.
In his description of the progression between the six epochs, Kurzweil dances a bit too close to the infamous anthropic principle, when he says “The rules of our universe and the balance of the physical constants ... are so exquisitely, delicately and exactly appropriate ... that one wonders how such an extraordinary unlikely situation came about.” Can you say Intelligent Design, Ray? This of course had to follow a paragraph including the following sentence: “we do know that atomic structures store and represent discrete information.” Well, only if one adopts such a general definition of “information” that the word entirely loses meaning. Unless of course one has to force the incredibly chaotic and contingent history of the universe into six nicely lined up epochs that start with “Physics and Chemistry: information in atomic structures.”
The jump from epoch 2 (biology and DNA) to 3 (brains) is an almost comical reincarnation of the old scala naturae, the great chain of being that ascended from minute particles and minerals (Kurzweil’s physics and chemistry age) to plants (epoch 2), animals (epoch 3), humans (epoch 4) and... Well, that’s where things diverge, of course. Instead of angels and god we have, respectively, human-computer hybrids and the Singularity. The parallels are so obvious that I can’t understand why it took me forty years to see them (it didn’t really, it all came to me in a rapid flash of awakening).
Where does Kurzweil get his hard data for the various diagrams purportedly showing this cosmic progression through his new scala naturae? Fortunately for later scholars, he tells us: the Encyclopedia Britannica, the American Museum of Natural History (presumably one of those posters about the history of the universe they sell in their gift shop), and Carl Sagan’s cosmic calendar (which Sagan used as a metaphor to convey a sense of the passing of cosmic time in his popular book, The Dragons of Eden). I bow to the depth of Kurzweil’s scholarship.
And finally, we get to stage 6, when the universe “wakes up.” How is this going to happen? Easy: “[the universe] will achieve this by reorganizing matter and energy to provide an optimal level of computation to spread out from its origin on Earth.” Besides the obvious objection that there is no scientific substance at all to phrases like the universe reorganizing matter and energy (I mean, the universe is matter and energy), what on earth could one possibly mean by “optimal level of computation”? Optimal for whom? To what end? Oh, and for this to happen, Kurzweil at least realizes, “information” would have to somehow overcome the limit imposed by the General Theory of Relativity on how fast anything can travel, i.e. the speed of light. Kurzweil here allows himself a bit of restraint: “Circumventing this limit has to be regarded as highly speculative.” No, dude, it ain’t just speculative, it would amount to a major violation of a law of nature. You know, the sort of thing David Hume labeled “miracles.” (Channel Sagan’s version of Hume’s dictum: extraordinary claims require extraordinary evidence.)
Would you like (another) taste of just how “speculative” Kurzweil can get? I’m glad you asked: “When scientists become a million times more intelligent and operate a million times faster, an hour would result in a century of progress (in today’s terms) ... Ultimately, the entire universe will become saturated with our intelligence. This is the destiny of the universe.” Oh? The universe has a destiny? And, pray, who laid that out?
At this point I think the reader who has been patient enough with me will have gotten a better idea of why I think Kurzweil is a crank (that and the fact that his latest book, Transcend: Nine Steps to Living Well Forever is co-authored with Terry Grossman, who is a proponent of homeopathic cures — nobody told Ray that homeopathy is quackery?).
Allow me, however, to conclude with what may well turn out to be a knockdown argument against the Singularity. As we have seen, the whole idea is that human beings will merge with machines during the ongoing process of ever accelerating evolution, an event that will eventually lead to the universe awakening to itself, or something like that. Now here is the crucial question: how come this has not happened already?
To appreciate the power of this argument you may want to refresh your memory about the Fermi Paradox, a serious (though in that case, not a knockdown) argument against the possibility of extraterrestrial intelligent life. The story goes that physicist Enrico Fermi (the inventor of the first nuclear reactor) was having lunch with some colleagues, back in 1950. His companions were waxing poetic about the possibility, indeed the high likelihood, that the galaxy is teeming with intelligent life forms. To which Fermi asked something along the lines of: “Well, where are they, then?”
The idea is that even under very pessimistic (i.e., very un-Kurzweil like) expectations about how quickly an intelligent civilization would spread across the galaxy (without even violating the speed of light limit!), and given the mind-boggling length of time the galaxy has already existed, it becomes difficult (though, again, not impossible) to explain why we haven’t seen the darn aliens yet.
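The back-of-envelope arithmetic behind the paradox is worth making explicit. Here is a quick sketch (all figures rough, standard ballpark values): even at a lumbering one percent of light speed, a colonization wave would sweep the entire galaxy in a tiny fraction of the galaxy's age.

```python
# Back-of-envelope Fermi-paradox arithmetic (rough ballpark figures only).
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way, light-years
GALAXY_AGE_YR = 1e10           # approximate age of the galaxy, years
EXPANSION_SPEED_C = 0.01       # pessimistic colonization wave speed: 1% of c

# A wave moving at 0.01c covers 0.01 light-years per year,
# so crossing the galaxy takes roughly 10 million years.
crossing_time_yr = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C

# That is a minute slice of the galaxy's history: about 0.1%.
fraction_of_history = crossing_time_yr / GALAXY_AGE_YR

print(f"Crossing time: {crossing_time_yr:.3g} years")
print(f"Fraction of galactic history: {fraction_of_history:.1%}")
```

Ten million years sounds enormous on a human scale, yet it is a mere thousandth of the galaxy's history — hence Fermi's puzzlement at the empty skies.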
Now, translate that to Kurzweil’s much more optimistic predictions about the Singularity (which allegedly will occur around 2045, conveniently just a bit after Kurzweil’s expected demise, given that he is 63 at the time of this writing). Considering that there is no particular reason to think that planet earth, or the human species, has to be the one destined to trigger the big event, why is it that the universe hasn’t already “awakened” as a result of a Singularity occurring somewhere else at some other time? Call that the, oh, I don’t know, “Pigliucci paradox.” It has a nice ring to it.

Sunday, April 10, 2011

Massimo’s Picks

by Massimo Pigliucci
* Philosopher Simon Blackburn on morality without gods.
* The different brains of conservatives and liberals.
* Prejudice evolved in pre-human primates.
* Republicans don't want to fund NPR, but it's okay to give half a billion dollars in federal money to the evangelical Liberty University.
* Morality pills, anyone?
* FermiLab makes huge new discovery before shutting down... Maybe.
* The Plutocracy's next target: medical care for the elderly and poor. Way to go America.
* Here is how the Republican appointed Supreme Court keeps screwing the rest of us. Best plutocracy money can keep in power.
* America: of the 1%, by the 1%, for the 1%. Plutocracy rules.
* Francis Fukuyama's first volume on the origins of political order and "the bad emperor problem."
* The role of philosophy in shaping the ruling classes. 2.5 millennia after Plato.
* Students should check their sense of entitlement at the door...
* Just for fun: Ayn Rand is really Dr. Doom.