About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Friday, February 28, 2014

Massimo's weekend picks!

* Facebook now allows users to pick among 50 genders! (But why do users need to pick any??)

* Very good reasons why atheists should not call religious people "mentally ill."

* A philosophical-quantitative approach to decide what to do with your life.

* Whole Foods: America's temple of pseudoscience? (Full disclosure: I shop there...)

* The inanity of "stand your ground" laws, and why you can't invoke John Locke to defend them!

* At least some invertebrates feel pain (though others very likely don't).

* Why is academic writing so, ahem, academic?

* Philosophy should hit the road, just like in ancient Greek times.

* Texting while walking bad for your health, and not (only) for the obvious reasons.

Wednesday, February 26, 2014

(Psychological) Gravity’s a Bitch: On Addiction and Philip Seymour Hoffman

by Steve Neumann

You are a comet. You were formed by material and processes in the deeps of time, hurled from your home star system out into the wider universe. You’re able to travel for long stretches through vast swathes of space relatively unencumbered; but as you approach certain sufficiently large celestial bodies, you feel the drag of their gravitational pull. Sometimes you get pulled in so close you can never break free from their influence, and are forever caught in their orbit. There’s even a chance you could perish altogether. 

These bodies are your weak spots — maybe even your blind spots — those areas in your life that cause you a good deal of what we normally consider an excessive amount of anxiety, stress and pain. You may see these bodies looming on the distant horizon, or you may never see them coming, realizing you’re under their control only after you’re already firmly in their grip. 

Gravity’s a bitch — psychological gravity, that is. And just like the gravity of physics, this type of gravity is pernicious, in that the closer you get to its field of influence, the harder it is to escape. But people can and do escape. Why is it that some people can, while others can’t? This question is as much philosophical as it is psychological, and deals with the always fun topic of freedom of the will. 

Poor Philip Seymour Hoffman. It’s tough to see anyone succumb to drug addiction, even anonymous, complete strangers; but I always seem to get an extra pang of loss when that person is some type of exceptional talent, maybe because talent is so rare, and there’s a fear that it might not appear again. But that feeling usually subsides after a few minutes, because I realize again and again that life is a profligate spender. Clearly heroin was Hoffman’s greatest gravitational weakness. The Hoffman-comet got stuck in its orbit and eventually disintegrated, after flaunting its radiance across our skies for years. Almost immediately upon hearing the news that Hoffman died of a drug overdose, people generally fell into two camps on the matter: one, that addiction is a disease and he succumbed to it as if it were cancer; and two, he did it to himself and therefore has only himself to blame.

Is one of these conclusions correct, to the exclusion of the other? Or is there a middle ground that lays blame on both — or neither? I don’t now remember where I first found it, but I came across a blog post from someone named Debbie Bayer who has “worked for 9 years as a psychotherapist in facilities treating addiction, mood disorders and eating disorders,” and who has “over 25 years experience working with 12 step communities.” The title of the post is “Philip Seymour Hoffman did not have choice or free will and neither do you.” Coming from an expert in addiction, that would seem to settle the issue. Except that it doesn’t. It is, however, a clear-cut example of the first opinion I mentioned above, and it may also be the prevalent one. 

In talking about the vast majority of us who haven’t succumbed to addiction, Bayer contends that our brains simply don’t respond in the same way that a hopeless addict’s brain does. This is undoubtedly true — and a tautology. Of course the addict’s brain responded differently; otherwise he wouldn’t have yielded to the narcotic temptation in the first place. But the stronger claim about addiction is that an addict is hardwired or genetically predisposed to it, with the implication that they are fated to be addicts, and nothing they do can commute that life sentence. Their comet-trajectory is fixed, and it’s just a matter of time before they fall headlong into a star of destruction, and their feathery ice-flame is forever extinguished.

But is this really true? Is it the case that someone born with a predisposition to addiction will inevitably become an addict, and likely die from it? Yes, gravity’s a bitch, but even comets get knocked out of their orbits every now and then. The universe is in motion — stars explode and die, jettisoning vast amounts of material into their environs; other stars are born and grow, greedily accumulating ambient material; other celestial bodies collide and spread debris in all directions. Space is awash in detritus. 

Likewise, our friends and family die, and we feel the jolting psychic reverberations of these events; other friends and family are born or otherwise enter our lives, providing opportunities to alter our trajectories; and strangers collide, for good or ill, and the results of these collisions can send us careening far and wide. In other words, there are ample opportunities for life to change our direction. 

But it could be argued that, even though we are constantly buffeted by events, by chance and circumstance, we still have to be cognizant enough to exploit them to our advantage. If I’m fated to be an addict, and to die at the hands of the dragon I’ve been chasing, then it doesn’t matter what life throws at me, right? If my best friend tragically dies from a heroin overdose, what is that to me? If my partner gives birth to a beautifully delicate little girl, what do I care whether or not I’m around to see her grow up and have a family of her own? If a shady dealer holds a knife to my throat or a gun to my head and robs me of all my money, what do I care if I have to steal in order to get my next fix? 

I’m sure many people cringe at the thought of these scenarios, but nevertheless they continue to believe that the addict has no choice in the matter. But if we take a look at what Bayer considers to be some of the mechanisms involved in the etiology of an addict’s fix, we might find some room for choice. She says that when withdrawal symptoms (e.g., physical distress, anxiety caused by emotional stress, etc.) reach a certain critical mass in the brain, then “the brain automatically cuts off the access to the frontal lobes (in a manner of speaking) and begins to direct the body to rebalance the stress, to find equilibrium.” But what happens before this point of no return is reached? Aren’t there opportunities for the trajectory of the addict’s comet to be redirected? Just because the addict is experiencing those negative emotions doesn’t mean that he must feel them, or at least that he must continue to feel them — why can’t those feelings change before it’s too late, before the addict texts his dealer? 

There is some research that shows that bad moods and good moods can lead to preferences for different kinds of foods. An example from the research shows that, “if given the choice between grapes or chocolate candies, someone in a good mood may be more inclined to choose the former while someone in a bad mood may be more likely to choose the latter.” Personal experience seems to bear this out. I’m usually stressed out by the end of the week, but instead of making a rejuvenating fruit smoothie packed with vitamins and minerals, I’ll grab my Glencairn glass and fill it with a dram of bourbon, preferably Mr. Hayden’s amber restorative. [1] Surely the same forces are in play when it comes to a narcotic like heroin. [2]

So the crux of the research is that “individuals in a positive mood, compared to control group participants in a relatively neutral mood, evaluated healthy foods more favorably than indulgent foods,” and that “individuals in positive moods who make healthier food choices are often thinking more about future health benefits than those in negative moods, who focus more on the immediate taste and sensory experience.” As a result, the researchers recommend what they call “mood repair motivation,” or getting the individual to focus their attention on more harmless ways to alter their mood. They suggest talking to friends or listening to music as mood boosters. 

Engaging with friends, listening to music — these and ten thousand other activities are the forces that present themselves as raw materials for us to exploit to our advantage. But it’s up to each individual to come up with the right recipe that will generate the desired changes to his trajectory. An addict’s choices are just as productive as the unchosen forces that have shaped him hitherto. And even though some of his key choices thus far have been determined by his predisposition to addiction, there still remains available to him the capacity to choose differently the next time. [3]

Ah, but that’s the rub, isn’t it? How does the addict go about making choices that will change the orbit of his suicidal comet? In a word, influence. He has to be able to be influenced by people and events. It’s certainly not easy, even for someone “addicted” to chocolate, much less heroin. But it’s possible. Psychological gravity’s a bitch, and it’s not going away; but, just like in the physical world, where we can achieve escape velocity from our planet’s gravitational pull, we can achieve escape velocity from the gravitational fields that populate our psychology. It takes conscious effort on the part of the addict and, yes, some luck; but even the smallest effort may have substantial repercussions, strong enough to jostle him onto a different path. 

In her post about Hoffman, Bayer says that it’s “time for all of us who got through unscathed to stop patting ourselves on the back for our genetic good luck, and it is time to stop judging those who were not born with the same good genes as defective.” I couldn’t agree more. Our American culture needs more of this kind of sensibility, which jibes nicely with the consequences that can be derived from Worldview Naturalism. So I would just add one thing: knowing what we know about the power of genetics and the causal web in which each of us is ensconced, those with better genetic good luck should make an extra effort to share responsibility with those who are struggling with the gravity of their hazardous situations. When we see their comet getting caught in a dangerous gravitational field, we should offer our best help and not fall victim ourselves to the fallacy of fatalism, the idea that no matter what we do the outcome will be the same. 

We can find strength and maybe even solace in the knowledge that, even though the future is fixed, we don’t know what that future will be, and only in the unfolding of our own choices does the future take shape. So it behooves us to make the best choices we can, for ourselves and others.

———

[1] My nod to Christopher Hitchens.

[2] Yes, this “surely” marks a weak spot in my argument — release the Hounds of Dennett!

[3] When I say that the addict can “choose differently,” I don’t mean to say that he can choose to do other than he did in the exact same circumstances. That’s why I added the “next time.”

Monday, February 24, 2014

Rationally Speaking podcast: Zach Weinersmith on His "SMBC" Webcomic

This episode features special guest Zach Weinersmith, author of "Saturday Morning Breakfast Cereal," a popular webcomic about philosophy and science.

Zach clarifies his position in the ongoing "philosophy vs. science" fights, poses a question to Julia and Massimo about the ethics of offensive jokes, and discusses BAHFest, his "Bad Ad Hoc Hypotheses" conference lampooning evolutionary psychology, not to mention his movie, "Starpocalypse."

Somehow along the way, the three take a detour into discussing an unusual sexual act.

Zach's pick: "Solaris" by Stanislaw Lem.

Friday, February 21, 2014

Massimo's weekend picks!

* The war on reason, by Paul Bloom - a piece that tries to put all the science-based skepticism about humans as reasoning creatures into a, ahem, reasonable perspective.

* Regret is the perfect emotion for our self-absorbed times, writes Judith Shulevitz in the New Republic.

* Newspapers are still the most important medium for understanding the world, says Peter Wilby in New Statesman.

* Perhaps we shouldn't insist on complete consistency for our moral beliefs, suggests Emrys Westacott at 3QuarksDaily.

* We should cultivate the ability to disregard things we can't do anything about, according to Christy Wampole in the New York Times.

* Pursuits of Wisdom: Six Ways of Life in Ancient Philosophy from Socrates to Plotinus, a book review by Rachana Kamtekar in the Notre Dame Philosophical Reviews.

* Joseph Stromberg (in the Smithsonian) arrives at a list of just five vitamins and supplements that are actually worth taking.

* String Theory and the Scientific Method, another review in the NDPR, by Nick Huggett.

* Forget about quantifying your self, says Josh Cohen in Prospect Magazine, and live your life instead.

* Scientific Pride and Prejudice, by Michael Suk-Young Chwe in the New York Times.

Tuesday, February 18, 2014

The missing shade of blue

by Massimo Pigliucci

This semester I’m teaching a graduate level course on “Hume Then and Now,” which aims at exploring some of the original writings by David Hume, particularly the Enquiry Concerning Human Understanding, and contemporary philosophical treatments of Humean themes, such as induction, epistemic justification, and causality.

I want to talk here about a particular episode belonging to Section 2 of the Enquiry, where Hume introduces the famous problem of the missing shade of blue, which is still discussed today in philosophy of mind. I think reflection on the problem itself, as well as some attempts to reconcile what appears to be a glaring contradiction in Hume’s own treatment of it, tells us something interesting about how philosophy is done, and sometimes overdone.

To set the stage, let me tell you a bit about the broader Humean project first. Hume proposed nothing less than an overhaul of the way we do philosophy, largely in reaction to what he (correctly, in my mind) perceived as the useless and obscure musings of “the schoolmen” who preceded him and who were still influential at the onset of the 18th century.

A cardinal point of Hume’s novel approach to philosophy was going to be to conduct, as the title of the book clearly states, an inquiry into how human beings understand things, because only by appreciating human epistemic limits can we produce sound philosophical reasoning. (This approach still inspires plenty of philosophers today, and even a number of scientists, such as social psychologists Jonathan Haidt and Joshua Greene.)

Hume explicitly and very clearly sets out his program for his readers in Section 1 of the Enquiry, appropriately entitled “Of the different species of philosophy.” But it is Section 2, “Of the origin of ideas,” that concerns us here.

It begins with the introduction of Hume’s famous distinction between ideas and impressions. Ideas are thoughts, while impressions are sensations. The first are derived from memory and abstract thinking, the second from the senses. Ideas, Hume argues, are (weaker) “copies” of impressions, and impressions are obtained directly from experience.

For instance, we can feel love for someone (an impression) and we can think about the concept of love (an idea). Clearly, says Hume, the feeling is much stronger than the concept, as expected if it were derivative. Ultimately, according to Hume, all knowledge comes from experience, which is why he is classified among the British empiricists, like Locke (as opposed to the continental rationalists, like Spinoza and Leibniz) — even though Hume actually had a fairly low opinion of Locke, whom he saw as still confused by the influence of the schoolmen.

Hume acknowledges that it would seem that human imagination is boundless, as we can think about all sorts of things that don’t actually exist (and cannot therefore be experienced), such as unicorns and gods. But he then argues that no matter how apparently fanciful our imagination is, all our complex ideas are in fact combinations of simpler ones, and those in turn can be traced to our experience.

Take god, for instance: “The idea of God, as meaning an infinitely intelligent, wise, and good Being, arises from reflecting on the operations of our own mind, and augmenting, without limit, those qualities of goodness and wisdom.”

Hume gives two arguments in support of his thesis: first, whenever we analyze a complex idea — such as that of god — we find that it is, in fact, traceable to a combination of simpler ones, of which we ultimately have direct experience (we have all seen intelligent, wise, and good people). Second, we know that when people have a defect in their sensorial perception they are incapable of forming the corresponding ideas: a blind man has no concept of color, because he has never had an impression of what a color feels like through his senses.

And now we come to the problem caused by the missing shade (for those of you who are following the original text, this is #16 of Section 2 of the Enquiry). I’ll let Hume’s beautifully clear prose speak for itself here (the italics are mine, and they will come in handy during the discussion below):

“There is, however, one contradictory phenomenon, which may prove that it is not absolutely impossible for ideas to arise, independent of their correspondent impressions. ... Suppose, therefore, a person to have enjoyed his sight for thirty years, and to have become perfectly acquainted with colours of all kinds except one particular shade of blue, for instance, which it never has been his fortune to meet with. ... Now I ask, whether it be possible for him, from his own imagination, to supply this deficiency, and raise up to himself the idea of that particular shade, though it had never been conveyed to him by his senses? I believe there are few but will be of opinion that he can: and this may serve as a proof that the simple ideas are not always, in every instance, derived from the correspondent impression; though this instance is so singular, that it is scarcely worth our observing, and does not merit that for it alone we should alter our general maxim.”

Okay, so what’s the big deal, you say? Well, it doesn’t take a sophisticated Hume scholar to figure out that our hero here goes through the following strange sequence: 1. He comes up with what he says is a general principle concerning human understanding; 2. He finds an exception to that principle; 3. He then discards that apparently damning finding as not worthy of consideration. And this despite the fact that Hume had told his readers a bit earlier that the new philosophy he is proposing is subject to empirical disconfirmation, just like the natural philosophy (aka, science) by which it is inspired! What’s going on here? Plenty of commentators have tried to figure it out, attempting to rescue Hume from an embarrassing self-contradiction. After all, this is arguably the most influential philosopher ever to write in the English language. Could it be he didn’t notice that he had successfully refuted his own cardinal doctrine, on which his entire philosophical work is based?

In the following we will look at three of several possible solutions to Hume’s blue dilemma, as summarized in a nice paper by John Nelson (published in 1989 in Hume Studies). I will then argue that there is a good chance that Nelson (and others) over-analyzes things, which is typical of, ahem, analytical philosophy. The answer may be much simpler, more satisfying, and more in synch with Hume’s own conception of “moral” philosophy as analogous to natural philosophy (“moral” at the time indicated all of philosophy other than science, not just ethics). And I will accomplish all of this while at the same time showing just how sensible Hume really was!

The first suggestion advanced by Nelson (rather informally, since he says he overheard it from a colleague…) is that Hume deliberately weakened his empiricist position, giving an opening to the rival rationalist approach through a sort of self-created Trojan horse. Admitting that the missing shade of blue could be conceived a priori, i.e. without recourse to experience, would, in fact, do just that. Now, why would Hume shoot himself in his philosophical foot? Because Hume’s philosophy also includes an important role for instincts (which in fact he discusses right after the section we are concerned with here), and instincts are innate, i.e. they precede direct sense experience. Nelson, however, immediately discards this possibility for the explanation of the missing shade’s problem. If that were really Hume’s intent then he would have constructed his subsequent arguments in the Enquiry in a much more rationalist-friendly fashion, which he most certainly didn’t do.

Option two, then. This one comes from R. Cummins, who proposed it back in 1978 (in philosophy things move slowly, as you know). Essentially, the suggestion is that one can reasonably interpret Hume’s “having an idea of X” (say, the missing shade of blue) as meaning “having a capacity to recognize X,” in which case the apparent contradiction would instantly disappear: Hume wouldn’t be providing an example that potentially undermines his main thesis; he would simply be entertaining the possibility that people are capable of recognizing that there is a missing shade of blue among a range of colors offered to them. The problem with this “solution,” as Nelson quickly points out, is that Hume himself is very clear that he considers the missing shade to be a “contradictory phenomenon,” which is entirely inconsistent with Cummins’ way out of the dilemma.

What then? Nelson has his own theory, of course. (Are you still with me? I promise, there will be a payoff, coming up shortly…) This one is subtle and clever. Indeed, I think, too subtle and clever. Nelson essentially suggests that Hume’s bringing up of a possible contradiction to his main thesis about how people form ideas (ultimately, from experience) is in perfect harmony with his even more general thesis that human understanding of “matters of fact” (i.e., everything outside of math and logic) is only probable, never certain. You see the twist? Hume, according to Nelson, is providing a (possible) example of how his own theories about a particular matter of fact — the ultimate origin of human ideas — could be mistaken, which proves his meta-point about there being no such thing as certainty about anything empirical. Very clever, very elegant, and very likely an unnecessary overreach on Nelson’s part.

My humble opinion — since I’m not a Hume scholar — is simply that we need to take Hume at his words. Re-read, if you please, the passages by him that I quoted above, and pay particular attention to the italicized parts: “it is not absolutely impossible,” “simple ideas are not always, in every instance,” “this instance is so singular, that it is scarcely worth our observing,” “does not merit that for it alone we should alter our general maxim.”

Did you see it? Hume simply found a hypothetical example (it would actually be very difficult to do the experiment, if you think about it) that doesn’t go well with his general account. But he thinks that the alleged exception is so contrived as to fail to make a general point, and he therefore (wisely) proceeds to ignore it.

This attitude is similar to the one unreflectively adopted by practicing scientists, and philosophers in general — especially those of the analytic tradition — could benefit from imitating Hume more often. Too many philosophers seem to think that when they find an apparent exception to a general concept, no matter how unlikely or artificial, they have “defeated” the general notion, that exotic counter-examples provide knock-out arguments against a given thesis. But in reality this is generally not the case, and philosophers should just relax about it.

For instance, you may recall my discussion of so-called “Gettier cases” in the context of a treatment of the concept of knowledge. Ever since Plato, knowledge has been defined as justified true belief. Then came Edmund Gettier, who in 1963 published a paper showing that there are instances of justified true belief that nevertheless do not seem to count as knowledge, contrary to Plato’s definition (read my original post if you are interested in the details). I do consider this an example of (minor) progress in philosophy, because as a result of Gettier-type cases we now have a more nuanced understanding of what counts as knowledge and why. But it can easily be shown that all Gettier-type exceptions to Plato’s concept of knowledge fall into a very narrow category, and they are all very highly contrived. What would a good scientist do, when faced with such narrow anomalies? Very likely precisely what Hume did: ignore them, at least provisionally, and focus instead on the general account to see just how much it can explain before having to be refined or expanded.

It should not come as a surprise, then, that the highly sensible David Hume, whose project was precisely to turn “moral” philosophy into something more akin to natural philosophy (i.e., science) would adopt the pragmatic approach that is so effective in the latter practice. If only more contemporary philosophers were more Humean in spirit I think the whole discipline would greatly benefit. As Hume himself put it, when he happened to be temporarily overwhelmed by a hopelessly complex philosophical problem, “I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hours’ amusement, I wou’d return to these speculations, they appear so cold, and strain’d, and ridiculous, that I cannot find in my heart to enter into them any farther.” Cheers!

Thursday, February 13, 2014

Is Alvin Plantinga for real? Alas, it appears so

by Massimo Pigliucci

I keep hearing that Notre Dame philosopher and theologian Alvin Plantinga is a really smart guy, capable of powerfully subtle arguments about theism and Christianity. But every time I look, I am dismayed by what I see. If this is the best that theology can do, theology is in big trouble. (Well, to be fair, it has been at least since David Hume.)

Recently, Plantinga has been interviewed by another Notre Dame philosopher with theistic leanings, Gary Gutting, for the New York Times’ “Stone” blog. I often enjoy Gutting’s columns, for instance his argument for why the Pope should revisit the Catholic Church’s position on abortion. Then again, whenever Gutting veers close to theism I have no problem taking him to task either.

In this case, Gutting’s interview is reasonably well structured, and he did ask some serious questions of Plantinga. It is the latter’s performance that left me aghast. Here is why.

The first question was based on recent surveys that put the proportion of atheists among academic philosophers at around 62%, slightly above what it is for scientists (it varies from sub-discipline to sub-discipline, too). Plantinga concedes that this is problematic for theism, considering that philosophers are the ones who are most familiar with all the arguments for and against the theistic position. So what does he do? He quotes Richard Dawkins, quoting Bertrand Russell, who famously said that if he found himself in front of god after his death he would point out to him that there just wasn’t enough evidence.

And here comes Plantinga’s first non sequitur: “But lack of evidence, if indeed evidence is lacking, is no grounds for atheism. No one thinks there is good evidence for the proposition that there are an even number of stars; but also, no one thinks the right conclusion to draw is that there are an uneven number of stars. The right conclusion would instead be agnosticism.” Right, except for the not-so-minor detail that the priors for there to be an even or odd number of stars are nowhere near the priors for there to be or not to be a god. More on this in a second, when we come to teapots.

Following up on the above (puzzling, to say the least) response, Gutting pointed out that the analogy with “even-star-ism” is a bit odd, and that atheists would bring up instead Russell’s famous example of a teapot orbiting the sun. Should we be agnostic about that? No, says Plantinga, because we have very good reasons to reject the possibility based on what we know about teapots and what it takes to put one in orbit around the sun. Precisely! Analogously — and this was Russell’s point — we have very good reasons not to take seriously the concept of a supernatural being (see comment above about priors). To see why, let’s bring in my favorite analogy. My Facebook profile (reserved for friends and family, please follow me on Twitter…) includes the usual question about religion, to which my response is that I’m an a-theist in the same way in which I am an a-unicornist: this is not to say that I know for a fact that nowhere in the universe there are horse-like animals with a single horn on their head. Rather, it is to say that — given all I know about biology, as well as human cultural history (i.e., where the legend of unicorns came from) — I don’t think there is any reason to believe in unicorns. That most certainly doesn’t make me an agnostic about unicorns, a position that not even Plantinga would likely feel comfortable endorsing. (I am, however, for the record, agnostic about even-star-ism. So, there.)
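The work that priors are doing in this argument can be made concrete with a toy Bayesian calculation (my own illustrative sketch, not anything in Russell’s or Plantinga’s texts; the numbers are made up for the example). With a genuinely uninformative prior and no discriminating evidence, the posterior stays at 50/50 and agnosticism is the right attitude; with a prior driven very low by background knowledge, the same lack of evidence leaves you with confident disbelief:

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# Even number of stars: nothing we know favors "even" over "odd", so the
# prior is 0.5, and no available observation discriminates between the
# two hypotheses (equal likelihoods). The posterior stays at 0.5:
# agnosticism is the rational stance.
even_stars = posterior(prior=0.5, likelihood_h=1.0, likelihood_not_h=1.0)

# Orbiting teapot: background knowledge (what teapots are, what it takes
# to put one in solar orbit) makes the prior tiny — 1e-9 is an arbitrary
# stand-in. The same uninformative evidence leaves the posterior tiny
# too: rejection, not agnosticism.
teapot = posterior(prior=1e-9, likelihood_h=1.0, likelihood_not_h=1.0)
```

The point of the sketch is that “lack of evidence” has very different consequences depending on where you start: Plantinga’s star example and Russell’s teapot differ precisely in their priors, which is why the former warrants agnosticism and the latter does not.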

Gutting then brings up the usual trump card of atheists: the problem of evil (which, to be precise, is actually a problem only for the Judeo-Christian-Muslim concept of god, and therefore not really an argument for atheism per se). Plantinga admits that the argument “does indeed have some strength” but responds that there are also “at least a couple of dozen good theistic arguments” so that on balance it is more rational to be a theist.

Gutting, however, had to do quite a bit more prodding to get at least one example sampled from the alleged couple dozen on offer. First off, Plantinga states very clearly that the best reason to believe in (his) god is not a rational argument at all, but the infamous sensus divinitatis of Calvinistic memory, i.e. the idea that people experience god directly as a result of “an inborn inclination to form beliefs about God.”

This is so weak that it is hardly worth rebutting, but let’s elucidate the obvious for Prof. Plantinga anyway. To begin with, it is not clear even what counts as a sensus divinitatis in the first place. Does it equate to simply believing in god? If so, the “evidence” is circular. Or does it mean that some people have had some kind of direct and tangible experience of the divine, like witnessing a miracle? In that case, I’m pretty sure the number of such experiences is far less than Plantinga would like, and at any rate plenty of people claim to have seen UFOs or to have had out-of-body experiences. Neither of which is a good reason to believe in UFOs or astral projection. Lastly, we begin to have perfectly good naturalistic explanations of the sensus divinitatis, broadly construed as the projection of agency where it doesn’t belong. The latter truly seems to me a near-universal characteristic of human beings, but it is the result of a cognitive misfire, as when we immediately think that someone must have made that noise whose origin currently escapes us (ghosts? a lurking predator?). It is sensible to think that this compulsive tendency to project agency was adaptive during human history, probably saving a lot of our ancestors’ lives. Better to mistake the noise made by the wind for a predator and take cover than to dismiss the possibility out of too much skepticism and end up as the dinner entree of said predator.

So Gutting pushed a bit more: could Plantinga please give us an example of at least one good theistic argument among those several dozens he seems to think exist? Well, all right, says the esteemed theologian, how about fine tuning? That does move the discussion a bit, as the fine tuning problem is a genuine scientific issue, which has by no means been resolved by modern physics (see recent Rationally Speaking entries on related topics).

Of course invoking fine tuning in support of theism is simply a variant of the old god-of-the-gaps argument, one that is increasingly weak in the face of continuous scientific progress, an obvious observation that Gutting was smart enough to make. Besides, even if it should turn out that fine tuning is best seen as evidence of intelligent design, there are alternatives on offer, some of which are particularly problematic for Christian theists.

Plantinga does concede that god-of-the-gaps arguments are a bit weak, but insists: “We no longer need the moon to explain or account for lunacy; it hardly follows that belief in the nonexistence of the moon (a-moonism?) is justified.” Wow. I think I’m going to leave this one as an exercise to the reader (hint: consider the obvious disanalogy between the moon — which everyone can plainly see — and god, which…).

Eventually, Plantinga veers back toward the (alleged, in his mind) problem of evil, and takes it head on in what I consider a philosophically suicidal fashion: “Maybe the best worlds contain free creatures some of whom sometimes do what is wrong. Indeed, maybe the best worlds contain a scenario very like the Christian story. … [insert brief recap of “the Christian story”] … I’d say a world in which this story is true would be a truly magnificent possible world. It would be so good that no world could be appreciably better. But then the best worlds contain sin and suffering.”

Seriously? The argument boils down to the fact that Plantinga, as a Christian, finds the Christian story “magnificent,” that is, aesthetically pleasing, and that’s enough to establish that this is the best of all possible worlds. Maybe it’s just me, but I don’t find a world with so much natural and human imposed suffering “magnificent” at all, and it seems to me that if an all-powerful, all-knowing, and all-good god were responsible for said world he ought to be resisted at all costs as being by far the greatest villain in the history of the universe. But that’s just me.

Moving on, Gutting at one point asks Plantinga why — if atheism is so questionable on rational grounds — so many philosophers, i.e. people trained in the analysis of rational arguments, cling to it. Plantinga admits to not being a psychologist, but ventures to propose that perhaps atheists reject the idea of god because they value their privacy and autonomy too much: “God would know my every thought long before I thought it. … my actions and even my thoughts would be a constant subject of judgment and evaluation.” Well, I’m no psychologist either, but by the same token theists like Plantinga (and Gutting, let’s not forget) delude themselves into believing in god because they really like the idea of being judged every moment (especially about what they do in the non-privacy of their bedrooms) and much prefer to be puppets in the hands of a cosmic puppeteer. Okay, suit yourselves, boys, just don’t pretend that your psychological quirks amount to rational arguments.

And we then come to “materialism,” which Gutting thinks is a “primary motive” for being an atheist. Here things get (mildly) interesting, because Plantinga launches his well known attack against materialism, suggesting that evolution (of all notions!) is incompatible with materialism.

Come again, you say? Here is the “argument” (I’m using the term loosely, and very charitably). How is it possible, asks the eminent theologian, that we are material beings and yet are capable of beliefs, which are clearly immaterial? To quote:

“My belief that Marcel Proust is more subtle than Louis L’Amour, for example? Presumably this belief would have to be a material structure in my brain, say a collection of neurons that sends electrical impulses to other such structures as well as to nerves and muscles, and receives electrical impulses from other structures. But in addition to such neurophysiological properties, this structure, if it is a belief, would also have to have a content: It would have, say, to be the belief that Proust is more subtle than L’Amour.”

This, of course, is an old chestnut in philosophy of mind, which would take us into much too long a detour (but in case you are interested, check this). There are, however, at least two very basic things to note here. First, a materialist would not say that a belief is a material structure in the brain, but rather that beliefs are instantiated by given material structures in the brain. This is no different from saying that numbers, for instance, are concepts that human beings think of by means of their brains; they are not material structures in human brains. Second, as the analogy with numbers may have hinted, a naturalist (as opposed to a materialist; materialism is a sub-set of naturalist positions) has no problem allowing for some kind of ontological status for non-material things, like beliefs, concepts, numbers and so on. Needless to say, this is not at all a concession to the supernaturalist, and it is a position commonly held by a number of philosophers.

Plantinga goes on with his philosophy of mind 101 lesson and states that the real problem is not with the existence of beliefs per se, but rather with the fact that beliefs cause actions. He brings up the standard example of the belief that there is some beer in the fridge, which — together with the desire (another non-material thingy, instantiated in another part of the brain!) to quench one’s thirst — somehow triggers the action of getting up from the darn couch, walking to the fridge, and fetching the beer (presumably, to get right back to the couch). Again, the full quote so you don’t think I’m making things up:

“It’s by virtue of its material, neurophysiological properties that a belief causes the action. It’s in virtue of those electrical signals sent via efferent nerves to the relevant muscles, that the belief about the beer in the fridge causes me to go to the fridge. It is not by virtue of the content (there is a beer in the fridge) the belief has.”

But of course the content of the belief is also such in virtue of particular electrical signals in the brain. If those signals were different we would have a different belief, say that there is no beer in the fridge. Or is Plantinga suggesting that it is somehow the presence of god that gives content to our beliefs? And how, exactly, would that work anyway?

Whatever, you may say, didn’t I mention something about evolution above? Yes, I’m coming to that. Here is Plantinga again, after Gutting suggested that perhaps we get a reasonable correspondence between beliefs and actions because natural selection eliminated people whose brains were wired so as to persistently equip them with the wrong belief (e.g., believing that the beer is in the refrigerator when it’s not, because you already drank yourself into oblivion last night):

“Evolution will select for belief-producing processes that produce beliefs with adaptive neurophysiological properties, but not for belief-producing processes that produce true beliefs. Given materialism and evolution, any particular belief is as likely to be false as true.”

The first part of this is true enough, and consistent with the fact that we do, indeed, get a lot of our natural beliefs wrong. To pick just one example among many, most people, for most of human history, believed that they were living on a flat surface. It took the sophistication of science to show otherwise (so much for the “science is just commonsense writ large” sort of platitude). It is the last part of Plantinga’s statement that is bizarre: 50-50 chances that our beliefs are true or false, given materialism and evolution? Where the heck do those priors come from?

But it gets worse: “If a belief is as likely to be false as to be true, we’d have to say the probability that any particular belief is true is about 50 percent. Now suppose we had a total of 100 independent beliefs (of course, we have many more). Remember that the probability that all of a group of beliefs are true is the multiplication of all their individual probabilities. Even if we set a fairly low bar for reliability — say, that at least two-thirds (67 percent) of our beliefs are true — our overall reliability, given materialism and evolution, is exceedingly low: something like 0.0004. So if you accept both materialism and evolution, you have good reason to believe that your belief-producing faculties are not reliable.”

Again, wow. Just, wow. This is reminiscent of the type of silly “calculations” that creationists do to “demonstrate” that the likelihood of evolution producing a complex structure like the human eye is less than that of a tornado going through a junkyard and assembling a perfectly functional Boeing 747 (the original analogy is actually due to physicist Fred Hoyle, a provenance that doesn’t make it any better).

The chief thing that is wrong with Plantinga’s account is that our beliefs are far from independent of each other. Indeed, human progress in scientific and other (e.g., mathematical) knowledge depends crucially on the fact that we continuously build (and revise, when necessary) on previously held beliefs. In fact, there is an analogous reason why the tornado-in-the-junkyard objection doesn’t work: natural selection too builds on previous results, so that calculating the probability of a number of independent mutations occurring by chance in the right order is a pointless exercise, and moreover one that betrays the reasoner’s utter incomprehension of the theory of evolution. Just like Plantinga apparently knows little about epistemology.
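For what it’s worth, Plantinga’s arithmetic does check out under his own premises: his 0.0004 is simply the binomial tail probability of getting at least 67 true beliefs out of 100 independent fair-coin flips. A minimal sketch in Python (purely illustrative; the numbers are Plantinga’s, and the independence assumption baked into the calculation is precisely what the previous paragraph disputes):

```python
from math import comb

# Plantinga's setup: 100 beliefs, each independently true with
# probability 0.5 (his premise, not a fact about cognition).
# Probability that at least two-thirds (67) of them are true:
n, threshold = 100, 67
tail = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n
print(f"{tail:.4f}")  # 0.0004, matching his "exceedingly low" figure
```

The number only looks scary because every belief is modeled as an independent coin flip; drop that assumption, as we must, and the multiplication of probabilities (and the tiny result) evaporates with it.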

So, to recap, Plantinga’s best “arguments” are: we don’t have a scientific explanation for the apparent fine tuning of the universe (true, so?); we don’t have a philosophical account and/or a scientific explanation of the problem of “aboutness” in philosophy of mind (again, true, so?); some people claim to have a mysterious sensus divinitatis (oh boy). Therefore, not only god, but the Christian god in particular, exists. Equipped with that sort of reasoning, I’m afraid Plantinga would fail my introductory critical thinking class. But he is a great theologian.

Monday, February 10, 2014

Rationally Speaking podcast: Max Tegmark and the Mathematical Universe Hypothesis

Those among us who loathed high school calculus might feel some trepidation at the premise in this week's episode of Rationally Speaking. MIT Physicist Max Tegmark joins us to talk about his book "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality" in which he explains the controversial argument that everything around us is "made of math."

Max, Massimo and Julia explore the arguments for such a theory, how it could be tested, and what it even means.

Max's pick: "Surely You're Joking, Mr. Feynman! Adventures of a Curious Character."

Friday, February 07, 2014

Massimo's suggested readings for the weekend

* The fascinating mess that is contemporary fundamental physics.

* Are we close to an awful Gattaca-type scenario for the future of humanity?

* How to pick among experts who disagree.

* CBT beats the crap out of psychoanalysis, at least in the case of bulimia.

* In defense of... me! By a theist!! (Oh boy, the New Atheists are really gonna be pissed off now.)

* Julian Baggini on the philosophy of food.

* The age of infopolitics and our digital selves.

* The Pope should rethink the Catholic Church's stand on abortion, says Catholic philosopher.

* Kiss me, I'm an atheist. The type of PR the atheist movement really needs.

* Happiness and its discontents, a critique of our obsession with it.

* Fifty States of Fear: why Americans are being encouraged to be afraid of the wrong things.

Wednesday, February 05, 2014

On Coyne, Harris, and PZ (with thanks to Dennett)

by Massimo Pigliucci

Oh dear, I pissed off the big shots among the New Atheists — again. If you are on Twitter or happen to have checked a couple of prominent NA blogs recently, you will have noticed a chorus comprised of none other than Jerry Coyne, Sam Harris, PZ Myers and, by way of only a passing snarky comment, Richard Dawkins — all focused on yours truly. I’m flattered, but what could I have possibly done to generate such a concerted reaction all of a sudden?

Two things: I have published this cartoon concerning Sam Harris, just to poke a bit of (I thought harmless, good humored, even!) fun at the guy, and — more substantively — this technical, peer reviewed, paper in a philosophy journal devoted to a conceptual analysis and criticism of the NA movement, from the point of view of a scientist, philosopher, and, incidentally, atheist. (The same issue of that journal carries a number of other commentaries, from theists and atheists alike.)

I watched the Twitter/blog mini-storm with some amusement (decades in the academy have forced me to develop a rather thick skin). The event was characterized by the usual back and forth between people who agreed with me (thank you) and those who didn’t (thank you, unless your comments were of the assholic type). I thought there was no point in responding, since there was very little substance to the posts themselves. But then I realized that the mini-storm was making precisely my point: the whole episode seemed to be a huge instance of much ado about nothing, but nasty. So I decided a counter-commentary might be helpful after all. Here it is, organized by the three major authors who have lashed out at me in such an amusing way. I’ll start with a point-by-point response to Coyne’s longest blog post, followed by a more cursory commentary on PZ (who actually makes most sense out of the whole bunch, and indeed was himself mentioned only in passing in my paper), and ending of course with Harris, in whose case I will simply let Dan Dennett (another NA, did you know?) do the job for me. (If, however, you are tired of the somewhat childish back and forth, by all means skip to part IV below.)

Part I: Coyne

Jerry begins thus: “when I have read Massimo’s site, Rationally Speaking, I’ve been put off by his arrogance, attack-dogishness (if you want a strident atheist, look no further than Massimo), and his repeated criticisms of New Atheists because We Don’t Know Enough Philosophy.”

While I plead guilty to the latter charge, to be accused of arrogance, attack-dogishness and stridency by Jerry Coyne, of all people, is ironic indeed. Please, go ahead and read my critical paper, compare it with what Jerry wrote, and then measure the two against your own scale of arrogance, attack-dogishness and stridency. Let me know the results.

“He has just published a strong attack on New Atheists (mentioning me, albeit briefly)” — It wasn’t an “attack,” Jerry, it was a criticism, though apparently you and other (though not all) NAs can’t see the difference anymore. And were you disappointed that I mentioned you only briefly? I apologize, I’m trying to make amends now.

“It’s a nasty piece of work: mean-spirited and misguided. It’s also, I suspect, motivated by Pigliucci’s jealousy of how the New Atheists get more attention and sell more books than he does” — First, see my comment above along the lines of the pot calling the kettle black. Second, accusing someone of jealousy is surely a despicable type of ad hominem, and it is easily refuted on empirical grounds. If the motivation for my criticisms truly was jealousy of people who sell more books than I do, why on earth would I praise Dennett, or Sean Carroll, or plenty of other best selling authors I write about on my blog or interview on my podcast? Could it be that my focus on Harris & co. is the result of actual, substantive, disagreements with their positions, and not stemming from personal rancor?

“I have to say that the paper just drips and seethes with jealousy and the feeling that Pigliucci considers himself neglected because philosophy is marginalized by New Atheists.” — Another example of just how dripping and seething Coyne himself can be, though I’m pretty sure he isn’t jealous of me, at least.

Jerry notes that I mention Hitchens, another prominent NA, only in passing, adding “why did he mention Trotsky and Iraq rather than, say, Mother Teresa or the Elgin Marbles? And of course the phrase ‘notoriously excelled’ is simply a gratuitous slur.” I mentioned Trotsky and Iraq because I wanted to make the point that someone who swings that far in opposite directions on political grounds is more of a (incoherent) polemicist than anything else, and Mother Teresa simply had nothing to do with it. As for my phrase being a gratuitous slur, I can certainly see how it could be interpreted that way. Or it could be taken as an accurate description of Hitchens’ writing career.

Commenting on a specific paragraph from my paper Jerry then adds: “it’s simply wrong to claim that a) believers don’t see God as a real entity who interacts with the world in certain ways (making that a hypothesis), and b). that one can’t test the supernatural, an old and false argument often used by Eugenie Scott. In fact, believers are constantly adducing ‘evidence’ for God, be it Alvin Plantinga’s claim that our senses couldn’t detect truth without their having been given us by god.”

But I had made neither claim, as ought to be crystal clear to anyone reading the paragraph that Jerry quoted before proceeding to completely misunderstand it. I had simply said that Dawkins et al. are wrong to consider “the God hypothesis” as anything like a scientific hypothesis (as opposed to a semi-incoherent ensemble of contradictory statements easily failing the test of reason and evidence). That is, my complaint was, and has always been, that NAs simply give too much credit to their opponents when they raise religious talk to the level of science. Coyne simply, willfully it seems to me, misread what I wrote and very plainly intended.

Along the same lines, Jerry later on adds: “they have reasons for being Christians, Jews, etc., even if those reasons are simply ‘I was brought up that way.’” Indeed. And how does that amount to a scientific hypothesis, as opposed to self-evident cultural bias?

More: “If you think the Moral Law is evidence for God, you can examine whether our primate relatives also show evidence for morality, and whether and how much of human morality really is innate. That’s science!” No, it ain’t. Does Jerry truly not see that the believer can simply say that the observation of prosocial behavior in other primates is no contradiction of the statement that God gave us the Moral Law? And does he truly not see the difference between morality (a complex set of behaviors and concepts that require language and cultural evolution) and mere prosociality (which we share with a number of other species, including several non-primates)? Incidentally, the fact that the latter was likely the evolutionary antecedent of the former (which I think is very reasonable to believe) in no way undermines the idea that there is an important distinction between the two.

“Science deals with the supernatural all the time. What else are scientific investigations of ESP and other paranormal phenomena, or studies of ‘spiritual healing’ and intercessory prayer?” Yes and no. First of all, there is nothing inherently supernatural in claims of telepathy and the like. The occurrences, if real, could simply be the result of unknown natural phenomena. Second, yes, we have tested the effects of intercessory prayer, and of course have come up empty-handed. But what always struck me as bizarre about such experiments is how ill-conceived they are. They couldn’t possibly be testing for supernatural effects mediated by a God who would presumably know that we mere mortals are about to test His power. Why would He lend himself to such games? And if we had, in fact, discovered an effect, I bet atheists (myself and Jerry included) would have immediately offered alternative, naturalistic explanations, along the lines of Arthur Clarke’s famous Third Law.

Next: “‘most of the New Atheists haven’t read a philosophy paper’? I seriously doubt that. I won’t defend myself on this count, for I’ve read many, and so, I suspect, have Dawkins, Harris, Stenger, and others seen as important New Atheists.” Well, I take Jerry at his word, though his philosophy readings surely don’t seep through his blog in any clear way. I know Harris has read some philosophy as an undergraduate, but has clearly not understood it (this isn’t a gratuitous statement, just a conclusion derived from having spent far too much time reading what Harris has written. As you’ll see below, Dan Dennett agrees with me, and then some!). As for Dawkins, I’ve met him several times, the last time at the naturalism workshop organized by Sean Carroll, where he plainly told me that he doesn’t read philosophy.

“The charge of anti-intellectualism is snobbish, and what Pigliucci means by it is that New Atheists harbor a ‘lack of respect’ for his field: philosophy.” — This constantly amazes me, especially coming from Jerry, who really ought to know better. I would perhaps understand his comment if I were a philosopher with no science background, presumably just envious of the prestige of science. But I am also a scientist, indeed with a specialization in Jerry’s own discipline of evolutionary biology. How, then, could it possibly make sense to accuse me of wanting to defend “my” field from encroachment from, ahem, “my other” field??

Now, not all this sniping is entirely wasted, for Jerry and I certainly agree on the following: “What’s important is to distinguish those disciplines that enforce reasons for believing in things (disciplines like science, math, and philosophy) from those that don’t (postmodern literary criticism, theology, etc.),” which you would think ought to be more than enough for the two of us to find common ground. It’s really unfortunate that it isn't.

Jerry continues by giving me some credit for a broader view of knowledge — what used to be called scientia, which would actually go a long way toward reconciling our diverging views. But then says: “This is pretty much o.k. except that Pigluicci [sic] includes ‘arts’ and ‘first- person experience,’ with ‘scientia’ as ways of understanding. ‘First-person experience,’ of course, includes the many forms of revelation used to justify the existence of God, and while ‘arts’ are ways of ‘feeling,’ it’s arguable about whether the kind of understanding they yield is equivalent to the kind of understanding produced by physics and philosophy, or, for that matter, by revelation.” Except that I most explicitly do no such thing! In a long essay in Aeon, where I expand on this, I make a distinction between knowledge and understanding, and very clearly say that scientia is about knowledge, while the arts, the humanities and first-person experience — together with knowledge — form understanding. How could Jerry so blatantly confuse the two, or fail to get the not at all subtle distinction I was trying to make?

Toward the end of Jerry’s rant we get to a downright surreal turn: “I was once favorably disposed to Pigliucci.” Seriously? When, exactly? Either Coyne is lying or he has a very short memory. Indeed, our disagreements and discord date from way before either of us started writing publicly about atheism and related matters. It goes back to Jerry’s conservative take on the state of evolutionary theory, where he is a staunch defender of the so-called Modern Synthesis (of the 1920s through ‘40s), while I and others have advocated what we refer to as an Extended Synthesis that takes seriously the many empirical and conceptual advances in biology over the past six decades (instead of treating them as cherries added as decorations onto the already finished cake).

But the problem is that Jerry is obviously just not reading very carefully what I’m writing, reaching for his keyboard instead in a knee-jerk reaction. Otherwise he wouldn’t complain: “if New Atheism has been such a miserable failure, why does Pigliucci admit this?” going on to cite me as saying that NA books have been very successful. Does Coyne not realize that the number of books sold isn’t the only, or in some cases the most important, measure of “success”? Because if he doesn’t, then he ought to wake up to the realization that Deepak Chopra and Oprah Winfrey have probably outsold all the NAs combined. I was talking about what I see as a conceptual failure of the NA movement (remember, the kerfuffle is about a technical paper published in a somewhat obscure philosophy journal!), not whatever it is Jerry thinks I was talking about.

The last thing to notice is that Jerry managed to misspell my name a whopping eight times. He really doesn’t like me!

Part II: PZ

Let us now turn to the far shorter (and much less nasty) post by PZ, rather amusingly entitled “Philosophism,” which is PZ’s counter to the accusation of scientism. And he is, of course, right. Some philosophers are surely guilty of philosophism, just like some scientists are guilty of scientism. The irony here is that when I got into this business I thought (very naively, as it turned out) that my new colleagues in philosophy would be glad to have a member of their profession who was also a scientist, and that my colleagues in science would regard me as one of their own who might be a trusty bridge to the “other culture” (as C.P. Snow famously put it). What happened instead, with a few exceptions, is that philosophers tend to consider me too much of a scientist, while scientists consider me too much of a philosopher. Life, don’t talk to me about life.

At any rate, on the issue of scientism — and of the role of philosophy — there is much that PZ and I agree on. He correctly notes, for instance, that he has himself criticized people like Krauss, Hawking and Pinker. It is not, however, correct to say that “Krauss has retracted his sentiments,” as anyone can plainly see by reading his non-apology (prompted by Dan Dennett) in Scientific American. PZ also wonders why I don’t mention Pinker in my NA paper, which is strange, since the paper is about the foremost figures who have initiated and defined NA, and Pinker — as brilliant and controversial a writer as he is — is simply not among them.

PZ more generally accuses me of cherry picking, sparing from my criticisms in the paper people like Susan Jacoby, David Silverman, Hemant Mehta, Greta Christina, Ibn Warraq, and Ophelia Benson. But, again, with all due respect to all of these people, they aren’t the founding fathers (yeah, they were all old white men) of NA, nor have they been quite as influential in terms of the public face of the movement — at least in terms of the Coynian ultimately meaningful measure of number of books sold.

PZ is correct to point out that there is indeed a range of attitudes toward philosophy among atheists, and he is a prime example. But, again, this is simply not the case, by and large, where my big targets are concerned, despite his contention that Stenger’s (again, not one of the founding fathers) work is full of history and philosophy. History, yes; philosophy, not really.

Toward the end of his post PZ tells his readers that I have two criteria for criticism in mind: “1) We’re popular. That’s an accusation that has me stumped; would we be more respectable if nobody liked us at all? 2) We’re scientists and take a scientific approach. Well, we’re not all scientists, and what’s wrong with looking at an issue using evidence and reason?”

(1) is, of course, another example of Coyne’s confusion between popularity and soundness of ideas. I’m not accusing the NAs of being popular. They obviously are, and good for them. I’m accusing (some of) them of being sloppy thinkers when it comes to the implications of atheism and of a scientific worldview.

As for (2), I never said that all the NAs are scientists (indeed, my paper explicitly excluded Hitchens from the analysis on precisely those grounds — which as we’ve seen didn’t please Coyne). But a major point of the paper was to discuss what I see as a tendency of NA qua movement (i.e., founding fathers and many followers) toward scientism, a tendency that has been codified precisely by the sciency types among the NA (it surfaces very clearly in the many comments that both Coyne and PZ got to this latest round of posts). Finally, of course there is nothing at all wrong with looking at an issue using evidence or reason, nor am I aware of ever having written anything to that effect.

Part III: Harris

And now let’s get to Sam Harris. Readers of this blog know exactly what I think of him as an intellectual (I have no opinion of him as a person, since I’ve never met him). But what follows is a (long, apologies) list of quotes from a single review of Harris’ latest effort, his booklet on free will, penned by none other than Dan Dennett. While I have to admit to being human, and to having therefore felt a significant amount of vindication reading what Dan had to say, I reprint representative passages below to make three points:

1. I am clearly not the only one to think that Harris’ philosophical forays are conceptually confused, to say the least.

2. Please notice the mercilessly sarcastic tone adopted by Dennett throughout. This is at least as heavy an attack as anything I’ve written about Harris, and arguably much more so. But do you think Dennett has therefore been excoriated by Harris, Coyne & co. for his message or the form in which it was delivered? (Yeah, that was a rhetorical question, glad you got it.)

3. These quotes, of course, do not constitute Dennett’s argument (for that you’ll have to read his full, long, essay). But they are representative of why I think Dan has been much harsher than I have been with Harris (for good reasons, in my mind).

Incidentally, Dennett includes the following people as others who hold ideas similar to Harris’ and are equally misguided: Wolf Singer, Chris Frith, Steven Pinker, Paul Bloom, Stephen Hawking, Albert Einstein (!), Jerry Coyne, and Richard Dawkins. Here is the man himself:

I think we have made some progress in philosophy of late, and Harris and others need to do their homework if they want to engage with the best thought on the topic.

I would hope that Harris would pause at this point to wonder—just wonder—whether maybe his philosophical colleagues had seen some points that had somehow escaped him in his canvassing of compatibilism. As I tell my undergraduate students…

There are mad dog reductionist neuroscientists and philosophers who insist that minds are illusions, pains are illusions, dreams are illusions, ideas are illusions—all there is is just neurons and glia and the like.

Again, the popular notion of free will is a mess; we knew that long before Harris sat down to write his book.

These are not new ideas. For instance I have defended them explicitly in 1978, 1984, and 2003. I wish Harris had noticed that he contradicts them here, and I’m curious to learn how he proposes to counter my arguments.

Harris should take more seriously the various tensions he sets up in this passage. It is wise to hold people responsible, he says, even though they are not responsible, not really.

There are complications with all this, but Harris doesn’t even look at the surface of these issues.

The rhetorical move here is well-known, but indefensible.

Even the simplest and most straightforward of Harris’s examples wilt under careful scrutiny.

If this isn’t pure Cartesianism, I don’t know what it is. His prefrontal cortex is part of the I in question. Notice that if we replace the “conscious witness” with “my brain” we turn an apparent truth into an obvious falsehood: “My brain can no more initiate events in my prefrontal cortex than it can cause my heart to beat.”

There are more passages that exhibit this curious tactic of heaping scorn on daft doctrines of his own devising while ignoring reasonable compatibilist versions of the same ideas.

If Harris is arguing against it, he is not finding a “deep” problem with compatibilism but a shallow problem with his incompatibilist vision of free will; he has taken on a straw man, and the straw man is beating him.

Once again, Harris is ignoring a large and distinguished literature that defends this claim.

His book also seems to have influenced his own beliefs and desires: writing it has blinded him to alternatives that he really ought to have considered.

I have thought long and hard about this passage, and I am still not sure I understand it, since it seems to be at war with itself.

Harris notes that the voluntary/involuntary distinction is a valuable one, but doesn’t consider that it might be part of the foundation of our moral and legal understanding of free will. Why not? Because he is so intent on bashing a caricature doctrine.

Here again Harris is taking an everyday, folk notion of authorship and inflating it into metaphysical nonsense.

Entirely missing from Harris’s account—and it is not a lacuna that can be repaired—is any acknowledgment of the morally important difference between normal people (like you and me and Harris, in all likelihood) and people with serious deficiencies in self-control.

I cannot resist ending this catalogue of mistakes with the one that I find most glaring: the cover of Harris’s little book, which shows marionette strings hanging down. … Please, Sam, don’t feed the bugbears.

I think I've made my point. Or, rather, Dennett did.

Part IV: Pars Construens

Francis Bacon, arguably the father of modern philosophy of science, wrote in his Novum Organum (1620, a polemical response to Aristotle’s famous Organon) that every philosophical project ought to have two parts: the pars destruens, where you clearly state what is wrong with the position you want to overcome, and the pars construens, where you present your own alternative views.

Much of what you’ve read so far is, of course, pars destruens. My pars construens has actually been presented before, in a number of essays on this blog, as well as in a couple of my books, and in the Aeon piece mentioned above. Still, it may be worth summarizing:

On science and/vs philosophy: I consider both science and philosophy to be intellectually serious disciplines, with much to say to each other. Just as I have little patience for scientists who are ignorant and/or dismissive of philosophy, I have little patience for philosophers who are ignorant and/or dismissive of science.

On what counts as knowledge: I distinguish between disciplines / approaches that contribute to our knowledge (in the intellectual sense of the term) and those that contribute to our understanding (both of that knowledge and of life in general). The first group includes science, philosophy, logic and math, and I use the above-mentioned umbrella term scientia for it, from the Latin word meaning “knowledge” in the broad sense. The second group includes literature, the arts and other humanities. The relationship between the two groups is helped / mediated by bridge areas, such as history and social science. I don’t pretend this is the ultimate model of human knowledge / understanding; it is simply my constructive way to push for what I see as a healthy disciplinary pluralism.

On ethics and morality: I think ethics is a branch of philosophy that ought to be informed by factual evidence (“science”) as much as possible, but I do think there is a pretty serious distinction between “is” and “ought” (despite some permeability of that famous boundary). I do think science can and does illuminate the origins (evolution) and the material basis (neurobiology) of ethical thinking, just as it can illuminate the origins and neural basis of mathematical thinking, without this turning mathematics into a branch of evolutionary biology or neurobiology.

On the nature of science: I think science is a particular type of historically situated epistemic-social enterprise, and that attempting to enlarge its domain to encompass “reason” as a whole is historically, sociologically and intellectually misguided, and does a disservice to science itself.

On religion and the New Atheism: I am an atheist, and I am not shy about criticizing religion. But I like to do that in what I perceive as an intellectually honest and rigorous way. I am clearly not above harshly criticizing other people’s positions, but I try to do it constructively. My problem with the New Atheism is that there is little new in it, that it tends to be louder than it is constructive, and that it has a tendency toward science-worship. Oh, and I think I have a right (perhaps even an intellectual duty) to criticize big boys who I think need to be criticized.

On atheism and social issues: I do not believe that atheism entails much beyond an (eminently reasonable!) negative metaphysical position (i.e., the denial of the idea that we have good reasons to believe in supernatural entities). As such, I am skeptical of “Atheism+”-style efforts when they go beyond the obviously germane issues of separation of Church and State and the like. Of course, I do agree with many of the progressive social goals pushed by PZ, Coyne and others. I just think we have already been pursuing those goals for a long time under the banner of (the philosophy of) secular humanism, so it’s another example of people appearing to think they’ve come up with something new when they are in fact simply placing their label on something that others have been doing (quite well) for a long time.

This has been far too long. ’Til the next one, folks.