About Rationally Speaking

Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Tuesday, January 31, 2012

Rationally Speaking podcast: Parapsychology

In this episode, Massimo and Julia take on parapsychology, the study of phenomena such as extrasensory perception, precognition, and remote viewing.

Its practitioners claim that there is more evidence for parapsychology than there is for some other areas of scientific inquiry, such as string theory, for which there is no empirical data at all. Yet string theory is taken seriously as a science, whereas parapsychology is not.

So, what is the scientific status of parapsychology? What does the best academic literature on the subject tell us? Finally, what can we learn from parapsychology about the practice of science in general?

Sunday, January 29, 2012

Some observations on the “free will” wars

by Ian Pollock


It has been interesting to view the exchanges on free will (more neutrally, volition) between Massimo, Jerry Coyne, and the readers of both blogs. I felt like chiming in when I read this in Massimo’s latest sortie:

“...And there are very decent philosophical arguments against determinism (and reductionism, which is also implied by this sort of claim)”
This fits in with my impression that many see incompatibilist determinism a la Jerry Coyne as either “reductionism gone mad,” or, putting a positive spin on it, the logical consequence of reductionism applied to human brains.

I confess myself perplexed by this, because it seems to me that the intuitions driving incompatibilism stem from absent or insufficiently applied reductionism. Let me try to explain.

Let’s start with Jerry’s “practical test” of free will:
If you were put in the same position twice — if the tape of your life could be rewound to the exact moment when you made a decision, with every circumstance leading up to that moment the same and all the molecules in the universe aligned in the same way — you could have chosen differently.
To see the problem with this test, suppose we are interested in a different question: whether Alice loves Bob. I propose as a practical test of this proposition: “What you need to do is take a look at Alice’s brain and see if areas associated with Bob display amorous patterns of neural firing.”

The obvious main problem with my “practical test” of love is that although it’s couched in sciencey language, the entire question of whether Alice loves Bob has been transferred to the word “amorous,” which has still not been reduced to something well-defined and testable. Explanatorily, we are no better off than we were before. Of course, the mistake in my test is trivially easy to see, but the mistake in Jerry’s test of free will is almost as obvious. “Choice” and “free will” and “volition” are damn near synonyms, so although a dictionary may reference “choice” in its definition of “free will,” a scientific test should never do such a thing. Likewise, "could" is a concept at the very heart of the matter! Jerry’s test of free will — “you could have chosen differently” — is not nearly reductionist enough.*

So how would I tackle the issue of free will/volition?

Suppose I am driving along an undivided highway when the stray thought comes into my head that I could steer into the opposing lane, resulting in a horrible, deadly accident.

Of course, I don’t do so, because... well, I like living and I don’t much want to kill others, either. And I just washed my car. But I could have done it....

Wait, was I right to say that I could have done it?

Yes and no. As we have seen, the pivotal word in that sentence is “could,” and “could” has at least two meanings that are relevant to the question of free will.

Meaning #1 maps physical possibility, and in this case returns the clear answer “No, the physical state of the universe was such that you could not have steered into oncoming traffic, as evidenced by the fact that you did not, in fact, do so. QED.” Jerry sees this clearly, and I have absolutely no argument with him.

Meaning #2 of “could” maps counterfactual statements. To say that you “could” have done something in this sense is (roughly) to say that IF circumstances had been otherwise, a different outcome would have resulted. Meaning #2 returns the answer “Yes, you could have steered into oncoming traffic, if you had wanted to.”

Meaning #2 is what people actually mean by “could,” most of the time.

If you’ve been sleeping through this post, pay attention now, because the entire click of compatibilism lies in this realization.

Proposition #1: “No, the state of the universe was such that it was physically impossible for you to have steered into oncoming traffic.”

Proposition #2: “Yes, you could have steered into oncoming traffic (if you had wanted to).”

These two propositions are both true in my example. THAT is the essence of compatibilism.
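The two senses of “could” can be made concrete with a toy sketch (my own illustration, not anything from Ian or Jerry): a deterministic “agent” whose action is a pure function of its state. Replaying the exact same state never yields a different outcome (Meaning #1), yet querying the same mechanism under a counterfactual desire yields a different action (Meaning #2) — and both facts hold at once.

```python
# Toy deterministic agent: its action is a pure function of its state.
# Here "state" is just whether the driver wants to swerve; in reality it
# would stand in for the entire physical configuration of the universe.

def act(wants_to_swerve: bool) -> str:
    """A fully deterministic decision procedure."""
    return "swerve into oncoming traffic" if wants_to_swerve else "stay in lane"

# The actual state of the world: the driver did not want to swerve.
actual_state = False

# Meaning #1 (physical possibility): replay the *same* state many times.
# The outcome never varies, so in this sense the driver "could not"
# have done otherwise.
replays = {act(actual_state) for _ in range(1000)}
print(replays)  # {'stay in lane'}

# Meaning #2 (counterfactual): ask what the same mechanism would have
# done under a different desire. In this sense the driver "could" have
# swerved -- if he had wanted to.
print(act(not actual_state))  # swerve into oncoming traffic

# Proposition #1 and Proposition #2 are both true of this one system.
```

Note that nothing indeterministic happens anywhere in the sketch; the counterfactual query is just a second evaluation of the same deterministic function with a different input, which is exactly the compatibilist reading of “could have done otherwise.”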

Also note the very important fact that “wanting to” corresponds to a different physical state than “not wanting to.”

These propositions look incompatible because people (especially incompatibilists!) have an annoying tendency to forget about the implicit counterfactual “if” clause in proposition #2.**

Now we are in a position to see that incompatibilism is basically a huge equivocation fallacy. The incompatibilists prove Proposition #1, then assume that therefore, Proposition #2 is proven false. But this does not follow.

Sadly, those who wish to defend free will/volition seem tempted to deny Proposition #1, often by arguing against determinism and reductionism in very implausible ways. I think this is crazy, but I am not going to argue with them here, in the interests of maintaining a coherent stream of thought.

The main point I want to make is that incompatibilist determinists like Jerry are in some sense still in thrall to the dualistic ideas of their culture, although they have explicitly rejected them.

Dan Dennett is fond of repeating this great quote from Lee Siegel, who wrote a book on Indian street magic.
"I'm writing a book on magic," I explain, and I'm asked, "Real magic?" By 'real magic' people mean miracles, thaumaturgical acts, and supernatural powers. "No," I answer: "Conjuring tricks, not real magic." Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.
Now consider this passage from Jerry Coyne’s USA Today article:
The ineluctable scientific conclusion is that although we feel that we're characters in the play of our lives, rewriting our parts as we go along, in reality we’re puppets performing scripted parts written by the laws of physics. Most people find that idea intolerable, so powerful is our illusion that we really do make choices. (my emphasis).
But um, Jerry, we do actually make choices, right? Don’t we? I mean, not in some amazingly deep philosophically or morally fraught sense of choice, as in “But did Hitler really have a choice to not be a monster?”, but in a basic, boring, everyday sense, as in “Do you want Froot Loops or muesli?” Surely you talk this way too, when you go home?

I think Jerry would concede that we do make such choices, but insist that they aren’t “real” choices. Well, what is a “real” choice as distinct from an unreal one? Like in the case of magic, it would appear that according to Jerry and other incompatibilists, “real choice” refers to the choices that are not real (i.e., don’t actually happen because they require supernatural powers), while the choice that is real — that can, y’know, actually be done — is not. real. choice.

And yet I would bet a large sum of money that Jerry et al. are perfectly willing to use the language of choice in their daily lives, as soon as they’ve forgotten about the day’s blogo-philosophizing. This is not just because choice is a powerful illusion (which would presumably be their preferred rationalization) — it’s because the concept of “choice” cuts reality at the joints. Choice is one of the most important things that the human brain does; arguably, the brain’s ability to model the world and choose from alternative actions IS its survival value.

A half-reductionist would look at the concept of choice, experience the usual dualistic intuitions about it, then conclude that since dualism is false, choice must be an illusion. Hence the saying (which I just invented): a little bit of reductionism is a dangerous thing.

A good reductionist would look at this incredibly useful concept of “choice” and then try to figure out how it fits into the determined physical universe. Eventually, they would conclude that choice is a physical process like eating or breathing or thinking. As Gary Drescher says in the perfect expression of this insight:
Choice…is a mechanical process compatible with determinism... The objection "The agent didn’t really make a choice, because the outcome was already predetermined" is as much a non sequitur as the objection "The motor didn’t really exert force, because the outcome was already predetermined."
One final note: I have tried to interpret Jerry’s opinions as faithfully as possible, but I hope he will pardon me and let me know if he feels I have put words into his mouth. In truth so much has been written on this topic recently that it gets hard to keep people's opinions straight!


* Unlike others, I have absolutely no problem with the fact that Jerry’s test would only be doable in principle, not in practice. Such thought experiments are extremely useful for all sorts of things.

** Of course, the counterfactual “if” can reference lots of different factors besides the desires of the agent. But this example does a nice job of showing that what prevents you from doing X is not necessarily a pernicious outside influence.

Thursday, January 26, 2012

On free will, response to readers

by Massimo Pigliucci

It has been interesting reading through the (at last count) 104 comments on my recent post concerning Jerry Coyne’s take on free will. The post has been viewed (again, so far) 5,660 times, which puts it in 6th place in the all-time ranking of Rationally Speaking entries (interestingly, number 4 is also about Jerry, concerning his changing views on the relationship between science and supernaturalism). Some recurring themes have emerged from that thread which seem worthy of further discussion.

One of the things I had pointed out is that there seems to be a clear inconsistency in the writings of several people who deny free will, since they also regularly add that it is good that we realize how things really are, because this is going to improve our lives, behaviors etc. Some readers thought there was no contradiction. For instance, here is what pin pin said:

> “Haves” and “oughts” and “shoulds” are exhortations that can change the desires of the people you are exhorting. If you think a certain set of people have bad desires (i.e. desires that would make the world a worse place), you can try to use moral language to mold those desires into better desires. <

As a matter of empirical psychology this is certainly the case, but I think there is an equivocation about the word “change” here. Does this mean that people have a choice of some sort, or simply that we are all Pavlovian automata that can be conditioned to do whatever the environment (including our fellow human beings) sets us up to do? The latter — I wager — is what Coyne, Rosenberg et al. really mean, and yet their language simply doesn’t seem to be able to avoid volitional connotations.

Several readers of course brought up dualism, even accusing me of being a crypto-dualist. Here is Gadfly:

> If there's no Cartesian meaner, there's no Cartesian free willer. <

True enough, but this assumes that the only way to meaningfully talk about volition (again, my and others’ preferred term instead of the metaphysically loaded “free will”) is in dualistic terms, a position that has been rejected pretty much by all compatibilist philosophers, from Dennett down.

The twin “isms” of reductionism and determinism have, of course, played a major role throughout the discussion. As Matthew Putman wrote:

> Certainly science, not just neurobiology, deals with causation all of the time, and that can be carried over to notions of freewill. ... I see no reason why a physical structure such as the brain should be any different than the filled polymer system. ... When we study the brain experimentally, either with animal models, or postmortem, we find very predicable behavior of neurons, and glia cells. <

He then goes on to invoke the specter of Descartes, again. But there are several issues lurking within the above quotes. To begin with, there is a free use of the concept of causality which, as I pointed out in my original post, is far from clear, and of course is most definitely extra-scientific, meaning that science can only help itself to it, not investigate it empirically. Second, it is interesting to see that Matthew cannot conceive of a significant difference between filled polymers and brains, despite the obvious fact that brains, and not filled polymers, are alive, thinking, feeling, etc. Please do not take this as an argument for vitalism; it most definitely isn’t what I mean. But I find that that line of argument is somewhat question-begging: we are trying to figure out how some chunks of matter can behave in such drastically different ways from other chunks of matter, so pointing out the obvious (that they are all chunks of matter) hardly helps move the debate forward. And of course, as someone commented in response to Matthew, it is no surprise that postmortem brains are just as inert as polymers. What interests us is what happens before they become postmortem.

Gadfly also highlighted something that I took for granted, but evidently I shouldn’t have:

> Belief in free will is ALSO arguably not a scientific proposition. It certainly is no more provable right now than is the denial of free will. <

Indeed. But my beef with Coyne is that he is the one making the strong claim that free will denial is a scientific proposition. I am not at all making the symmetrical claim that affirmation of free will is demonstrated by science, only the neutral one that science has precious little (okay, pretty much nothing) to say about free will.

Which brings me to comments questioning my view of science itself. For instance, elik says:

> If I interpret correctly, you have placed counterfactual language into the realm of unscientific metaphysical speculation. I doubt you would consider statements e.g. “were it below 20 degrees yesterday, the surface of this pond would have frozen over” to be unscientific. <

No, I do not think that all counterfactual language is non-scientific (to use the term “unscientific” is pejorative, and I don’t think that only science is in the business of knowledge and understanding). But I think it uncontroversial that some counterfactual reasoning has nothing to do with science (think of purely logical or mathematical questions). To consider elik’s specific example, the reason that particular counterfactual is convincing is that established science already tells us a lot about the state-transition properties of water in relation to temperature. No such knowledge is available in the case of determinism, reductionism, and their implications for free will.

Along similar lines, Matthew Clark opined:

> Of course we can’t actually perform this experiment, but the deterministic claim rests on the rather robust intuition that similar causes produce similar effects. <

The crucial part here is “we can’t actually perform the experiment,” which means that we are doing philosophy, not science. And there are very decent philosophical arguments against determinism (and reductionism, which is also implied by this sort of claim). Moreover, what is at issue here is precisely whether “the same causes” are at work. Physics would have to have established causal closure in order to argue that, and it most definitely hasn’t. (Another way to put this is that everything in the universe behaves in a way that has to be compatible with the known laws of physics. This says nothing about whether those laws as we understand them comprise all there is to know about how the universe works.)

elik, along with several other readers, also asks the recurring question:

> How does quantum indeterminacy help free will, for example? <

Well, one way it may help is through two-stage models, which have been mentioned during this and a previous discussion thread. But I am not staking my agnosticism on these or any other explanation for volition, I am simply pointing out that, contra popular (in some quarters) opinion, there are options out there. (Interestingly, very few readers took me up on another possibility: that of truly emergent properties, which is yet another question that at the moment — and perhaps permanently — cannot be resolved by science. We know that there are emergent properties, but we don’t know if they appear to be so because of our epistemic limitations or because they truly do represent novel behaviors of matter when certain complexity and organizational conditions are met.)

elik (not picking on him/her, I assure you!) also used a thought experiment to argue against free will, bringing up the possibility of The Device, a machine capable of predicting the content of an essay several minutes in advance of the essay being written. Intriguing, but besides the obvious fact that no such experimental demonstration has been carried out by anyone (again, undermining Jerry’s claim that it is science that refutes free will), this conflates predictability with free will. As my CUNY colleague Jesse Prinz pointed out during a recent roundtable on this topic, we can already predict a lot of things about how people will behave under certain circumstances using standard psychology and certainly without having to settle the question of free will.

Why, in the end, do I think there is a problem that Jerry et al. are missing or ignoring? Again, Matthew Clark:

> What we seem not to observe, given our ever increasing ability to control for causal factors in experimental situations, are inexplicable departures from these regularities. <

Of course we do observe departures from regularities: it’s called human behavior! Yes, as I mentioned above, it is predictable to a point, but it is nothing like the movement of planets or the behavior of polymers. And there is, of course, the first-person experience of making decisions after deliberation. That experience constitutes data (albeit not of the controlled sort that would make them amenable to straightforward scientific investigation), and those data need to be explained, not explained away. My problem with Jerry’s position is that it is a form of eliminativism, a position in philosophy of mind (not science!) made popular by Paul and Patricia Churchland. When the Churchlands provocatively say that pain “just is” the firing of neuronal C-fibers, they only begin to explain the subjective experience of pain. Yes, without the C-fibers we wouldn’t feel pain, but there is a huge difference between saying that the C-fibers are necessary for feeling pain (which we could express as: other conditions → C-fibers → pain) and saying that firing C-fibers are the same thing as pain (C-fibers = pain). So too with eliminativism about free will: yes, decision making depends on the laws of physics, and we cannot make decisions that violate those laws. But this is not at all the same as saying that decision making is therefore an illusion brought about by physics, any more than pain is an illusion courtesy of C-fiber firing.

Monday, January 23, 2012

Considering the consequences

by Michael De Dora

I have never thought much of consequentialism, the moral theory which asserts that determining “the good” or “the moral” is a matter of measuring outcomes. Decisions about what is moral, consequentialists say, should depend on the potential or realized costs and benefits of a moral belief or action. There are myriad problems with this line of thought, and while I have already discussed several on this blog, I would like to use this post to examine in more depth what I think are the four strongest objections to consequentialism.

First, consequentialism says nothing about the substance of one’s ethic. While most consequentialists are utilitarians — a position I also consider vague and tenuous — one need only value consequences to qualify as a consequentialist. Yet, since everyone has different moral goals, everyone will have different views about potential outcomes. For reasons discussed below, consequentialism does not help us decide which are better or worse. Rather, one’s moral values come prior to consequential calculation, and help determine what one thinks about the consequences.

Second, consequences are often not at all predictable or in line with the actions that caused them. For example, does the fact that certain Muslims riot over the printing of anti-religious cartoons suggest that printing said cartoons is immoral or wrong in some other way? Not in the slightest. It only suggests people have some twisted ideas regarding free expression. Or, consider an exchange I witnessed at a recent Intelligence Squared debate. At the event, two sides of two speakers each debated the motion “The U.N. should admit Palestine as a full member state.” The side arguing against the motion contended that the audience ought to stand with them because of the potential military situation — probably started by Israel — that could be brought on as a result of the U.N.’s recognition of Palestine. Unfortunately, there was no discussion about whether such military action itself would be reasonable.

This gets at a third problem with consequentialism: it often ignores foundational questions of right and wrong for questions of expediency. Or, it ignores concerns about intent for pragmatic concerns. The question of whether a war might start due to the U.N. admitting Palestine as a full member state is an important and interesting one, but it does not answer the distinct question of whether it is right to admit Palestine to the U.N. as a full member state. Those are two different questions that must be considered separately.

Lastly, consequences must be weighed alongside other factors and possibilities. Let us examine a recent exchange on this blog. It occurred in the comments to the recent post, “Massimo’s Picks, special Hitchens edition.”

In the comment thread, Massimo wrote about his skepticism toward the effectiveness of New Atheists like Christopher Hitchens to better the public acceptance of atheism. I replied that:

“Hitchens might not have been the person best fit to sway the majority to our side, but he was part of a movement (the so-called "new atheists") that I think did do two things to help us get to the point where that’s even feasible. First, their out-front writings and speaking engagements put atheism on the forefront of the Western world's consciousness, and created the space for more widespread conversations on religion (like this one!) that were not happening here beforehand. Second, their public work encouraged many apathetic secularists and fence sitters to be more assertive and engage with the problem of religious dogmatism. I think both of these were productive first steps toward getting a majority to embrace secular thinking. And I think these two points can be accepted whether or not you agree with their arguments, or how they stated their arguments.”

Massimo replied:

“… for an allegedly evidence-driven community I hear a lot of claims about all the good that the New Atheists have done, with precious little backing up in terms of data. Are we seriously arguing that atheism wasn’t widely discussed before the Hitchens-Dawkins-Harris-Dennett books? And on what evidential grounds are you asserting that more fence sitters have been drawn inside the movement rather than repelled by the NA’s rhetoric?”

I replied by asking Massimo: “certainly atheism was being discussed long before the arrival of the New Atheists, but on such a widespread and popular scale? The NA all had best-selling books, major TV and magazine appearances, and auditoriums packed with sometimes thousands of people.” His reply: “Nobody doubts that the NA have had an impact. The question is whether it was an overall positive one.”

Massimo’s legitimate empirical question aside (any takers?), I think his last comment is most relevant to our discussion on consequentialism. Whether or not the New Atheists were effective in broadening public acceptance of secular thinking, Massimo raises the following questions: Were the New Atheists necessary to raise such recognition? Couldn’t atheism have been put on the map in some other form or fashion? Indeed, hadn’t atheists previously in human history tried other effective methods? If not, why?

The point here is that there are certainly other possibilities for fostering the kind of space atheists wanted, or an even better space. None of those possibilities were enacted, so we should be thankful for where we are right now. But that does not make what happened desirable.

Regardless, consequentialists could reply that ignoring what may be terrible consequences is unethical. Would they have a point? Consider this common thought experiment: you are a German hiding a Jewish family during World War II, and Nazi guards are at your door asking if you have seen any Jewish people lately. Do you lie to potentially save their lives? Or do you tell the truth and essentially kill the Jewish family? The point here is not that there is an easy answer between lying and not lying. The point is that the consequences — a dead family — are so compelling that they warrant consideration. And this is just one of numerous examples.

What are the implications of all this? That consequences are important, you might conclude? Not necessarily. Instead, I think we have realized only that we have a range of different values, some of which are or can be in tension among themselves. For example, in the case we just considered, we might be stuck between, on one hand, the value of honesty, and on the other, the value of human lives.

As such, perhaps consequentialism should not be looked at as an ethical system in itself — again, it is bare of ethical content — but as a way to figure out if our different ethical systems — based on duties, obligations, virtues, rights, etc. — are working properly or as intended. In other words, consequentialism might help us to see if we are securing the kind of consequences we want. And if we aren’t, it’s time to adjust our aim and try for better consequences.

Thursday, January 19, 2012

Radical reform for peer review?

by Massimo Pigliucci

A recent piece by Scott Jaschik in “Inside Higher Education” pointed out what a number of my colleagues have been thinking for a while now: the peer review system for scholarly journals doesn’t work very well, needs to be reformed, and really ought to take radical advantage of new technologies. There is, of course, going to be quite a bit of resistance to any change coming from the usual quarters, beginning with older academics who still think of social networking in terms of meeting colleagues after work for a martini (well, okay, nothing wrong with that), administrators who are used to the simple (and simplistic) bean counting operations for tenure and promotion made possible by the current system, and journal publishers who make a ton of money while adding next to nothing in value to people’s publications (after all, they don’t pay for the research, don’t pay the writers, and don’t pay the editors and reviewers — which of course doesn’t stop them from charging an arm and a leg to university libraries).

Of course, since the new technologies are making an overhaul of the system possible, and since there is widespread frustration with the current modus operandi especially among younger faculty, change will happen one way or another — witness the rise of open access and online journals that bypass traditional publishers. It’s only a question of which paths to take, and that’s where the conversation gets interesting.

The most radical suggestion mentioned in the Inside article is the one by Aaron J. Barlow, associate professor of English at the City University of New York, where I work. Barlow is quoted in the article as saying that “peer review — in the sense that people work and a consensus may emerge that a given paper is important or not — doesn’t need to take place prior to publication.” He is, of course, right and as a matter of fact most peer review has always taken place after publication. A lot of bad or simply irrelevant stuff gets published and ends up augmenting someone’s c.v. by a line or two (good for promotion and tenure!), but then dies the common death of much academic scholarship: complete lack of citations by anyone other than the author.

The question that Barlow is raising is whether it wouldn’t be better to skip the preliminary step — the pre-publication filter — and simply leave everything to the community at large. I am sympathetic to that position, particularly because as author, editor, and reviewer I have seen my share of unseemly behavior, gender and racial biases, personal vendettas, and so on that certainly don’t belong anywhere within a scholarly environment. But I think pretty much everyone agrees that we already have far too much pyrite to sift through in order to find the gold nuggets, and I shudder to think what would happen if anyone were suddenly able to claim “scholarship” by simply posting their papers on the web and asking people — anyone, not just the relevant expert community? — to comment, “+1” or “like.”

This is the same problem that has been faced by the publishing and journalism industries. These days anyone can self-publish a book at the click of a button, and anyone can set up an online newspaper with free or cheap software and access to a server. But I doubt these new technological possibilities will spell the demise of editors, publishing houses and newspapers like the New York Times, for the simple reason that these “classic” outlets do exercise a very valuable (if flawed, incomplete, sometimes biased) function of filtering a lot of distracting or poor quality nonsense (as the NYT’s famous tagline says, “all the news that’s fit to print,” or to pixellate, as the case may be).

Another approach commented on in the Inside piece is the one currently pursued by Cheryl Ball, the editor of an online journal on rhetoric and technology called Kairos, and an associate professor of English at Illinois State University. Her journal engages the entire editorial board in a lengthy discussion of every submitted paper, at the end of which an editor is assigned to coach the author on how to revise the manuscript to reflect the consensus of the board. This makes the system much more transparent (the author knows that all editors participated in the discussion, no anonymity on either side) and obviously immensely constructive from the point of view of the author and the community at large. But I seriously doubt this sort of model can be expanded to the whole industry. I edit a small online open access journal in philosophy of science, and even with our low number of yearly submissions it would be impossible to get my editorial board to do what Ball has been able to accomplish with hers. Again, the problem being that there are too many authors out there, and that far too high a proportion of submitted papers is simply not up to even minimum standards, or would require a huge amount of work to get there (not to mention, of course, that — again — editors and reviewers are not paid for this, nor do they get much concrete credit from university administrations for engaging in what they do).

I do not know what the solution is, and I suspect that we will see over the next few years increased experimentation on the part of younger editors to either ameliorate the problems with the current system or to overhaul the thing altogether. Some journals already make the author, not just the reviewers, anonymous, to minimize biases (it is well known, for instance, that women and minorities get fewer papers accepted if the reviewers know their names, and that the effect disappears if authorship is kept anonymous). Others publish all submitted papers that are technically correct — meaning that they are written in an intelligible manner and include all the necessary documentation — while leaving it to readers to judge the intrinsic value of the authors’ findings and opinions. We certainly are on the cusp of a technologically driven revolution in academic publishing, but just as in the already mentioned cases of book publishing and journalism, it remains to be seen exactly what will be left standing and what will have arisen anew once the storm has passed.

Wednesday, January 18, 2012

Experimenting in e-Publishing

As readers of Rationally Speaking may know, there are two collections of essays pertinent to the topics covered by this blog that have been available at the Amazon Kindle store for a while: "Rationally Speaking: Skeptical Essays on Reality as We Think We Know It" includes all the essays I wrote for Rationally Speaking before it was a blog (it started out as a monthly syndicated internet column), while "Thinking About Science: Essays on the Nature of Science: 2003-2008" republishes all my essays in the homonymous Skeptical Inquirer column (still ongoing) during those years.

Since I'm always interested in new frontiers in e-publishing, I have just released both titles at Smashwords, an e-book "aggregator," as they call them these days, i.e. an outlet that allows people to publish and distribute their e-books in a variety of formats. Smashwords will soon send the two titles to the Apple iBook store and other outlets, but in the meantime you can download them directly at the site, in html, java (for browsers), mobi (for Kindle), ePub, PDF, LRF (for Sony Readers), and PDB (for Palm devices).


p.s.: as soon as I have some time (ah!) I intend to re-release "Tales of the Rational: Skeptical Essays About Nature and Science" in e-format. Stay tuned...

Tuesday, January 17, 2012

Rationally Speaking Podcast: Donald Prothero on science deniers' playbook

Guest Donald Prothero joins us to discuss the common tactics and thinking of science deniers and the implications of this assault on science for our future. The denial of scientific realities on issues like global warming, evolution, vaccine safety, and AIDS is growing in our society. Not only is our acceptance of scientific "inconvenient truths" under attack, but even scientists themselves have been threatened.

Donald R. Prothero is Professor of Geology at Occidental College and Lecturer in Geobiology at the California Institute of Technology. He is the author, co-author, editor, or co-editor of 25 books, over 200 scientific papers, and a number of popular books including, most recently, "Catastrophes!: Earthquakes, Tsunamis, Tornadoes, and Other Earth-Shattering Disasters" and "Evolution: What the Fossils Say and Why It Matters". He is on the editorial board of Skeptic magazine and has been featured on several television documentaries, including episodes of Paleoworld (BBC), Prehistoric Monsters Revealed (History Channel), Entelodon and Hyaenodon (National Geographic Channel), and Walking with Prehistoric Beasts (BBC).

Saturday, January 14, 2012

Jerry Coyne on free will

by Massimo Pigliucci

As readers of this and of my University of Chicago colleague Jerry Coyne’s blog know all too well, Jerry and I rarely see eye to eye, and seldom have any compunction in letting the world know about our disagreements. This is yet another example, which actually covers a topic that has been debated recently at Rationally Speaking. The reason I’m taking up free will again is that Jerry recently published an op-ed piece in USA Today confidently assuring his readers that they “don’t really have free will.” I think many of Jerry’s assertions are unfounded, and for interesting reasons.

Jerry starts out by teasing his readers about their alleged choice of reading his editorial (from which, of course, one deduces that he had no choice about writing it either), and continues: “So it is with all of our other choices: not one of them results from a free and conscious decision on our part. There is no freedom of choice, no free will. And those New Year’s resolutions you made? You had no choice about making them, and you’ll have no choice about whether you keep them.” This in philosophy is known as nihilism, a position that is commonly associated with Nietzsche and that has more recently valiantly been defended by Alex Rosenberg (I know, I keep promising to address his latest book, but it’s long, and it’s taking me some time to digest it).

Jerry’s aim is made clear by the following sentence: “The debate about free will, long the purview of philosophers alone, has been given new life by scientists, especially neuroscientists studying how the brain works. And what they’re finding supports the idea that free will is a complete illusion.” I think that Jerry is wrong on two counts here: first, neurobiology simply cannot settle the question of free will, no matter what the data; second, Jerry focuses on a very small subset of the pertinent neurobiological literature, interpreting it incorrectly.

Before we continue, however, let’s hear Jerry’s definition of free will: “I mean it [free will] simply as the way most people think of it: When faced with two or more alternatives, it’s your ability to freely and consciously choose one, either on the spot or after some deliberation.” He continues: “A practical test of free will would be this: If you were put in the same position twice — if the tape of your life could be rewound to the exact moment when you made a decision, with every circumstance leading up to that moment the same and all the molecules in the universe aligned in the same way — you could have chosen differently.”

As Jerry knows, and immediately admits in the paragraph following this quote, such a test is anything but practical. In fact, it cannot be carried out, ever. Which is why I contend that Jerry and others who push the idea that free will (and consciousness, and moral responsibility) is “an illusion” are mistaken when they think they are doing so on the basis of science. Science, if nothing else, is about empirically testable hypotheses, a category to which the above scenario certainly does not belong. Rather, Jerry et al. are making a metaphysical argument, an approach with which I’m fine, to a point, as a philosopher, but that is strange coming from people who clearly despise the very idea of metaphysics and scorn anything that cannot be approached by the empirical methods of science.

Knowing that his “practical test” is impossible to carry out, Jerry resorts to two lines of evidence he thinks clinch the case against free will. The first begins with the truism that we are biological organisms made of physical stuff, so that we have to abide by the laws of physics. And these laws, according to Jerry, do not leave room for free will. Of course this conclusion depends on one’s concept of free will, and there are several on offer (more on this below). It also depends on entirely unargued-for assumptions, including the following: causal closure (i.e., that the currently known laws of physics encompass the totality of causal relationships in the universe); a working concept of causality (one of the thorniest philosophical concepts ever); physical determinism (which appears to be contradicted by physics itself, particularly quantum mechanics); and the non-existence of true emergent properties (i.e., of emergent behavior that actually is qualitatively novel, and doesn’t simply appear to be so because of our epistemic limitations). I have opinions about all four of these points, but I don’t have a knockdown argument concerning any of them. The point is, neither does Jerry.

(Let me make clear parenthetically that I am certainly not in favor of fuzzy / mystical concepts of free will, and that I am as much of a naturalist — in the philosophical sense of the word — as Jerry. I just don’t think any of the above issues has been settled, and since it is Jerry who is making an extraordinary claim — that we are profoundly mistaken in our first person experience about free will, consciousness and morality — it seems fair to point out that he lacks the corresponding extraordinary evidence.)

Jerry’s second line of evidence for the non-existence of free will draws not from physics but from neurobiology. Here he comments on recent elaborations of the famous Libet experiments about human decision making (or what cognitive scientists, and an increasing number of philosophers, refer to as volition, to get away from the theologically loaded term “free will”). Libet and others have convincingly shown that when people are asked to signal when exactly they have become aware of making the decision to push a button in front of a computer screen, it turns out that the decision had been made hundreds of milliseconds to several seconds before, subconsciously. That is, the brain apparently puts things in motion that will result in the pushing of a button well ahead of our becoming conscious of having made the decision to push it.

Why this has anything at all to do with free will is a puzzle. Not even Libet himself took his experiments to show that people don’t make conscious decisions, in part because reporting awareness of an urge (in this case, of pushing a button) hardly qualifies as a conscious decision. The latter is the kind of reflective deliberation that Jerry and I engaged in while composing our respective essays, and it is simply not measured by Libet-type experiments. Indeed, it is not surprising at all that we make all sorts of unconscious decisions before we become aware of them, as any baseball batter, or anyone catching a falling object on the fly, will readily testify. Furthermore, as Alfred Mele has argued in his book on the topic, and contrary to Jerry’s take on the neurobiological literature, there is ample empirical evidence that we do engage in conscious thinking (largely catalyzed by the prefrontal cortex), as well as, and in a continuous feedback loop with, our subconscious processing of information. (Incidentally, I find it strange when some people argue that “we” are not making decisions if our subconscious is operating, since presumably we all agree that our subconscious is just as defining of “us” as conscious thinking is. Accordingly, “my brain made me do it” is hardly a defense that will fly in a court of law except, and for good reasons, in pathological cases such as behaviors resulting from brain damage.)

To recap so far: I think Jerry’s position on free will is not scientific (it is a metaphysical stance), and his two “lines of evidence” are lacking because of unargued-for philosophical assumptions and because of his misreading of the neurobiological literature. But just for the sake of argument let us suspend judgment on all of this and ask Jerry the obvious question: why do we have such a pervasive “illusion” to begin with? Apparently, he knew this was coming, and answered thus in the USA Today article: “where do these illusions of both will and ‘free’ will come from? We’re not sure. I suspect that they’re the products of natural selection, perhaps because our ancestors wouldn’t thrive in small, harmonious groups — the conditions under which we evolved — if they didn’t feel responsible for their actions.”

As far as I can tell there is no empirical evidence whatsoever to support such speculation. To the contrary, we know of plenty of social animal species that seem to thrive very well indeed without requiring the illusion of free will to keep them in line. Certainly social insects don’t need to be fooled that way, and it is hard to imagine even species of social mammals, including most primates, needing to engage in deliberate reasoning before deciding how to behave toward fellow group members.

Jerry cannot resist the temptation of inserting a dig at philosophers toward the end of his essay: “philosophers have concocted ingenious rationalizations for why we nevertheless have free will of a sort. It’s all based on redefining ‘free will’ to mean something else.” There are two problems with this characterization of philosophers’ modus operandi: to begin with, it’s a case of damned if you do, damned if you don’t. If philosophers didn’t inform their reasoning with the latest science they would be criticized (justly) as being stuck in medieval scholasticism. But when they do take science on board they get accused of “rationalizing.”

In the above comment Jerry also ignores that philosophers have been debating various concepts (not definitions, because they are not ex-cathedra pronouncements) of free will for a long time. Competing approaches to free will have been put forth, among others, by Plato, Aristotle, Descartes, Hume, and more recently Daniel Dennett and Harry Frankfurt, to name but a few. It is a profound mischaracterization of the history of philosophy to present various takes on free will as being simply reactive to the latest scientific discoveries. And of course some philosophical accounts of free will are more (and some less) in synch with scientific findings (which, it is worth bearing in mind, are themselves always tentative and sometimes spectacularly overturned). Nothing general about the nature of philosophy follows from that.

By the end of his USA Today essay Jerry finally gets to the crux of the matter: the implications of the alleged lack of free will for religion and morality. On the first count, Jerry claims that the death of free will spells the death of religion, although ironically he then mentions the Calvinist view of predestination. In fact, plenty of religious beliefs are compatible with lack of free will, so it seems like religion will survive even this assault (as befits an infinitely malleable tradition of made up stories).

Jerry’s second conclusion is that moral responsibility is therefore also an illusion, and that we should finally face up to this truth. Besides the obvious point that, according to his own view nobody has any choice about whether to face up to anything, what would this mean in practice? Jerry puts it this way: “we should continue to mete out punishments because those are environmental factors that can influence the brains of not only the criminal himself, but of other people as well. Seeing someone put in jail, or being put in jail yourself, can change you in a way that makes it less likely you’ll behave badly in the future.” And he goes on to say: “[we need to contemplate] the notion that things like consciousness, free choice, and even the idea of ‘me’ are but convincing illusions fashioned by natural selection ...  With that under our belts, we can go about building a kinder world.”

At this point I’m truly puzzled. How is it possible to argue that we “should” do X in order to achieve Y if, as Jerry’s intellectual kin, Alex Rosenberg, would put it, “the physical facts fix all the facts”? It is hard for me to make sense of a position that denies that we have any choice in any matter, while at the same time advocating that we should or should not do certain things rather than others. How can we have a choice to contemplate (or not) what Jerry is proposing? How can we then decide to build a kinder world? And since morality itself is an illusion, why should we try to build a kinder world anyway? I’m sure I’m missing something, but I would very much like to know what that something is.

In the end, skepticism about free will seems to me to be akin to radical skepticism about reality in general (the idea that all of reality is an illusion, or a computer simulation, or something along those lines): it denies what we all think is self-evident, it cannot be defeated logically (though it is not based on empirical evidence), and it is completely irrelevant to our lives. If it teaches us anything, it is to humble us into contemplating the possibility that we may know (in the case of radical skepticism) or be able to act (in the case of free will skepticism) much less than we often smugly think — and we can all use an occasional lesson in humility. That said, we should then proceed by ignoring the radical skeptic in order to get back to the business of navigating reality, making willful decisions about our lives (including New Year’s resolutions, which actually succeed surprisingly often), and assigning moral responsibility to our and other people’s actions.

Friday, January 13, 2012

Michael's Picks

by Michael De Dora

* State lawmakers set a record in 2011 for the most anti-reproductive rights provisions enacted in a single year, according to a new report from the Guttmacher Institute. Laura Bassett has more on this story.

* Should Christians be exempted from basic educational and professional standards because of their deeply held religious beliefs? That’s the question taken up in a recent article by David Moshman, professor of educational psychology at the University of Nebraska-Lincoln.

* James Croft says that secularists could be more effective in defending church-state separation if they instead framed such issues as a matter of church-state protection.

* Roman Catholic bishops in Illinois charge that being forced to follow the law while working with taxpayer money limits religious freedom. Susan Jacoby has more on this argument, which is unfortunately becoming popular.

* Many people think that moral beliefs and values cannot or should not be promoted or discredited by the government. Yet Anthony Sheldon, writing in The Guardian, argues that this notion is mistaken.

* Wendy Kaminer discusses a troubling animal rights law that could serve to protect commercial interests and make terrorists out of people who want to voice their concerns.

* Neuroeconomist Paul Zak has gotten a lot of attention for his just-released TED talk, titled “Trust, morality, and oxytocin.” While you can watch the 16-minute lecture here, CNN has now published a short article by Zak on that topic.

* Women in Egypt are fighting back against a vigilante group of ultra-conservative Muslim men that has been harassing retail shops and their customers for “indecency,” according to the news outlet Bikya Masar.

Wednesday, January 11, 2012

On parapsychology

by Massimo Pigliucci

A few weeks ago I published at Rationally Speaking a guest post by my former (undergraduate) student Maaneli Derakhshani, who made a case for the scientific status of parapsychology. Some of my readers criticized the choice as an instance of allowing pseudoscience to be represented in what I hope is a reputable science and philosophy blog. That sentiment is, I think, misguided. If we really pride ourselves on our critical thinking we ought to be able to take other people’s best arguments on board and show if and why they are mistaken. And Maaneli did make a very good argument in defense of parapsychology.

I also think Maaneli, in response to some of the many comments posted, is correct in saying that it took courage for him to “come out” in this manner. As I understand it, he is hoping for a scientific career in theoretical physics, and he rightly argues that writing favorably on behalf of parapsychology is not going to help his chances. I know that I would not hire in my department someone with leanings toward what I consider to be a pseudoscience.

But is parapsychology a pseudoscience, as Maaneli chides me for having opined both in podcasts and in my Nonsense on Stilts book? He thinks not, based on his analysis of a small but persistent literature concerning experiments performed under so-called "psi-conducive" conditions, like the Maimonides dream telepathy and the Ganzfeld (“total field”) experiments.

I am sure I will disappoint both Maaneli and some of my readers who were actually supportive of his analysis, but I will not engage his claims one by one. This is, I assure you, not a cop out, but a reasonable decision based on three considerations. First, other critics of the paranormal have done a much better and more in-depth job at criticizing the experiments produced by Daryl Bem and others (for the most recent example, see this devastating critique of Bem's porn-facilitated clairvoyance experiments, published in Skeptical Inquirer. Additional critical sources are listed at the end of this post). Second, neither Maaneli nor I have access to the raw data or have been in a position to double check in person (or at least try to duplicate) the experimental protocols under discussion. Without that, we are both reduced to trusting the analyses (or in my case, the criticisms) done by others. Third, the question asked by Maaneli is whether and why current parapsychology qualifies as science or should be relegated to pseudoscience, and this issue is broader and more interesting than endless skirmishes about p-values and meta-analyses. I will therefore focus this post solely on Maaneli’s fundamental question.

One argument made by supporters of parapsychology is that there is by now sufficient evidence to accord the discipline scientific status (and, presumably, academic status and funding to its practitioners) because the quality of the marshaled evidence is at least as good as run-of-the-mill evidence published in mainstream psychology journals. Besides the fact that there actually is much reasonable doubt that the latter assertion corresponds to the truth, the argument fails for two reasons. First, one could respond that at best this shows that a lot of psychology is sloppy science. As a formerly practicing scientist (in evolutionary biology) I can assure you that quite a bit of below par science is done (and, unfortunately, published) all the time. An embarrassing number of papers in mainstream science are based on bad experimental procedures, present woefully inadequate and biased samples, and report flawed statistical analyses. The reason this isn't a bigger deal (although it probably got tenure for a number of people who should have dropped out of science in graduate school) is that scientific peer review is an endless process that eventually sifts the few nuggets of gold and simply ignores the sea of useless crap.

Second, as every skeptic knows very well, Hume's dictum reigns (though often in Carl Sagan's paraphrase): extraordinary claims require extraordinary evidence. This is true of normal science as well. Nobody bothers to replicate or even critically re-analyze boring results that simply confirm what we already know, only under slightly different circumstances. But try to claim cold fusion, or faster-than-light particles, and suddenly the standards of proof become much, much higher. And so they should, as unfair as the individual scientist may feel about this epistemic heuristic.
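Hume's dictum has a natural Bayesian reading (my framing, not the post's): evidence multiplies the prior odds of a hypothesis by a Bayes factor, so a claim that starts from very low prior odds needs a correspondingly enormous factor to end up credible. A minimal sketch, with purely illustrative numbers:

```python
# "Extraordinary claims require extraordinary evidence" as odds arithmetic.
# The priors and Bayes factors below are illustrative, not from any study.

def posterior_probability(prior, bayes_factor):
    """Update a prior probability by a Bayes factor; return the posterior."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# A mundane claim (prior 0.5) moved by modest evidence (Bayes factor 10):
print(posterior_probability(0.5, 10))    # ~0.91 -- easily accepted

# An extraordinary claim (prior 1e-6) moved by the very same evidence:
print(posterior_probability(1e-6, 10))   # ~1e-5 -- still overwhelmingly unlikely
```

On this reading the heuristic is not unfairness toward parapsychologists but ordinary probability: the same strength of evidence that settles a mundane dispute barely moves an extraordinary claim.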

Now, parapsychological claims of telepathy, clairvoyance and the like are just about as extraordinary as they come. One can reasonably argue that if confirmed, these claims would overturn physics and biology as we understand them, possibly violating several fundamental laws (vague nods to “quantum entanglement” effects notwithstanding). That being the case, Bem and colleagues simply have to do a hell of a lot better than they have done so far, and my bet is that they simply won’t be able to.

By the way, this isn't a question of effect size, for which several readers have erroneously hammered Maaneli. There are plenty of examples in science, from ecology to quantum physics, where the effect size is very small indeed. But the results are statistically clear, methodologically unimpeachable, and repeatable ad nauseam by a large community of scientists. None of the above is the case for anything that parapsychology has produced so far.
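The distinction drawn above, between a small effect and an unclear one, is easy to see in a toy simulation (mine, not from the post; the numbers are arbitrary): a tiny true effect, measured cleanly and replicated massively, yields an estimate whose confidence interval comfortably excludes zero.

```python
# A tiny effect is not the same as an unclear result: with enough clean
# replications the standard error shrinks far below the effect itself.
import math
import random

random.seed(42)

true_effect = 0.02           # a "tiny" effect, in standard-deviation units
n = 1_000_000                # massive replication

sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
mean = sum(sample) / n
se = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1)) / math.sqrt(n)

# With n this large the standard error (~0.001) is far smaller than the
# effect, so the 95% confidence interval excludes zero despite its size.
print(f"estimate = {mean:.4f}, 95% CI = [{mean - 1.96*se:.4f}, {mean + 1.96*se:.4f}]")
```

The point is that small effect size is no excuse by itself; what matters is whether the estimate is stable and reproducible across independent labs, which is the standard parapsychology has not met.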

What about the lack of a sound theory? Again, Maaneli is correct in citing examples from the history of science when we didn’t have a theoretical explanation for certain observations, and yet the scientific community has eventually accepted the results and incorporated them into mainstream science. But a closer look at some of these examples is very instructive. Take Wegener’s idea of continental drift, which turned out to be correct, and which was based on evidence that was already strong from the start (much better evidence, I submit, than anything produced in parapsychology). Still, the idea was not accepted immediately, and it took among other things the development of a sound theory to explain the phenomenon before geologists came on board en masse. And so it should be, since science isn’t just a collection of facts, odd or not; it is an attempt to understand those facts and how they fit into everything else we know about how the universe works.

Parapsychology has had more than a century to produce compelling facts and reasonable theories. It has fallen very short on the first count, and embarrassingly so on the second one. Nobody seems to have any idea of what “psi” is and why it works in the way in which it allegedly works. And nobody seems to have any clue at all concerning how “psi” might fit with everything else that psychology, physiology, neurobiology, evolutionary biology, chemistry and physics tell us about human cognitive abilities (again, vague references to quantum mechanics won’t do, despite how easily Deepak Chopra can make them). This is a really tall mountain for parapsychologists to climb, and they seem stuck on the first or second step.

Which brings me to why parapsychology is best thought of as pseudoscience. Karl Popper famously thought that the so-called demarcation problem, finding something that distinguishes science from pseudoscience, had been solved by his own criterion of falsifiability. Not so in modern philosophy of science. Maarten Boudry and I, as I have mentioned before, are finishing up the editing of a new book on the demarcation problem, to be published by the University of Chicago Press. One of the things we learned from the many contributors to the volume is that nobody any longer thinks of science (or pseudoscience) as a simple concept that is amenable to a sharp definition based on a small set of necessary and jointly sufficient conditions. But we have also learned that many philosophers think that a hallmark of pseudoscience is the persistence of its practitioners in making grand and revolutionary claims in spite of the equally persistent dearth of compelling evidence (and theory) to back them up. This scenario, I think, fits the situation of parapsychology very well.

However, Maarten and I have also seen many of our colleagues argue — again, correctly, we think — that a pseudoscientific status is historically contingent. That is, something may be confined to pseudoscience for a long time and then emerge triumphantly, or vice versa, may be considered good science for a while, only to be eventually relegated to pseudoscience. Phrenology is an example of the latter, evolutionary biology one of the former (I know, surprising, but see the essay by Michael Ruse that I discussed at RS in the context of a recent book on biology and ideology).

So the verdict is pretty much never final, and good skeptics really ought to keep an open mind. But the more time that passes without significant and widely acknowledged progress, the more one’s skepticism is reasonable and warranted. So here is my suggestion to Maaneli: either shelve this whole thing and concentrate on your budding career as a theoretical physicist, or get your hands dirty and work to produce the kind of evidence (and theory) that really has the potential to shock the scientific world into paying attention. If you don’t mind the advice, however, my bet is that you’ll be far better off taking the first route.


Additional critical sources:

Blackmore, S. J. (1980) The extent of selective reporting of ESP ganzfeld studies. European Journal of Parapsychology 3:213–220.

Blackmore, S. J. (1987) A Report of a Visit to Carl Sargent’s Laboratory. Journal of the Society for Psychical Research 54:186-198.

Frazier, K. (ed.) (1986) Science Confronts the Paranormal. Prometheus Books.

Hansen, G. P., Utts, J., and Markwick, B. (1992) Critique of the PEAR remote-viewing experiments. Journal of Parapsychology 56:97-113.

Hansel, C. E. M. (1989) The Search for Psychic Power: ESP and Parapsychology Revisited. Prometheus Books.

Marks, D. (2000) The Psychology of the Psychic. Prometheus Books.

Milton, J. and Wiseman, R. (1999) Does psi exist? Lack of replication of an anomalous process of information transfer. Psychological Bulletin 125:387-391.

Tuesday, January 10, 2012

New Rationally Speaking podcast: Joseph Heath on Economics Without Illusions

Guest Joseph Heath, author of “Economics Without Illusions: Debunking the Myths of Modern Capitalism,” joins us as we turn our skeptical eyes toward the treacherous dual terrain of economics and politics. We discuss the ways in which, with his book, he attempts to raise our economic literacy and empower us with new ideas. In it, he draws on everyday examples to skewer the six favorite economic fallacies of the right, followed by impaling the six favorite fallacies of the left. Heath leaves no sacred cows untipped as he breaks down complex arguments and shows how the world really works.

Joseph Heath is the Director of the Centre for Ethics and Professor of Philosophy and Public Policy at the University of Toronto. In addition to his academic publications, he is the author of other popular books, among them "The Rebel Sell: Why the Culture Can't Be Jammed" and "Efficient Society: Why Canada is as Close to Utopia as It Gets".

Monday, January 09, 2012

Rationally Speaking encore: How to change a mind

[Originally published on December 2, 2005]

by Massimo Pigliucci

Harvard psychologist Howard Gardner's Changing Minds deals with that fundamental aspect of the human condition: our willingness (or, more often, unwillingness) to change our minds about an issue. As somebody who is a professional educator and spends an inordinate amount of time keeping a blog, I'm keenly interested in Gardner's book. While not earth shattering, Changing Minds provides a series of interesting insights, presented in very readable prose.

Gardner's idea is to examine mind changing at different "scales," from the level of political leaders having to convince a whole nation, to university presidents intent on selling radical reforms to colleagues and students, to the more intimate settings of conversations with friends and loved ones, and finally to changing our own mind. As Gardner points out, these situations require different approaches and display distinct dynamics, chiefly because of the nature of the interaction between the parties.

The basic premise of Gardner's book, however, applies to all levels of analysis: there are specific, recognizable elements that play a role in any successful change of mind. Irritatingly (though Gardner seems to think this is actually a plus), all keywords used in this context begin with "r," which makes it very difficult to r-emember them. Anyhow, here they are:

* Reason: if one wishes to change someone else's mind one has to provide good reasons, obviously. But if that were enough, we wouldn't have creationism around, so read on.

* Research: the best arguments are those complemented by evidence, so presenting data to back one's position up is crucial. (Again, insufficient against pseudoscience and in politics, but still...)

* Resonance: the new view has to resonate psychologically at some level with the intended recipient. This is where things become tricky, because we are moving outside of pure rationalism or empiricism, and into the psychology of human motivations.

* Redescription: a new viewpoint is more likely to be accepted if it is presented in a variety of forms, possibly by a variety of sources. That is why, for example, public education needs to be done on many fronts and by a number of individuals -- the more people trying to communicate the message in different ways, the more likely that it will sink in.

* Rewards: this is the classical behaviorist call for positive reinforcement. A new point of view is more likely to be accepted if one sees some advantages (not necessarily material) to adopting it.

* Real world events: these are external events, usually of large emotional impact, that can reinforce the new point of view. Typically they aren't under the control of either the recipient or the educator (e.g., the 9/11 attacks on the World Trade Center), but they can be powerful in pushing the recipient to reach a "tipping point" and change her mind.

* Resistances: an effective change of mind happens when most or all of the above are in place, and when there are few sources of resistance to the change, where this resistance can be rooted in material rewards, deep psychological grounds, or just simple inertia.

Of course, Gardner knows -- as Machiavelli masterfully articulated before him -- that all of this is value-neutral. That is, one can use the 7-R framework for good or for ill (indeed, Gardner's book includes clear examples of both), which opens up the Pandora's box of the ethical use of education. But that's another story for another time.

Saturday, January 07, 2012

Rationally Speaking encore: Does empathy negate physicalism?

[Originally published on November 1, 2005]

by Massimo Pigliucci

Tough question. It has been posed (and answered in the affirmative) by Michael Philips in a recent article in Philosophy Now. Let's see what this is about. Empathy, of course, is the ability that all normal human beings have (there are some pathological exceptions, which will actually become very relevant in a minute) to put themselves, in some sense, in someone else's metaphorical shoes. Empathy, in other words, is the mental phenomenon that allows us to at least approximately feel the pain, or pleasure, being experienced by someone else, which in turn allows an understanding of other people's emotional situations.

Physicalism, on the other hand, is a philosophical term that indicates a family of theories about the mind-body relationship (for a rather technical summary see here). In particular, physicalism says that the mind in fact is a result of brain activity, excluding the possibility of any form of mind-body dualism. There are several versions of physicalism, but two major ones are the so-called "type identity" and "token identity" theories. Bear with me for a second, this is going to be interesting once we pass the technicalities.

A physicalist identity theory basically says that there is some correspondence between physical and mental states, i.e. that in order to have a given mental state (say, feeling pain) one has to be in a certain brain state, because the brain is the causal factor behind so-called mental events. If one subscribes to a type identity theory, then one is saying that any particular kind of mental state corresponds to (is identical with) a specific kind of brain state: only that brain state will cause that particular sensation or feeling. On the other hand, the more flexible token identity theory says that there is indeed a correspondence between brain states and feelings, but that this may be a many-to-one relationship, i.e. there may be several different configurations of a brain (or equivalent structure) that can generate a certain sensation in the subject. Keep this distinction in mind, it will be useful in a bit.

Philips, like other philosophers of mind, argues that physicalism is incompatible with the existence of empathy, because empathy implies the existence of qualia, and qualia cannot be accounted for by physicalism. Yup, we need to take care of another little bit of technical jargon. Qualia are so-called "secondary" properties of objects. Primary properties are independent of observers, for example shape: a box is a box regardless of who observes it, human or machine. Secondary qualities, however, are in some sense "in" the observer, for example in the case of colors. Yes, colors are elicited by the physical characteristics of light waves, but the experience of seeing a color (qualia are experiences) demands the subjective presence of a conscious being actually having the experience. (One could already object that, in fact, plenty of living beings -- for example insects -- experience colors in a physiological sense, and yet are not conscious in anything like the sense of the term when applied to human beings, but let that pass for now.)

Next, to the crux of the matter. Philips argues that empathy is made possible by qualia, because empathy is about feeling that we can experience something very much like what somebody else says she is experiencing (e.g., pain in response to a hammer hitting a finger). But how do we know what it's like to experience, say, pain? Not because of a physicalist description of pain as a function of brain processes, but because we have the capability to experience qualia ourselves. In other words, the argument goes, physicalism may be able to tell us what sort of nerves and nerve impulses are involved in the feeling of pain, but that has nothing to do with the subjective experience of pain. So physicalism cannot explain qualia; but since qualia are real (as demonstrated by the existence of empathy), physicalism cannot account for a real (and important) mental phenomenon. Ergo, physicalism must be wrong, or at least grossly incomplete.

Philips' article goes into some detail about the possible responses open to a physicalist, and of course offers a series of counter-rebuttals to each. The problem is that one of the fundamental (and unspoken) premises of Philips' whole critique is highly questionable. It turns out that his arguments are pretty good against what I referred to above as "type identity" theory, i.e. the strictest variety of physicalism, which claims that there is a one-to-one correspondence between brain and mental states. If that were the case, one could argue that a complete knowledge of brain circuitry would have to be sufficient to account for all mental phenomena, including qualia. But it turns out that subjective experiences are in fact difficult to pin down to a specific set of nerves and impulses. This isn't really surprising, because we already know that strict type identity theories must be wrong. It seems clear that different individuals, with different brains, can have apparently very similar qualitative experiences (such as perceiving colors).

But things get a lot more complicated when one moves to the more flexible token identity theory. In this case, the claim is simply that there are classes of brain structures and functions (e.g., nerves and nerve impulses) that can generate mental phenomena. But the same mental phenomena could be generated by different structures and functions, even by entirely different materials (which is what makes artificial intelligence possible, at least in theory), as long as certain properties are maintained by the system. Think of it as the idea that different types of hardware can run the same sort of software with relevantly similar (though not necessarily identical) results. If strict type identity were correct, there would be only one way to produce a word processor that looks and works like Microsoft Word; with token identity, one can run different pieces of software (e.g., Word, OpenOffice, etc.) on different machines (PCs, Macs) and different operating systems (Windows, Linux), and get pretty much the same "qualia" (i.e., the same user interface) from all of them. If that's the case, token identity is compatible with the existence of empathy.

Finally, remember my initial reference to the fact that normal human beings can feel empathy? It turns out that some brain pathologies, such as the destruction of the amygdala, make it impossible for a human to feel empathy, because the patient has lost the ability to have emotions altogether. This and similar nightmarish conditions are described in a wonderful book on the human brain, Phantoms in the Brain, by neuroscientist V.S. Ramachandran. What these findings imply, however, is a pretty powerful blow to non-physicalist theories of emotions and feelings: if qualia aren't the result of the activity of certain brain regions (such as the amygdalae), why on earth would people with damage to those regions be unable to experience qualia? This objection is sometimes referred to in philosophy of mind as the "no ectoplasm" clause: we may not know exactly how the brain produces consciousness, but no brain = no consciousness, precisely as a physicalist theory would predict.

Something to ponder, the next time you look at the colors of a beautiful sunset...

Thursday, January 05, 2012

Rationally Speaking encore: Why bother?

[Originally published on October 31, 2005]

by Massimo Pigliucci

A recent comment on this blog asked why one should bother having a discourse with people who disagree with you ideologically. The question often comes up out of frustration at interacting with people who apparently aren't interested in a dialog, but simply in shouting their opinions past others. Of course, to some extent we are all guilty of this, but the extent does matter, and so do the intentions with which one enters a public forum (be that a blog, a radio show, or simply a conversation at dinner).

I have actually written about this before (for example, here), arguing that there are different time horizons and goals that need to be considered. In the short term, it is simply not true that our opinions do not influence others, and sometimes they even change minds. We rarely get to know this, because the process doesn't have instantaneous feedback, and the most vocal people in any particular forum tend to be those who are most set in their ideas (which isn't to say that they are necessarily wrong, of course!). But since I began doing my part, about ten years ago, as what in Europe would be considered a "public intellectual" (i.e., not somebody who stays way up there in the ivory tower, engaging in continuous mental masturbation), I have gotten plenty of letters and emails from people thanking me for having pointed out things they hadn't thought of before, adding in the process to their daily dose of food for thought. That was precisely the point.

In the long run, things do change too, and often dramatically. It may be disheartening to see the rise of the intolerant religious right in the United States during the past few years, but take the really long historical view and you'll immediately appreciate the enormous advances made during the last century (think of women's right to vote, civil rights legislation, etc.) and beyond (not long ago I would have been burned as a heretic for what I'm writing on this blog).

Where, then, does the frustration come from? I suggested in the past that it is the result of what I called the "rationalistic fallacy." This isn't a formal logical fallacy, but rather an assumption -- particularly common among, but not limited to, academics -- that all one needs to do to convince other people is to present a cogent argument backed up by evidence. Alas, it isn't that simple, partly because the human brain seems to be hard-wired to jump to conclusions based on little evidence, and partly because of the emotional component attached to much of what is being discussed here (religion, rights, philosophical positions, etc.).

Nonetheless, there is equally good evidence from the cognitive sciences that people do change their minds (I highly recommend a little booklet entitled "Teaching with the Brain in Mind"). How this happens is interesting, and worth learning. For example, people tend to be more responsive to repeated exposure to new ideas, preferably in a variety of settings (lectures, readings) and sources (i.e., various authors, colleagues, friends). Few of us change our minds on the spot or in response to a single well-crafted argument presented by one person. We need to see things from different angles, hear or read them repeated with different flavors, and to give time to our left brain (what neuroscientists call the "interpreter" of our worldview) to digest whatever dissonant information is being presented.

There is one more reason to engage in open discussion: one exposes one's own arguments to the sometimes penetrating "peer review" of other people, who may start with different assumptions, reason in a different fashion, and hold different sets of priorities. That can work wonders in sharpening our own thinking and making us grow as individuals.

The bottom line is that discussions aren't a waste of time, as frustrating as they may sometimes be. They are an essential component of an open, democratic society, and they beat the crap out of watching mindless TV all night...