About Rationally Speaking


Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please notice that the contents of this blog can be reprinted under the standard Creative Commons license.

Showing posts with label philosophy. Show all posts

Friday, March 29, 2013

The meanings of the meaning of life


by Massimo Pigliucci

I just finished reading the excellent collection Philosophy and the Hitchhiker's Guide to the Galaxy, edited by Nicholas Joll, a must for anyone who has ever been captivated by Douglas Adams’ comic genius and its scientific and philosophical undertones. Here I am going to briefly comment on a single table that appears in the last essay of the volume, “The funniest of all improbable worlds — Hitchhiker’s as philosophical satire,” by Alexander Pawlak and Joll himself. It’s a table about several potential meanings of the phrase “the meaning of life” and how they are related to each other.

Of course, a major feature of the plot of the Guide is precisely our heroes’ quest for the answer to the ultimate question of life, the universe and everything. The answer turns out to be “42,” at least according to Deep Thought, a supercomputer constructed by an alien race for the sole purpose of answering said question. When the somewhat disappointed builders of Deep Thought asked what sense they should make of such a superficially meaningless and preposterously simple answer, they were told that the real quest had just begun. You see, the big prize is not, as so many had assumed, the answer to the ultimate question of life, the universe, and everything. The real deal is to find the question that makes sense of the answer, 42. But even Deep Thought did not have the computational power to uncover the fundamental question, so a much bigger computer, running for much longer, had to be built to accomplish the new task. That computer eventually became known to human beings as “Earth,” and it was destroyed just five minutes before it achieved its objective, for the mundane purpose of building a hyperspace bypass to ease local traffic (the plans, and the forms needed to complain about them, had been locked in a basement on Alpha Centauri for fifty years). If you want to know the rest of the story, you had better get going on The Hitchhiker’s Guide, The Restaurant at the End of the Universe, Life, the Universe and Everything, So Long, and Thanks for All the Fish, and Mostly Harmless, which together comprise the standard Adams canon in this respect.

But back to Pawlak and Joll’s essay in the volume exploring the philosophical underpinnings of the Guide. The authors set out to explore the possible meanings of the above mentioned ultimate question of life, the universe and everything (henceforth, UQLU&E), together with some of the answers that philosophers and scientists have come up with so far.

To begin with, according to Pawlak and Joll, UQLU&E could mean that one is interested in life’s character. This could be that of a comedy, a tragedy, or an unintelligible farce; or it could be about suffering, or struggle; or perhaps the character of life is just whatever you make of it. Needless to say, my strong intuition is that the character of existence is whatever we make of it, because there is no independent intelligent agency that might have set things in motion for any particular reason (I do occasionally entertain the so-called simulation hypothesis, which would entail a different answer, but I guess I don’t take such an alternative seriously enough for sustained consideration — at least not without a couple of martinis).

Naturally, if one is a religious believer of some sort, one will likely think that the character of life is one of the others mentioned by Pawlak and Joll, depending on one's taste in matters of gods and the supernatural (if you are a Christian, you may go for suffering; if a Stoic, perhaps for struggle; if an Ancient Greek comedian, for comedy; and if a tragedian, for tragedy). The point is that the sort of answer you pick for the character of existence, following Pawlak and Joll’s reasoning, is entailed by a particular choice for the second meaning of the question: life’s cause.

Choices on offer here include god(s), some combination of scientific explanation (Big Bang followed by Darwinian evolution — Pawlak and Joll here seem for some reason to think that these two are independent alternatives, but they are clearly not), or “something else.” It is hard to imagine what a third alternative might look like (again, except for the Tron-like scenario offered up by the simulation hypothesis!), so we really have just two competitors — though they do come in a number of possible flavors: supernaturalistic or scientific explanation. Again, it will come as no surprise to readers of this blog that I think this is another slam dunk, in favor of the latter possibility. This is for a number of reasons, but the fundamental ones include: a) a “supernatural explanation” is really an oxymoron, as invoking supernatural forces explains precisely nothing; b) there is no evidential or conceptual reason on earth why anyone should take the existence of gods seriously; and c) we do have a number of very good, if always incomplete and revisable, scientific accounts of the causes of the universe.

Which brings us to Pawlak and Joll’s third meaning of the UQLU&E: the purpose of life. The link the authors suggest is that the cause (second meaning) is explained by the purpose, but I actually think they’ve gotten things exactly backwards here: once we agree on a most likely cause for life, the universe and everything, then we can reason about the possible options concerning its purpose. These options include some sort of assignment by a higher being, a type of purpose that can be found or discovered by us, or a purpose that can be made up or constructed by us. Notice that these three possibilities really are containers of sorts, each representing a family of possible answers. For instance, even if we agree that the cause of the universe is the creative act of a god, and that that implies a particular purpose for us that was present in the mind of that god when it created the world, it doesn’t follow that the purpose in question is of any particular type. Depending on the (unknown, and likely unknowable) character of said god, our purpose could vary from mere entertainment to the fulfillment of a cosmically narcissistic desire for attention. (Similarly, if the simulation hypothesis is correct, we may turn out to exist for the programmers’ entertainment, or perhaps to satisfy their scientific-philosophical curiosity about what happens in different “possible worlds.”)

My preferred answer here is, not surprisingly, that we make up the purpose of life as we go, and that we have a (not unlimited) number of options. More specifically, I think that a good way to think about the purpose of one’s life is within the virtue ethics framework first established by Aristotle and other Ancient Greek philosophers: that purpose is to live a eudaimonic, i.e., a morally worthy and flourishing, existence. Other options provided by other philosophies include, of course, the existentialist idea of living an authentic life, the Stoic distinction between what is and is not within one’s control, the Epicurean quest for ataraxia (similar to the Buddhist one for Enlightenment), and so on. The issue of the purpose of existence is an excellent reason to study philosophy, just as the issue of the cause of our existence is a splendid reason to study science.

Finally, Pawlak and Joll bring us to the fourth interpretation of the UQLU&E: what is life’s import, i.e., what should one do with one’s life? They vaguely say that this latter sense of the UQLU&E is related to the other three, because those three have “some relevance” to the fourth one. But I think the relationship is actually more specific than that: the issue of the import of life follows directly from the issue of the meaning of life.

Pawlak and Joll here provide a panoply of choices to their readers. Perhaps the import of life is that we should not bother to do anything (Camus’ famous contention that suicide is the most important question in philosophy comes to mind), or we should just live and let live (not the most awful advice, really), or strive to minimize suffering, or to create beautiful things; or perhaps we should think of life itself as a work of art, to labor on throughout our existence; or maybe we should concentrate on increasing our knowledge, or striving to achieve “oneness” with all things (whatever that means), or finally to “do what thou wilt, and that is the whole of the law.”

Once again, this is the sort of quest for which philosophy will equip you well. Indeed, you may have recognized a number of philosophical precepts in the above list: some sense of becoming one with all things is a major goal of Buddhism and other mystical approaches; to minimize suffering is one of the laudable goals of a number of religious traditions; to treat your life as a work in progress, as well as to use it to increase your knowledge is the eudaimonic ideal mentioned above. The point is that the answer to the question of purpose is a matter of one’s theoretical philosophy, while the issue of the import is best treated as one of practical philosophy, and the two are obviously intimately connected.

The nice table that Pawlak and Joll have put together may also serve to illustrate one of my recurring interests on this blog: the exploration of the nature of the relationship between science and philosophy. I have said above that the cause aspect of the UQLU&E is best dealt with by science, while discussions of both purpose and import are more clearly philosophical in nature. Notice, then, that the availability of a sound scientific account of the causes of the universe does favor certain philosophical approaches to purpose and import and disfavor others. But the scientific answers strongly underdetermine the philosophical options on offer. That is, if we agree that the universe came about because of the Big Bang, and that human life is the result of a process of Darwinian evolution, we can exclude some options under purpose and import, but we are still left with pretty much no guidance on the remaining alternatives. Does the choice of a eudaimonic life follow from the Big Bang? Clearly not. Is a quest to minimize suffering, or to become one with all things logically entailed by Darwinian evolution? Again, not at all. So the scientific answers pertinent to some aspects of the UQLU&E constrain, but by no means pinpoint, the philosophical answers, reflecting what I think is a general picture of the relationship between the two disciplines.

What, then, is the status of the first of Pawlak and Joll’s considered meanings of the question of meaning, the one concerning character? As we have seen, they suggest that life’s character might be explained by the causes of life, and I think they are correct. Since I prefer the scientific causal explanation, I am left with only the option that the character of life is whatever we make of it. But that, in turn, is a philosophically broad container which, again, is underdetermined by the underlying scientific answer, thereby again fitting the general scheme just proposed. As Douglas Adams would say, so long, and thanks for all the fish.

Wednesday, October 10, 2012

Rationally Speaking podcast: On Science Fiction and Philosophy


By its very nature, science fiction has always been particularly suited to philosophical exploration. In fact, some of the best science fiction novels, short stories, movies, and TV shows function like extended philosophical thought experiments: what might cloning tell us about our views on personal identity? If we could all take a pill to be happy, would we want to do that? In this episode, Massimo and Julia recall some of their favorite philosophically rich science fiction, and debate the potential pitfalls of using science fiction to reach philosophical conclusions.

Julia's pick: "3 Worlds Collide"
Massimo's pick: "Four kinds of philosophical people"

References:
The Philosophical Roots of Science Fiction
Science Fiction and Philosophy: From Time Travel to Superintelligence

Thursday, August 09, 2012

The Community of Reason, a self-assessment and a manifesto


by Massimo Pigliucci

I have been an active member of the self-described Community of Reason since about 1997. By that term I mean the broad set encompassing skeptics, atheists and secular humanists (and all the assorted synonyms thereof: freethinkers, rationalists, and even brights). The date is easily explained: in 1996 I had moved from Brown University — where I did my postdoc — to the University of Tennessee, where I was appointed assistant professor in the Departments of Botany and of Ecology & Evolutionary Biology. A few months after my arrival in Knoxville, the extremely (to this day) unenlightened TN legislature began discussing a bill that would have allowed school boards to fire teachers who presented evolution as a fact rather than a theory (it is both, of course). The bill died in committee (though a more recent one did pass, go Volunteers!), in part because of the efforts of colleagues and graduate students throughout the State.

It was because of my local visibility during that episode (and then shortly thereafter because I began organizing Darwin Day events on campus, which are still going strong) that I was approached by some members of a group called “The Fellowship of Reason” (now the Rationalists of East Tennessee). They told me that we had much in common, and wouldn’t I want to join them in their efforts? My first thought was that an outlet with that name must be run by cuckoos, and at any rate I had a lab to take care of and tenure to think about, thank you very much.

But in fact it took only a couple more polite attempts on their part before I joined the group and, by proxy, the broader Community of Reason (henceforth, CoR). It has been one of the most meaningful and exhilarating decisions of my life, some consequences of which include four books on science and philosophy for the general public (counting the one coming out in September); columns and articles for Skeptic, Skeptical Inquirer, Free Inquiry, The Philosopher’s Magazine and Philosophy Now, among others; and of course this blog and its associated podcast. I made many friends within the CoR, beginning with Carl Ledendecker of Knoxville, TN (the guy who originally approached me about the Fellowship of Reason), and of course including the editor and writers of Rationally Speaking.

But... yes, there is a “but,” and it’s beginning to loom large in my consciousness, so I need to get it out there and discuss it (this blog is just as much a way for me to clarify my own ideas through writing and the feedback of others as it is a channel for outreach as an academic interested in making some difference in the world). The problem is that my experience (anecdotal, yes, but ample and varied) has been that there is quite a bit of un-reason within the CoR. This takes the form of more or less widespread belief in scientific, philosophical and political notions that don’t make much more sense than the sort of notions we — within the community — are happy to harshly criticize in others. Yes, you might object, that’s just part of being human; pretty much every group of human beings holds unreasonable beliefs, so why am I surprised or worried? Well, because we think of ourselves — proudly! — as a community of reason, where reason and evidence are held as the ultimate arbiters of any meaningful dispute. To find out that too often this turns out not to be the case is a little bit like discovering that moral philosophers aren’t more ethical than the average guy (true).

What am I talking about? Here is a (surely incomplete and, I’m even more sure, somewhat debatable) list of bizarre beliefs I have encountered among fellow skeptics-atheists-humanists. No, I will not name names, because this is about ideas, not individuals (but heck, you know who you are...). The list, incidentally, features topics in no particular order, and it would surely be nice if a sociology student were to conduct systematic research on this for a thesis...

* Assorted nonsense about alternative medicine. Despite excellent efforts devoted to debunking “alternative” medicine claims, some atheists in particular endorse all sorts of nonsense about “non-Western” remedies.

* Religion is not a proper area of application for skepticism, according to some skeptics. Why on earth not? It may not be a suitable area of inquiry for science, but skepticism — in the sense of generally applied critical thinking — draws on more than just science (think philosophy, logic and math).

* Philosophy is useless armchair speculation. So is math. And logic. And all theoretical science.

* The notion of anthropogenic global warming has not been scientifically established, something loudly proclaimed by people who — to the best of my knowledge — are not atmospheric physicists and do not understand anything about the complex data analysis and modeling that goes into climate change research.

* Science can answer moral questions. No, science can inform moral questions, but moral reasoning is a form of philosophical reasoning. The is/ought divide may not be absolute, but it is there nonetheless.

* Science has established that there is no consciousness or free will (and therefore no moral responsibility). No, it hasn’t, as serious cognitive scientists freely admit. Notice that I am not talking about the possibility that science has something meaningful to say about these topics (it certainly does when it comes to consciousness, and to some extent concerning free will, if we re-conceptualize the latter as the human ability of making decisions). I am talking about the dismissal-cum-certainty attitude that so many in the CoR have so quickly arrived at, despite what can be charitably characterized as a superficial understanding of the issue.

* Determinism has been established by science. Again, wrong, not only because there are interpretations of quantum mechanics that are not deterministic, but because a good argument can be made that that is simply not the sort of thing science can establish (nor can anything else, which is why I think the most reasonable position in this case is simple agnosticism).

* Evolutionary psychology is on an epistemic par with evolutionary biology. No, it isn’t, for very good and well understood reasons pertaining to the specific practical limitations of trying to figure out human selective histories. Of course, evopsych is not a pseudoscience, and it’s probably best understood as a science-informed narrative about the human condition.

* The Singularity is near! I have just devoted a full column for Skeptical Inquirer (in press) to why I think this amounts to little more than a cult for nerds. But it is a disturbingly popular cult within the CoR.

* Objectivism is (the most rational) philosophy according to a significant sub-set of skeptics and atheists (not humanists, since humanism is at complete odds with Randianism). Seriously, people? Notice that I am not talking about libertarianism here, which is a position that I find philosophically problematic and ethically worrisome, but is at least debatable. Ayn Rand’s notions, on the other hand, are an incoherent jumble of contradictions and plagiarism from actual thinkers. Get over it.

* Feminism is a form of unnecessary and oppressive liberal political correctness. Oh please, and yet, rather shockingly, I have heard this “opinion” from several fellow CoRers.

* Feminists are right by default and every attempt to question them is the result of oppressive male chauvinism (even when done by women). These are people who clearly are not up on readings in actual feminism (did you know that there have been several waves of it? With which do you best connect?).

* All religious education is child abuse, period. This is a really bizarre notion, I think. Not only does it turn 90% of the planet into child abusers, but people “thinking” (I use the term loosely) along these lines don’t seem to have considered exactly what religious education might mean (there is a huge variety of it), or — for that matter — why a secular education wouldn’t be open to the same charge, if done as indoctrination (and if it isn’t, are you really positive that there are no religious families out there who teach doubt? You’d be surprised!).

* Insulting people, including our close allies, is an acceptable and widespread form of communication with others. Notice that I am not talking about the occasional insult hurled at your opponent, since there everyone is likely a culprit from time to time (including yours truly). I am talking about engaging in apologia on behalf of a culture of insults.

The point of this list, I hasten to say, is not that the opinions that I have expressed on these topics are necessarily correct, but rather that a good number of people in the CoR, including several leaders of the movement(s), either hold to clearly unreasonable opinions on said topics, or cannot even engage in a discussion about the opinions they do hold, dismissing any dissenting voice as crazy or irrelevant.

As you can see, the above is a heterogeneous list that includes scientific notions, philosophical concepts, and political positions. What do the elements of this list have in common, if anything? A few things, which is where I hope the discussion is going to focus (as opposed to attempting to debunk one’s pet entry, or deny that there is a problem to begin with).

A) Anti-intellectualism. This is an attitude of disrespect for the life of the mind and those who practice it. It may seem strange to claim that members — and even some leaders — of the CoR engage in anti-intellectualism, but the evidence is overwhelming. When noted biologists or physicists in the movement dismiss an entire field of intellectual pursuit (philosophy) out of hand, they are behaving in an anti-intellectual manner. When professional “skeptics” tell us that they don’t buy claims of anthropogenic global warming, they are being anti-intellectual because they are dismissing the work of thousands of qualified scientists. To be more precise, I think there are actually two separate sub-issues at play:

A1) Scientism. This is the pernicious tendency to believe that science is the only avenue to knowledge and the ultimate arbiter of what counts as knowledge. And the best way to determine whether you are perniciously inclined toward scientism is to see whether you vigorously deny its existence in the community.

A2) Anti-intellectualism proper. This is the thing on display when “skeptics” reject even scientific findings, as in the above mentioned case of global warming.

B) The “I’m-smarter-than-thou” syndrome. Let’s admit it, skepticism does have a way of making us feel intellectually superior to others. They are the ones believing in absurd notions like UFOs, ghosts, and the like! We are on the side of science and reason. Except when we aren’t, which ought to at least give us pause, and prompt us to enroll in the nearest hubris-reducing ten-step program.

C) Failure of leadership. It is hard to blame the rank and file of the CoR when they are constantly exposed to such blatant and widespread failure of leadership within their own community. Gone, it seems, are the days of the Carl Sagans, Martin Gardners, and Bertrand Russells; welcome to the days of bloggers and twitterers spouting venom or nonsense just because they can.

Where does this leave us? Well, for one thing — at this very moment — probably with a lot of pissed off people! But once the anger subsides, perhaps we active members of the CoR can engage in some “soul” searching and see if we can improve our own culture, from the inside.

To begin with, are there positive models to look up to in this endeavor? Absolutely, and here I will name names, though the following list is grossly incomplete, both for reasons of space and because some names just happened not to come to mind at the moment I was typing these words. If you are not listed and you should be, forgive me and let’s remedy the omission in the discussion thread. So here we go: Sean Carroll, Dan Dennett, Neil deGrasse Tyson, D.J. Grothe, Tim Farley, Ken Frazier (and pretty much anyone else who writes for Skeptical Inquirer, really), Ron Lindsay, Hemant Mehta, Chris Mooney, Phil Plait, Steve Novella (as well as the other Novellas), John Rennie, Genie Scott, Michael Shermer, Carl Zimmer, and many, many more.

Do I have any practical suggestions on how to move the CoR forward, other than to pay more attention to what the people just mentioned say, and perhaps a little less attention to what is spouted by some others who shall go unmentioned? At the risk of sounding somewhat immodest, yes, I do. Here are a few to get us started (again, discussion on how to improve the list will be most welcome). Once again, the order is pretty much random:

i) Turn on moderation on all your blogs; this will raise the level of discourse immediately by several orders of magnitude, at the cost of a small inconvenience to you and your readers.

ii) Keep in mind the distinction between humor and sarcasm; leave the latter to comedians, who are supposed to offend people. (In other words, we are not all Jon Stewarts or Tim Minchins.)

iii) Apply the principle of charity, giving the best possible interpretation of someone else’s argument before you mercilessly dismantle it. (After which, by all means, feel free to go ahead and mercilessly dismantle it.)

iv) Engage your readers and your opponents in as civil a tone as you can muster. Few people deserve to be put straight into insult mode (Hitler and Pat Robertson come to mind).

v) Treat the opinions of experts in a given domain with respect, unless your own domain of expertise is reasonably close to the issue at hand. This doesn’t mean that you cannot criticize experts, or that you should worship their pronouncements; it only means avoiding anti-intellectualism while doing so.

vi) Read more philosophy; it will do you a world of good. (I am assuming that if you are a member of the CoR you already read quite a bit of science. If not, why are you here?)

vii) Pick the right role models for your skeptics’ pantheon (suggestions above; the people to avoid are left to your keen intuition).

viii) Remember what the objectives are: to learn from exposing your ideas to the cross-criticism of others and in turn help others learn to think better. Objectives do not include showing the world how right and cool you are.

ix) Keep in mind that even the very best make mistakes and occasionally endorse notions that turn out to be wrong. How is it possible that you are the only exception to this rule?

Saturday, July 21, 2012

Rationally Speaking podcast: Philosophical Shock Tactics


Why do philosophers sometimes argue for conclusions that are disturbing, even shocking? Some recent examples include the claim that it's morally acceptable to kill babies; that there's nothing wrong with bestiality; and that having children is unethical.

In this episode of Rationally Speaking, Massimo and Julia discuss what we can learn from these "Philosophical shock tactics," the public reaction to them, and what role emotion should play in philosophy.

Julia's pick: "What Intelligence Tests Miss: The Psychology of Rational Thought."

Massimo's pick: "Graphing the history of philosophy."

References:

* Philosophical shock tactics
* Be a Communications Consequentialist

Wednesday, June 27, 2012

Time Travel, Dinosaur Clones, and How To Debate Free Will


by Leonard Finkelman

Nine times out of ten, the conversation starts as follows:

“I’m Leonard,” I say. “I’m a philosopher.”
The interlocutor responds, “Oh! What do you do with a degree in philosophy?”
I shrug. “I hung mine in my apartment. It was the easiest way to prove the existence of walls.”
The interlocutor laughs politely, then remembers an apparently urgent appointment on the other side of the room.

If you become a philosopher, then a snappy way to answer the “what do you do” question is going to be an important element of your intellectual utility belt. It will get you through the vast majority of human interactions in a way that doesn’t require a constantly updated summary of the most recent Jobs for Philosophers.

Just remember: on rare occasions, you’re going to meet that tenth person.

I had one such occasion recently, when an (admittedly comely) interlocutor started a conversation about philosophy of physics. As the discussion evolved, I eventually had to fall back on another tool in my utility belt: purposeful controversy.

“I don’t believe in free will.”
“You don’t?” The young lady seemed taken aback. She asked, “Then you believe in fate?”

That’s the question that really got me thinking, and the one that prompted this essay. I had to give a typically philosophical response: yes and no. To understand why that’s not simply a dodge, and why it’s worth revisiting an issue that’s already been covered extensively around these parts, let’s empty out the rest of my utility belt. It turns out that I don’t keep much in there: just some time travel stories and a copy of Jurassic Park (1).

For my money, there’s no entertainment more enjoyable on a greater number of levels than a good time travel story. The genre offers a number of interesting points for discussion. Does the story hold together consistently? What happened to other people in the original timeline? Is the ending to Superman: The Movie a crime against functional human brains? More important for our purposes, though, is the fact that many philosophers consider time travel to be part-and-parcel of discussions of free will.

This is not to say that all time travel is of philosophical value. After all, everyone travels forward through time, and all too often without any interesting results. Similarly, we’re not talking about the familiar DeLorean-assisted romps through spacetime. No: the philosophically valuable sort of time travel is somewhat more boring. Imagine simply rewinding the entirety of the universe back to an earlier point, then allowing it to play forward again. A time traveler in this sense would have no idea that she had traveled at all, because all of her thoughts and experiences would be exactly the same as they had been at the earlier point (2).

Consider the recently-released movie “Safety Not Guaranteed,” wherein the main character (mild spoiler alert!) entertains the idea of going back in time to prevent the death of her mother, for which the character (played by Aubrey Plaza) feels responsible. In the rewinding sense of time travel, she would find herself at a point ten years earlier in her life, thinking the same thoughts and feeling the same desires that she had previously. Would it be possible for her to behave differently, such that she doesn’t initiate a sequence of events that leads to her mother’s death? Intuitively, I say no.

But the truth is that my answer doesn’t matter, because it doesn’t say anything about fate. It turns out that time travel isn’t a particularly useful tool in discussing that issue.

I guess that leaves me with Jurassic Park.

By the end of his career, Michael Crichton would come to adopt a dogmatic opposition to the scientific establishment: in the name of “open-mindedness,” he denied global climate change, Darwinian evolution, and the proposition that airline unions don't bear any analogy with velociraptors. This conviction that science is an “outmoded practice” in the process of “destroying itself” had much more reasonable origins in Jurassic Park, wherein Crichton (rightly!) objected less to science itself and more to scientism:

“Largely through science, billions of us live in one small world, densely packed and intercommunicating. But science cannot help us decide what to do with that world, or how to live. …[its power] will be in everyone’s hands. … And that will force everyone to answer the same question—What should I do with my power?—which is the very question science says it cannot answer.”

This is just David Hume’s famed is/ought distinction, and if you don’t accept it by now, then I’m certainly not going to convince you. You’ll find it to be either axiomatically true or self-evidently false. Still: one is hard-pressed to find a balanced biochemical reaction that reads, “cloned T. rex ⇌ bad idea,” which is precisely what you should find if the distinction is denied (3).

I imagine the young lady was so surprised that I might believe in fate because the concept seems so religious. Indeed, “fate” has historically been tied to the idea of predestination, as determined by some deity or another. If Aubrey Plaza’s (fictional) mother is fated to die on some day, then she is supposed to die on that day. When she does, Ms. Plaza can’t be blamed for it: Ms. Plaza didn’t determine that fate, and so she can’t be held responsible for bringing it about.

Consider the language here. She’s supposed to die. She can’t be held responsible. These are moral considerations; we are very firmly on the “ought” side of the is/ought divide. This is very much in line with some of the earliest discussions of free will. Aristotle, for example, reserved his most extensive elaboration on the subject of free will for his Nicomachean Ethics, arguing that responsibility can only follow from deliberate choice. All of our moral reasoning depends on the assignment of praise and blame, and so we need an account of free will to justify reward and punishment.

This is why time travel stories, despite their ubiquity in discussions of free will, are such a poor tool for addressing the issue with which most people are concerned. Through these stories, we consider what could have happened previously, not what will happen in the future. We’re not talking about moral responsibility; we’re talking about metaphysical possibility. It’s possible that her mother won’t die. We’re on the “is” side of the is/ought divide now, and so we’re no longer addressing questions raised in the moral context (4).

“Yes and no” isn’t a wishy-washy non-answer to the question of whether or not I believe in fate because I can answer that question differently on the two sides of the is/ought divide. Metaphysically: yes, there is only one way the universe will unfold; Aubrey Plaza’s character will cause events that lead to her mother’s death no matter how many times she rewinds time. Morally: no, the way the universe unfolds isn’t the way things are supposed to be; it’s just the way things happen, in part because of our actions, and so Ms. Plaza’s character (indeed, any one of us) can still be responsible for what she does.

What do you do with a degree in philosophy? Obviously, you’d like to be doing something worthwhile and productive. It just isn’t clear to me that debating free will as a metaphysical issue accomplishes that. Intuition tells me that it’s not possible for the past to have proceeded differently; intuition may tell you otherwise. How do we resolve that debate? Our ignorance of what is and isn’t possible is so profound that philosophers can’t even agree on a system of logic with which to judge the truth of possibility claims. Discussions of free will in the metaphysical context just aren’t going to go very far; however, if philosophers get around to the moral context at all, it’s only after they’ve had the metaphysical debate.

My point is this: the philosopher’s traditional ordering of concerns—metaphysics first, ethics second—is just going to get us stalled before we ever address the issue people care about in the free will debate. Maybe, just this once, we should put ethics first.

In ten out of ten drafts, the essay ends as follows:

I’m Leonard. I don’t believe in free will, but you and I probably agree about moral responsibility anyway. I only raised free will in the first place to instigate a debate.
Why? Because that’s what you do with a degree in philosophy.
_____

(1) Even I’m surprised by the number and variety of situations for which these tools are completely sufficient. Take that, MacGyver.

(2) It’s precisely because of distinctions like these that discussions of time, whether philosophical or scientific, get so confusing so quickly. Personally, I’ve found physicist Sean Carroll to be helpful in this regard.

(3) Let’s just dispense with this debate right now: if you think that cloning a T. rex might conceivably be a good idea, then you just haven’t read Jurassic Park closely enough. Watching the movie isn’t nearly an acceptable substitute.

(4) To be sure, the metaphysical issue is not unrelated to the moral one. Stephen Jay Gould referred to fields whose subject matter lay on different sides of the “is/ought” distinction as “nonoverlapping magisteria.” He noted that “the two magisteria bump right up against each other, interdigitating in wondrously complex ways along their joint border.” The debate over free will takes place right along these disputed “is/ought” borderlands.

Saturday, June 02, 2012

Between scientists and citizens, part I


by Massimo Pigliucci

I’m at a conference organized by the Science Communication Project at Iowa State University, entitled Between Scientists and Citizens: Assessing Expertise in Policy Controversies. I will be blogging from some of the sessions, to give you a flavor of what the conference is about (there are four parallel sessions, and unfortunately I haven’t developed the ability to be in multiple places simultaneously — working on it, though!). I will be giving one of two keynote talks at the conference (my title: Nonsense on stilts about science, field adventures of a scientist-philosopher — I will post it over at PlatoFootnote soon).

So let’s get started with Deserai Crow and Richard Stevens (University of Colorado-Boulder) on “Framing science: The influence of expertise and ambiguity in media coverage.” Recent studies suggest that Americans are increasingly interested in but also increasingly uninformed about science. They naturally get their science information largely from the media, and media organizations treat science coverage as a niche or beat subject. Interestingly, only about 3% of journalists covering science actually have a science or math background, while most majored in communication fields.

Expertise is often thought of in terms of skills, but within the context of science communication it really refers to authority and credibility. Expertise is communicated at least in part through the use of jargon, with which of course most journalists are not familiar. Jargon provides an air of authority, but at the same time the concepts referred to become inaccessible to non-specialists. Interestingly, journalists prefer sources that limit the use of jargon, but they themselves deploy jargon to demonstrate scientific proficiency.

The authors suggest that there are two publics for science communication, one that is liberal, educated and with a number of resources at its disposal; the other with less predictable and less-formed opinions. The authors explored empirically (via a survey of 108 Colorado citizens) the responses of liberal and educated people to scientific jargon by exposing them to two “treatments”: jargon-laden vs lay-terminology news articles. The results showed that scientists were considered the most credible sources in the specific area of environmental science (94.3% agreed), followed by activists (61.1%). The least credible were industry representatives, clergy and celebrities. (Remember, this is among liberal educated people.) Interestingly, the use of jargon per se did not increase acceptance of the news source or of the content of the story. So the presence of scientific expertise is important, but not the presence of actual scientific detail in the story.

It is highly unfortunate that Crow and Stevens didn’t present the same survey to the second type of public they identified in their preliminary results. Apparently that part of the study is being carried out now.

The next talk I attended was “Reason, values and evidence: Rational dissent from scientific expertise,” by Bruce Glymour and Scott Tanona (Kansas State University). Widespread public rejection of scientific consensus in the US is often declared to be “irrational” (for instance in books by Chris Mooney). But in fact sometimes rejection of scientific claims is not irrational. Science denial can be a rational response to information which, if accepted, would induce a conflict in core values. The idea is that values underwrite all notions of rationality, but there is no theory of rationality to decide fundamental values. Indeed, trust in logic, rational choice and science can themselves be understood as values.

Consider a decision of whether to carry an umbrella with you given a certain probability of rain. Different people will fill the corresponding decision matrix differently (depending, for instance, on how much they dislike carrying umbrellas around, or getting wet, and so on). It’s not at all the case that there is one rational way to construct the matrix.
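To make the point concrete, here is a minimal sketch in Python of the umbrella decision matrix. The probabilities and (dis)utilities below are my own illustrative assumptions, not figures from the talk: two agents who assign different costs to carrying an umbrella and to getting wet can each be perfectly rational and yet reach opposite decisions.

```python
def expected_utility(action, p_rain, costs):
    """Expected utility of 'carry' or 'leave', given P(rain) and a cost table."""
    if action == "carry":
        # Carrying imposes a fixed nuisance cost, rain or shine.
        return -costs["carry"]
    # Leaving the umbrella costs nothing unless it rains.
    return -p_rain * costs["wet"]

def best_action(p_rain, costs):
    """Pick the action with the higher expected utility."""
    return max(["carry", "leave"],
               key=lambda a: expected_utility(a, p_rain, costs))

p_rain = 0.3
# Agent A mildly dislikes carrying umbrellas but hates getting wet.
agent_a = {"carry": 1, "wet": 10}
# Agent B finds umbrellas a major nuisance and doesn't much mind rain.
agent_b = {"carry": 5, "wet": 6}

print(best_action(p_rain, agent_a))  # carry: -1 beats -3
print(best_action(p_rain, agent_b))  # leave: -1.8 beats -5
```

Both agents apply the same decision rule (expected-utility maximization) to the same probability of rain; only the values plugged into the matrix differ, which is exactly why there is no single rational way to fill it in.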

Or take logic itself. It is well known that there are situations in which different types of logic do not fare very well (propositional logic, for instance, doesn’t deal well with Sorites paradoxes). And of course there are a variety of types of logic, and it makes no sense to ask which one is the best. It depends on what you wish to use it for.
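As a toy illustration of the point about different logics (the threshold and the truth-degree function here are my own illustrative assumptions, not any standard treatment of the sorites), a bivalent predicate is forced to draw an arbitrary sharp line somewhere in a sorites series, whereas a many-valued logic can let "heap-ness" come in degrees:

```python
def heap_classical(n):
    # Bivalent logic: "heap" is either true or false, so some
    # arbitrary cutoff must flip the answer between n and n+1 grains.
    return n >= 100

def heap_fuzzy(n, full=10_000):
    # A many-valued logic can assign intermediate truth degrees in [0, 1],
    # so heap-ness fades gradually instead of jumping at a sharp line.
    return min(1.0, n / full)

print(heap_classical(99), heap_classical(100))  # False True -- sharp jump
print(heap_fuzzy(1), heap_fuzzy(5_000))         # 0.0001 0.5 -- gradual
```

Neither treatment is "the best" in the abstract: the classical predicate is the right tool when crisp answers are required, while the graded one better tracks vague everyday predicates, which is the speaker's point that the choice of logic depends on what you wish to use it for.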

Same goes for the scientific method. There is no complete account of the scientific method, and again one can choose certain methods rather than others, depending on what one is trying to accomplish (a choice that is itself informed by one’s values). And of course the Duhem-Quine thesis shows that there is no straightforward way to falsify scientific theories (contra Popper).

If there were supernatural causes that interact with (or override) the causes being studied by science, but are themselves undiscoverable, this would lead to false conclusions and bad predictions. Which means that the truth is discoverable empirically only if such supernatural causes are not active. Science cannot answer the question of whether such factors are present, which raises the question of whether we ought to proceed as if they were not (i.e., methodological naturalism).

Is methodological naturalism wishful thinking (since it is not empirically verifiable)? If one’s primary goal is to discover truths about the world that support reliable predictions, then methodological naturalism is rational. But it can be rational to believe without evidence, or even against the evidence, again depending on what your goals are.

The best theories of rationality are instrumental. No theory prescribes core values and goals, but theories can give prescriptions for reaching goals. Such theories include instrumental values. What happens often — both in the case of science and in that of moral dilemmas — is that several of one’s values come into conflict. A typical response is to deny the facts, which satisfies yet another value, the desire not to have to prioritize among one’s values.

The authors suggest that the best one can do is to engage in an exercise of reflective equilibrium, which, however, cannot itself tell you which values are more important than others.

The last talk of the first morning was about “Expertise, equivocation, and eugenics,” by John Jackson (University of Colorado-Boulder). The author began by pointing out that historians of science are frustrated by the kind of abstract and formalized models of science developed by philosophers; the latter, however, are frustrated by historians’ detailed contextualization of science that seems to miss the general picture. He asked whether rhetorical argumentation or informal logic can provide a way to bridge the two.

Consider the terms “fit” and “fitness” in evolutionary biology. T.H. Huxley famously gave a technical definition of fitness within the theory of natural selection, though the term was borrowed from previous informal usage in lay language, where it means to be in good physical or mental shape. For eugenicists, the problem was the survival of the unfit, so to speak, which of course would be oxymoronic if one used the term “fit” in the technical sense. According to eugenicist Arthur Balfour, “the feeble-minded” were getting better adapted to their (social) environment, and that had to be changed by government intervention.

The author suggests that from a philosophical standpoint the problem here is simply caused by a fallacy of equivocation, switching back and forth between the technical and the vernacular meanings of “fit.” Charles Reed, another eugenicist, was also equivocating, using the term “fit” in the scientific sense when explaining the problem (claiming the mantle of Darwin for the cause), but switching to the vernacular sense when proposing social policies (to generate certain political and social overtones).

But from a historian’s perspective, eugenics was a scientific research program, a social movement, and a legislative agenda, all rolled into one. For eugenicists the political order was a product of biological race, so that to speak of political institutions was to speak of heredity and vice versa. By the end of the talk, however, I felt like more development of the idea of reconciling the philosophical and historical accounts was needed.

And so we get to the afternoon session, beginning with “The ambiguous relationship between expertise and authority,” by Moira Kloster (University of the Fraser Valley). [Unfortunately, this talk was without slides, and since it was after lunch, I paid less attention to what the speaker was saying...] The author talked about a class she teaches where students enact different roles related to expertise and authority (e.g., a doctor who advises about a cataract operation, a friend who has actually gone through such an operation, etc.). The point of the exercise was to explore the idea that expert advice is insufficient to reach a decision unless one has also had occasion to reflect on what one values about the problem concerning which the expert is giving advice.

The author asks whether, for instance, a nutritionist — qua expert — has the ability to enforce a better diet in a number of particular situations. The answer is no: in a hospital context, things would also depend on, for instance, the costs associated with different recommendations; in a political context (e.g., about vending machines in schools) there will be issues of cost as well as public reaction and so forth. So the expert’s authority will need to be negotiated in a broader context than just his particular area of expertise.

The suggestion is to bring in a different kind of expert, similar to a business coach (who does not have expertise in business, but coaches CEOs about decision making and communication). This would be, then, an individual whose role would be to advise people on how to make decisions, including taking into consideration the advice of experts.

Final talk of the day (well, before my keynote): “The ethos of expertise: How social conservatives use scientific rhetoric,” by Jamie McAfee (Iowa State). The paper [no slides!] focused on James Dobson’s Focus on the Family organization. Dobson is apparently well known for the use of “therapeutic rhetoric” as a base from which to articulate a conservative worldview.

The author based his analysis of Dobson’s influence on cultural theorists Ernesto Laclau and Chantal Mouffe’s Hegemony and Socialist Strategy as well as on Harry Collins and Robert Evans’ Rethinking Expertise, which attempts to describe legitimate expertise and categorizes different kinds of expertise. [I must admit that I am deeply skeptical of Collins’ work, which I find at times bordering on incoherence, like much radical sociology of science. I’m not too keen on post-modern cultural theorizing either, but I have not read Laclau and Mouffe.]

All in all, McAfee claims that Dobson has turned his “expertise” (as a therapist) into political capital, and has given himself permission to explicitly import his ideology (fundamentalist Christianity) into his role as an expert. [Yes, though we may begin by questioning in what sense Dobson is an expert on anything at all, but that would require us to step outside the postmodern / radical sociology framework.]

Well, that’s all for the first day, folks. Part II and conclusion coming soon...

Tuesday, May 15, 2012

In defense of criticism (and skepticism)


by Massimo Pigliucci

My friend Benny (who produces the Rationally Speaking podcast) really hates the word “skepticism.” He understands and appreciates its meaning and long intellectual pedigree (heck, we even did a show on that!), but he also thinks — based on anecdotal evidence — that too many people apply a negative connotation to the term, often confusing it with cynicism. (And notice, to make things even more confusing, that neither modern term has the philosophical connotations that characterized the ancient skeptics and the ancient cynics!). On the contrary, I really like the word, and persist in using it in the positive sense adopted by David Hume (and, later, Carl Sagan): skepticism is a critical stance, especially toward notions that are either poorly supported by evidence or based on poor reasoning. As Hume famously put it, “A wise man ... proportions his belief to the evidence” (from which Carl Sagan’s famous “Extraordinary claims require extraordinary evidence”).

Now, why on earth would skeptics be associated with (the modern sense of) cynicism, an entirely negative attitude typical of people who take delight in criticism for the sake of criticism, negativity for the sake of negativity? I blame — at least in part — Francis Bacon. Let me explain.

Bacon was one of the earliest philosophers of science, and his main contribution was a book called The New Organon, in purposeful and daring contrast with Aristotle’s Organon. The latter is a collection of the ancient Greek’s works on logic, and essentially set down the parameters for science — such as it was — all the way to the onset of the scientific revolution in the 16th century. Bacon, however, would have none of Aristotle’s insistence on the superiority of deductive logic (which is, among other things, the basis of all mathematics). For Bacon, new knowledge is the result of reduction (explaining a complex phenomenon in terms of a simpler one) and induction (generalization from known cases). Bacon thought of his inductive method as having two components, which he called the pars destruens (the negative part) and the pars construens (the positive one). The first was concerned with eliminating — as far as possible — error, the second with the business of actually acquiring new knowledge.

It’s a nice idea, as long as one understands that the two partes are logically distinct and need not always come as a package (they did in Bacon’s treatise). Think of it in terms of the concept of division of cognitive labor in science. This is an idea famously discussed by Philip Kitcher, who explored the relevance of the social structure of science to its progress, arguing that such structure — once properly understood — can be improved upon to further the scientific enterprise. The basic idea, however, is familiar enough, even in everyday life: some people are good at X, others at Y, and we don’t ask everyone to be good at both, especially if X and Y are very different kinds of activities.

The same goes, I think, for Bacon’s partes destruens and construens: he may have pulled both off in the New Organon, but the more human knowledge progresses, the more it requires specialization. We have physicists and biologists, geologists and astronomers. Not only that: we have theoretical physicists and experimental ones, and even those are far too broad categories in the modern academy (e.g., theoretical atmospheric physics requires approaches that are very different from those deployed in, say, theoretical quantum mechanics). Why not, then, happily acknowledge that some people are better at constructing new knowledge (theoretical or empirical) and others at finding problems with what we think we know, or with how we currently proceed in attempting to know (Bacon’s correction of “errors”)? Indeed, this division of cognitive labor may even reflect different people’s temperaments, just like personal preference and style may lead one to pick a particular musical instrument rather than another one when playing in an orchestra (or to become a theoretical or experimental physicist, as the case may be).

What does any of the above have to do with the perception problem from which skepticism (allegedly) suffers? Well, skeptics (and, ahem, philosophers!) are in the criticism business, and nobody likes to be criticized (including skeptics and philosophers). But we tend to cut critics some slack only if they also propose ways forward, constructive solutions to the problems they identify. This expectation, I think, is a mistake. Criticism is valuable per se, as a way to engage our notions, show where they may go wrong, and help (other) people see ways forward. Criticism — pace Bacon — is inherently constructive, even when negative, because it allows us to make progress by identifying our errors and their causes. And it can be highly entertaining: just read a good (negative) movie, book or art review, or perhaps watch an episode of the (now ended) Bullshit! series.

This under-appreciated role of criticism, incidentally, may also be responsible (in part, i.e. egos and turf wars aside) for the continuing diatribes between philosophers and physicists, where too often the latter do not appreciate that the role of philosophy is a critical one, with the discipline making progress by eliminating mistaken notions rather than by discovering new facts (we’ve got science for the latter task, and it’s very good at it!).

So, my dear Benny and other fellow skeptics, let’s reclaim the term skepticism as one that encapsulates a fundamental attitude that all human beings interested in knowledge and truth should embrace: the idea that mistakes can be found and eliminated. It’s not at all a dirty job, and we are able and ready to do it.

Monday, March 05, 2012

Progress in philosophy not an oxymoron


by Massimo Pigliucci

A little while back I tackled the perennial question of whether, and in what sense, philosophy makes progress. But that was by means of a fictional dialogue between two robots, part of my “5-minute Philosopher” series, and it’s time to revisit the topic. The occasion has been provided by a lively meetup discussion I facilitated a few weeks ago, based on an article by Toni Vogel Carey that appeared in Philosophy Now magazine.

Carey sets up the discussion by arguing that philosophy stands somewhere between science and the arts, where the first one is the common paragon of a cumulatively progressive enterprise, while within the realm of the latter the whole idea of progress appears to be ridiculous. Although there is much that I agree with in Carey’s article, this set-up strikes me as questionable, particularly because the author counts mathematics as a science. Math is certainly useful to science (and so is logic and, sometimes, even art!), but it ain’t the same thing as science. The latter is concerned with empirically based hypothesis testing, while math makes progress more like logic (a branch of philosophy!), i.e. by a deductive exploration of the consequences of sets of axioms (in logic and philosophy these are called assumptions). So math and logic represent fields clearly characterized by cumulative progress which are not science, thereby undermining the idea that science is the paragon for progressive intellectual enterprises.

Moreover, some of my fellow meetupers even questioned the idea that art doesn’t progress. Yes, as Nobel biologist Francois Jacob (cited by Carey) said, “Beethoven did not surpass Bach in the way that Einstein surpassed Newton,” but the key qualification here is in the (same) way. Beethoven explored ways of composing hitherto unknown to musicians, which has to count as progress in a meaningful (though obviously not scientific) sense of the term. I pointed out during that evening’s discussion that the invention of perspective in Renaissance painting also was an unquestionable case of progress in art, as it made possible painting in ways that were simply not available before. I’m sure other examples can be easily found, especially by historians of music and art.

The heart of Carey’s article, however, concerns three general types of progress in philosophy, each accompanied by an example. The first one is what the author refers to as “progress as destruction.” A lot of what goes on in philosophical research is showing that someone else got it wrong, thereby moving the debate onto higher ground in logical space, so to speak. Carey’s example is Edmund Gettier’s famous demonstration that Plato was wrong when he defined knowledge as “justified true belief.” Gettier did this in a very short paper, using counterexamples. The one Carey provides is actually clearer than the one originally presented by Gettier. Imagine you were watching the final of the US Open a few years back and saw John McEnroe win the match point against Jimmy Connors. Assume further that it is indeed true that McEnroe won the Open that year. Apparently, you have a belief that is both true (McEnroe did win) and justified (you watched the point being played). But it turns out that — because of a technical glitch — you actually saw a replay of a similar match point that had allowed McEnroe to beat Connors the year before! Gettier would argue that you have formed a belief that is both true and justified, and yet does not amount to knowledge. Now, set aside the discussion of how one could fix Plato’s definition (no one has succeeded so far), because we need to proceed to Carey’s second type of philosophical progress.

This is progress understood as clarification, the sort of thing that Wittgenstein (himself not exactly a shining example of clarity) was presumably thinking of when he said that “Philosophy is a battle against the bewitchment of our intelligence by means of language.” The idea is that philosophers understand certain issues better when they can analytically parse distinct meanings or applications of given concepts. Carey’s example is John Rawls’s analysis of rules within the context of rule- (as opposed to act-) utilitarianism. Rawls distinguished “summary” and “practice” concepts of rules, where the first one works as a heuristic that summarizes past decisions, while the latter examines particular cases of application of a given rule. Without getting into details, Rawls’ approach helped to make sense of the advantages of rule-utilitarianism over act-utilitarianism, at the same time that it also made clear that rule-utilitarianism is barely utilitarianism at all, and falls uncomfortably close to its chief rival, deontology (i.e., rule-based ethics).

The third and last situation considered by Carey is “progress as doubt,” in which philosophers provide a needed counter to over-enthusiastic practitioners of their own and of other disciplines (e.g., science), by pointing out just how much we really don’t know. Here David Hume’s famous problem of induction comes to mind. Hume argued very effectively that induction — on which much everyday reasoning and especially scientific inference are based — cannot be logically justified on independent grounds. (If you think you can get out of this by arguing something along the lines of “induction works,” think again: that would be invoking inductive reasoning to support inductive reasoning, and you’d be open to one of the worst charges in philosophical reasoning, that of circularity.) One cannot help but think of Socrates, and of the Delphic Oracle’s statement that he was the wisest man in all of Greece, apparently on the basis that he knew that he didn’t know much.

There are certainly other examples one could line up following Carey’s approach. Quine’s criticism of the previously universally accepted distinction between synthetic and analytic statements; Popper’s proposal that scientific hypotheses have to be falsifiable, followed by a Duhem-Quine inspired argument showing that falsification doesn’t work; the increasing sophistication of different versions of utilitarian ethics (from Bentham to Mill to Singer); the various moves and counter-moves in the debate in philosophy of science between realists and anti-realists; and so on.

What all of these modes and examples of progress in philosophy have in common is that they use analysis to parse and explore the logical space in which philosophical discourse exists. One begins with a given set of assumptions and works out their implications, until someone points out a problem with some of those implications, which requires either the addition of other postulates or the abandonment of the initial assumptions and their replacement by another set that may work out better. In this sense, philosophical analysis, again, is much more similar to mathematics than to science, and the discipline of logic represents a great example of it, both because it is a branch of philosophy that has clearly made progress, and because it can be said to actually include mathematics, at least in the sense that math is also about the application of deductive reasoning to uncover the properties of systems of axioms. That said, of course, I do not expect my colleagues in the math department to move in with us, though they would certainly be welcome...