About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog may be reprinted under the standard Creative Commons license.
Thursday, September 29, 2011
details the anti-abortion movement’s current fight to restrict reproductive rights across the country.
* You might think it’s a stretch to say that we can predict whether a person leans left or right by looking at their brain scan, but Andrea Kuszewski isn’t so sure it is.
* You’ve probably heard of the new movie Moneyball, which stars Brad Pitt as Oakland Athletics General Manager Billy Beane. Here’s the story behind the movie. It essentially represents skepticism as applied to baseball.
* A couple weeks ago, I wrote about the United States’ use of drones to carry out strikes on suspected militants in areas of the world where the U.S. is not formally engaged in war, such as Pakistan. Two days after my essay, the New York Times reported that President Obama’s legal team is in the midst of a hotly contested debate on whether to expand the drone program to attack militants in Yemen and Somalia.
* The Guardian discusses Steven Pinker’s new book, in which the evolutionary psychologist argues that violence is on the decline.
* Twenty percent of Americans think God is guiding the economy, according to a new poll. I’m honestly surprised that number is so low.
* NYPD Commissioner Ray Kelly has ordered his police force, the nation’s largest, to stop arresting people caught with small amounts of marijuana.
* Two (admittedly morbid) Wikipedia pages that kept me busy while home sick this weekend: a list of unusual deaths, and a list of unsolved deaths.
Monday, September 26, 2011
Indeed, the very idea of a “university” got started in Europe in the 11th century (the first one on record was in Bologna, Italy, quickly followed by Paris, Montpellier, Oxford and Cambridge), and the term initially referred to a guild of itinerant teachers, not a place. By the end of the Middle Ages, however, it started to occur to various municipalities that it was good for business to attract the best teachers on the market, and that a relatively cheap way of doing so was to offer them shelter — both in physical form as a place in which to teach and study and in the more crucial one of (some) protection from Church authorities and their persecution mania (believe it or not, Thomas Aquinas’ writings were considered too hot for public consumption for many years).
Particularly because the phenomenon is so very recent, the question of why we should finance sciences and humanities with university posts and federal grants is a good one, and should not be brushed aside by academics. Too often, though, brushing it aside is exactly what happens, as when my colleagues in the sciences tell people that “basic research leads to important applications in unforeseeable ways,” or that whatever they happen to be doing is “intrinsically interesting.”
Let’s start with the first response. Yes, it is easy to come up with the historically more or less accurate anecdote that links basic research to some application relevant to the human condition, though of course only positive examples tend to be trotted out, while the interested parties willfully ignore the negative ones (basic research in atomic physics led directly to Hiroshima and Nagasaki, for instance). But the fact is that I have never actually seen a serious historical or sociological study of the serendipitous paths that lead from basic stuff to interesting applications (this article, focusing on math, does report on some more systematic attempts in that direction, but it still feels very much like cherry picking).
Yet, it seems that an evidence-based community such as the scientific one (the problem of applications doesn’t even arise in the humanities, obviously) would be interested in, and capable of, generating tons of relevant data. What gives? Could it be that the data is out there and it doesn’t actually back up the official party line? Possibly. More likely, the overwhelming majority of scientists simply don’t give a damn about the applications of their research (again, the issue isn’t really one that humanists are even confronted with; and besides, have you compared the budgets of a couple of typical philosophy and physics departments recently?). I certainly didn’t care about it when I was a practicing evolutionary biologist. I did what I did because I loved it, had fun doing it, and was lucky enough to be paid to do it (in part, the other parts being about teaching and service). Oh, yes, I too dutifully wrote my “impact statement” for my National Science Foundation grants, in which I allegedly explained why my research was so relevant to the public at large. But the truth is that everybody’s statement of that sort is pretty much the same: disingenuous, very short on details, and usually simply copied and pasted from one grant to another.
Which brings me to response number two: it’s intrinsically interesting. I never understood what one could possibly mean by that phrase other than “it is interesting to me,” which is rather circular as far as explanations go. Perhaps we could get scientists to agree that, say, research on the origin of life is “intrinsically” more interesting than the sequencing of yet another genome, or the study of yet another bizarre mating system in yet another obscure species of insects. But then one would expect much of the research (and funding!) to be focused on the origin of life question rather than on those other endeavours. And one would be stunned to discover that precisely the opposite happens. In fact, as John R. Platt, a biophysicist at the University of Chicago, famously wrote in an extremely cogent article on “strong inference” published in Science in 1964: “We speak piously of ... making small studies that will add another brick to the temple of science. Most such bricks just lie around the brickyard.”
There is a third way to show that what you do is worth the university paying for, one that is increasingly welcomed by bean counting administrators of all stripes — from NSF to your own Dean or Provost: impact factors. These days, in order to make a case for your tenure, promotion or continued funding, you need to show that your papers are being cited by others. Again, the game largely concerns the sciences, since the numbers of scientific journals, papers, and readers vastly exceed those of the humanities. (I can easily keep up with pretty much everything that gets published in philosophy of biology these days, but the same feat was simply impossible for any human being when my field was evolutionary biology — and the latter isn’t that large a field compared to other areas of biology or science more broadly!)
The problem, of course — as pointed out by Tim Harford in the article mentioned above about mathematics — is that this solves precisely nothing, for a number of reasons. First, impact factors, despite being expressed as numbers, still reflect the qualitative and subjective judgment of people. Yes, these are fellow experts in the relevant scholarly community, which is certainly pertinent. But scientific communities tend to be small and insular, as well as prone to the usual human foibles (such as jumping on the latest bandwagon, citing papers by your friends and avoiding those of your foes like the plague, indulging in a comically absurd number of self-citations, etc.). Second, impact factors only measure the very short term popularity of particular papers, not the long term actual impact of the corresponding pieces of research. Perhaps that’s the best that can be done, but it really doesn’t seem even close to what we’d like. Third, no impact factor actually measures anything whatsoever to do with “impact” in the broadest, societal, sense of the word. Which brings us back to the original question: why should society give money to such enterprises, and at such rates?
The answer is prosaically obvious: because society gets a pretty decent bargain out of allowing bright minds to flourish in a relatively undisturbed environment. Academic careers are hard: you need to get through college, five to seven years of PhD, one, two, more often than not three postdocs, and seven more years of tenure track, all to land a stable job (undoubtedly a rare commodity, especially in the US!), a decent but certainly not handsome salary, and increasingly less appealing (but still good) benefits. Oh, and a certain amount of flexibility as to when and how much to work. (None of the above, of course, is guaranteed: the majority of PhD students do not find research positions in universities, period.) Trust me: nobody I know in the academy goes through the hell of the PhD, postdoc and tenure process just so that she can (maybe) land a permanent job with flex time. We all do it because we love it, because — like artists, writers, and musicians — we simply cannot conceive of doing anything else worthwhile in our lives. (Incidentally, the term “scientist” was coined by William Whewell, a philosopher, in the 19th century; it was in direct analogy to “artist” — an analogy that is more meaningful than most modern artists and scientists seem to realize.)
Passion is, after all, the same answer one gets from non-scientific academics (who usually can’t fall back on the “what I do matters to society in practical ways” sort of defense). It’s also why civilized nations support (yes, even publicly!) the fine arts: scholarship and artistic creativity simply make our cities and countries much better places to live.
Of course, something tangible is (indeed, ought to be) required of academics in return. And this something is to be found in the other two areas (outside of scholarship) on which academics are judged by their peers and by university administrators (though it would be so much better if the latter simply confined themselves to, well, administration): teaching and service. And by service I don’t mean the largely (though not entirely) useless and mind-numbing type of “service” one does for one’s own institution (committee memberships, committee meetings, committees on committees, and the like). I mean service to the community, which comes in various forms, from writing books, articles and blogs aimed at the public, to giving talks, interviews, and so forth. Service, in my view, means taking seriously the idea of a public intellectual, an idea that would only increase the quality of social and political discourse in any country in which it is taken seriously.
What about teaching? Well, we (almost) all do it — unless you are so good at scholarship that the university will exempt you from doing it, a situation that I think is quintessentially oxymoronic (shouldn’t our best current scholars be the ones exciting the next generations?). But do we do it well? Murray Sperber, in his Beer and Circus: How Big-Time College Sports Is Crippling Undergraduate Education, talks about the myth that good researcher = good teacher, a myth propagated (again, curiously, without the backing of hard data) by both faculty and administrators. At least on the basis of my anecdotal evidence I am convinced (until data show otherwise) that the two sets of skills are orthogonal: one can be an excellent researcher and a lousy teacher, and vice versa, one can be an excellent teacher while being a lousy scholar (though, obviously, one cannot be a good teacher without understanding the material well).
Sure, you will hardly find faculty members at any university who are both lousy teachers and lousy scholars: why would anyone hire that sort of person? But you will find examples of all the other three logical categories (with different admixtures of types depending on what college we are talking about), and I honestly have no idea what percentage of us falls into each of them. (We all, of course, think that we are above average teachers as well as above average scholars, but that sounds a lot like the sort of wishful thinking that goes on in Lake Wobegon, where all the women are strong, all the men are good looking, and all the children are above average...)
The way I see the bargain being struck between society and scholars these days is this: the scholar gets a decent, stable job, which allows her to pursue interests in her discipline, no matter how (apparently) arcane. Society, in return, gets a public intellectual who does actual service to the community (not the stuff that university administrators like to call “service”), as well as someone who takes her duty to teach the next generation seriously, which means honestly trying to do a good job at it, instead of looking for schemes to avoid it. Sound fair?
Sunday, September 25, 2011
From acupuncture to chiropractic, from yoga to meditation, what do we make of instances where something seems to have the desired effect for the wrong reasons (e.g., acupuncture), or might otherwise be a perfectly acceptable technique which happens to come intricately bundled with mysticism (e.g., yoga)?
Friday, September 23, 2011
Not for Profit: Why Democracy Needs the Humanities is the title of Martha Nussbaum’s recent book. It is a manifesto in defense of critical thinking, the role of the humanities (alongside science) in liberal arts education, and the crucial contribution of the latter to an open democratic society. But that, largely, is not what this post is about.
Rather, I want to focus on a somewhat peripheral discussion that Nussbaum engages in, in chapter 3 of her book (entitled “Educating citizens: the moral [and anti-moral] emotions”). Nussbaum briefly relates three famous experiments demonstrating how easy it is to lead people to engage in bad behavior. The first experiment was conducted by Stanley Milgram (and has been repeated several times since). It’s the one where people were convinced to administer what they thought were increasingly painful electric shocks to “subjects” (in reality, confederates of the experimenters) who were allegedly being used to study the connection between learning and punishment. The results clearly showed that a figure of authority (a “doctor” in a white lab coat, for instance) can easily induce people to engage in what would normally be considered cruel behavior toward strangers. Milgram himself set out to do the experiments because he was interested in the question of what could have possibly led so many Germans to acquiesce and collaborate with the Nazi policies of extermination during World War II.
The second experiment mentioned by Nussbaum was conducted by Solomon Asch to explore the effects of conformity. In this case subjects were shown, for instance, images of lines of different lengths and were asked to make judgments about their relative lengths. Unbeknownst to them, a number of confederates were pretending to participate in the experiment, but in reality gave coordinated wrong answers to the questions. Astonishingly, a number of subjects began to go along with the confederates, even though it was very clear that the answers were wrong.
Finally, Nussbaum refers to Philip Zimbardo’s experiment on prison dynamics, during which subjects told to play the roles of prisoners or prison guards in a mock correctional facility quickly began to behave as victims and oppressors respectively, with the first group passively accepting violence and the second escalating their practices to include torture.
The typical interpretation of experiments such as those above is that people are easy to manipulate and that beneath a veneer of civility we can all be led to inflict pain (Milgram and Zimbardo), be willingly victimized (Zimbardo), or endorse obvious falsehoods (Asch). But Nussbaum turns our perspective around and argues that another way to look at exactly the same data is that it is relatively easy to avoid the above-mentioned negative outcomes by paying attention to the structure of our society (and — which goes with the main topic of her book — to the way we educate our children to be full members of that society).
In particular, Nussbaum argues that three types of structure are pernicious because they are conducive to bad human behavior (though they are most certainly not its only determinants): lack of personal accountability; discouragement of dissent; and de-humanization.
Lack of accountability is what we see in action in the Milgram experiments, where people get to delegate moral responsibility to the authority (and notice that the authority there was a scientist, not a Nazi with a machine gun); discouragement of dissent is what happened during the Asch experiment, where people gave what they probably knew was the wrong answer because everyone else around them was doing the same (indeed, crucially, when the experiment was conducted allowing just one of the confederates to openly dissent, subjects were much less likely to adopt the groupthink attitude); finally, de-humanization is what characterized the Zimbardo protocol.
It should be easy to see at this point why Nussbaum links these structural issues to liberal arts education. At its best, teaching humanities (and science) is precisely about encouraging students’ willingness to question authority (against Milgram-type effects), to speak out even when in a minority position (contra Asch), and to appreciate differences between genders and across cultures as quintessentially human (against Zimbardo).
Instead, we spend increasing amounts of time and money making sure that “no child is left behind” by having kids learn how to pass a standardized test that has little if any relation to the structural issues affecting human behavior in modern society.
Tuesday, September 20, 2011
how I answered them.
* In defense of naturalism. As a naturalist, I find this defense pretty darn unconvincing.
* Keynes: the sunny economist.
* Is Texas about to give us yet another dangerous dumb ass for President?
* New Atheism: Kitcher better than Dawkins?
* When superstition kills via mind-body connection.
* My Amazon review of Martha Nussbaum's Not for Profit: Why Democracy Needs the Humanities.
* When neuroscientists and philosophers of mind clash.
* Stop talking about evil and do something about evildoers.
* Michele Bachmann takes never-ending liberties with reality.
* Epicureanism: the most important underestimated engine of the Renaissance?
* My interview with Gelf Magazine about humanism and skepticism.
* Here is how low CNN has sunk. Not to mention the Tea Party. Let's kill the uninsured.
* Corporations as bad spouses.
Saturday, September 17, 2011
One of the nice surprises of the conference was an evening session (8:30pm on a Sunday) on “Comics and Philosophy,” featuring a talk by Christopher Belanger, of the Institute for the History & Philosophy of Science & Technology at the University of Toronto. The talk specifically focused on “Counterfactual Cognition and Ethical Dilemmas: Lessons from Duncan The Wonder Dog,” and just in case you are wondering who Duncan The Wonder Dog is, I’ll spare you the Google search.
Christopher entertained an audience of more than a hundred young people (some tattooed, some dressed in, shall we say, highly imaginative and unconventional ways) while trying to explain that comic books can function like thought experiments to explore the implications of counterfactual conditionals.
Christopher was exploiting a growing movement sometimes referred to as “... and Philosophy” in which academic philosophers write for the general public using pop culture as a vehicle. I have contributed to a few of these myself, particularly The Daily Show and Philosophy (on the Socratic method), and the forthcoming Sherlock Holmes and Philosophy (on logic and inference) and The Philosophy of the Big Bang Theory (on scientism). (Also check my student Leonard Finkelman’s contribution to Green Lantern and Philosophy. He also has one coming up on the complexities of the relationship between Superman and Lex Luthor.)
The point of events like the one at DragonCon and of the “...and Philosophy” books (there are several other series by other publishers, by the way) is to bring philosophy to the general public using a palatable and yet informative platform. And that’s where the trouble starts. It seems like most of my colleagues cannot be bothered to hide their contempt for such lowly degradations of their cherished discipline. Never mind that if philosophers (and other academics) insist on not talking to the public because they are too busy analyzing (for the thousandth time) every single phrase of every one of Kant’s minor works, very soon there won’t be an academic discipline of philosophy at all. That’s because academic departments do not exist to do scholarship, but to serve students. The scholarship is a perk one gets in exchange for the grueling career path that goes through an endless PhD, one or more postdocs, and seven years of tenure track. But make no mistake about it: it’s a perk, not a right, and most certainly not the raison d'être of academia.
This is true also for the sciences, though to a lesser extent because scientists typically bring in the other currency that administrators care about: hard cash from grants. Even so, when I was running a lab it was the same story: writing for a blog, organizing “Darwin Day,” writing books for the public and so on were activities looked upon with a mixture of amusement and disgust. Clearly, if someone spends time doing that sort of thing s/he cannot be that good a researcher, otherwise s/he would care much more about another grant proposal or published paper (never mind that about a third of papers published in primary science journals are never cited once, and that most of the rest are read only by a handful of the author’s close colleagues and friends).
Michael Shermer told the story of how Carl Sagan didn’t make it into the National Academy of Sciences because of the perception that he wasn’t a sufficiently productive scientist — even though the record shows that he was at least as productive as plenty of others who were in fact admitted into that august body. Things have surely improved a bit since then, as shown for instance by the fact that Stephen Jay Gould was later welcomed into the NAS, despite a Sagan-like perception of him shared by many of his colleagues. It is also now the case that the NAS, the American Institute of Biological Sciences, the Society for the Study of Evolution and a number of other organizations have moved beyond just paying lip service to the idea of talking to the public and have actually started to take the concept seriously. The NAS publishes position papers and organizes workshops on issues such as climate change and evolution, the AIBS hosts regular workshops for teachers, and the SSE has instituted a permanent education committee that bestows an annual prize on scientists who make a contribution to the public understanding of evolution — it’s called the Stephen Jay Gould prize, and has so far been awarded to Genie Scott of the National Center for Science Education, Sean Carroll, and most recently Ken Miller.
Philosophers have been a bit slower to pick up on the idea that they need to talk to the public, but there are at least three good reasons to do it: a) it is the public that pays for most academic positions in university departments; b) the continued existence of philosophy as a professional academic discipline depends on people giving a damn about it; c) it is one of the goals of a field whose etymology traces back to the Greek term for “love of wisdom” to expand the circle of people capable of thinking philosophically.
Of course, the problem here may in large part be yet another consequence of American anti-intellectualism (other examples include the election of George W. and the popularity of Jersey Shore). I first noticed this when I realized that all three magazines of philosophy for the general public of which I am aware (Philosophy Now, The Philosopher’s Magazine, and Think) are published in England, despite the fact that the overwhelming majority of practicing philosophers is to be found in the US. And let’s not even get started on the fact that philosophers are regular guests (and sometimes hosts) of radio and TV programs throughout Europe, particularly in France and the UK (take that, Oprah!).
I keep being told that philosophy is a stuffy old field that cannot possibly interest the public, and yet my own regular philosophy meetup in New York is almost a thousand members strong — and we are neither the only nor the largest such group in the city! Events that I help organize with the Center for Inquiry and other local groups belonging to the Reasonable New York coalition, for instance on the nature of consciousness, on ethics for secular humanists, and an upcoming one on free will, regularly draw hundreds of paying participants, filling up whatever venue we set aside for them. And the Rationally Speaking podcast — which often deals with philosophical issues or at any rate adds a philosophical flavor to whatever Julia and I talk about — gets downloaded between 10 and 30 thousand times per episode. Other philosophy podcasts, like Philosophy Talk or Philosophy Bites, do even better.
So I guess I shouldn’t have been surprised to see more than a hundred young people huddled in a hotel conference room to hear about the connection between comics, counterfactuals and possible world scenarios. But it surely was a hell of a validating and entertaining way to spend an evening.
Thursday, September 15, 2011
writes that advances in science demand an earlier introduction to ethics.
* Do statistics take the wonder out of sports? That’s the question Joe Posnanski, perhaps the best living baseball writer, considers in one of his recent blog posts.
* Every four years, the United States Conference of Catholic Bishops publishes a report on how Catholics should think about important political issues in light of church teachings. Yet most Catholics apparently ignore this seemingly fundamental document.
* Victims of sexual abuse by Catholic priests have accused Pope Benedict XVI, the Vatican secretary of state, and two other high-ranking Holy See officials of crimes against humanity, in a formal complaint to the international criminal court (ICC).
* A couple of weeks ago in New York City, Massimo and I participated in a panel discussion on secular ethics. Here is the full video.
* The Tennessean details how Jay Sekulow — best known for his legal work at Christian broadcaster Pat Robertson’s American Center for Law and Justice — and his family have made millions of dollars from their so-called “legal charities.”
* The Mississippi Supreme Court has ruled to allow a ballot initiative that would amend the state constitution so that, “The term ‘person’ or ‘persons’ shall include every human being from the moment of fertilization.” Voters will decide the issue in the Nov. 8 election.
Tuesday, September 13, 2011
* Okay, I admit it. I don’t actually know which decimal place. But it’s definitely not the first, second, or third.
Monday, September 12, 2011
Is there a misogyny problem in the skeptic and atheist communities? Why aren't there more women involved in these communities? Also, Julia tells us about her own experience as a young woman skeptic.
Saturday, September 10, 2011
Well, it’s time to bring this overly long series on ethics to an end, for now. The previous six posts have gathered a total of 390 comments at last count, and undoubtedly this post will add significantly to the total — a clear demonstration that moral philosophy is as popular and as controversial as ever.
I sincerely hope that readers didn’t — despite my clear warnings — expect to find anything like an exhaustive treatment of the various aspects of ethics, nor to see my own original moral system emerge at the end of the series. This was simply an exercise in clarifying my thinking about something I care a lot about, and — as the motto of this blog says — in nudging truth to spring from argument amongst friends.
Nonetheless, I promised, and fully intend to deliver, some summary thoughts that have been shaped while doing the background readings for the series and then writing the individual entries. I tend to do much of my thinking while having discussions or writing (which for me is a time-delayed type of discussion), so this was the perfect medium to probe my own intuitions about moral philosophy. Here we go, then.
To begin with, I return to the opening essay, where I suggested that ethics is neither about absolute moral truths nor about relativism. The only sense I can make of the idea of absolute moral truths is in Platonic terms, similar to the way some mathematicians and philosophers of mathematics think of numbers, theorems and the like as having an ontological status independent of the human mind. Pythagoras’ theorem is, in a counter-intuitive and non-trivial sense, “out there.” But this can only mean that wherever conscious beings capable of abstract thought think along certain lines (i.e., about geometrical figures in plane geometry) they will have to agree that the theorem is true; certainly not in the sense that there is a non-physical realm where numbers and theorems happily while the time away.
Even so, the case for Platonism has certainly not been clinched for mathematics, and it looks even less promising for ethics. In other words, I agree with J.L. Mackie’s famous “argument from queerness” that “If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe.” Not impossible, but extraordinary claims require extraordinary evidence, you know.
As for relativism, I simply find it preposterous, despite the fact that it is actually becoming increasingly popular among both the general public and professional philosophers. I think something is missing when someone says that moral rules are of a kind with rules of etiquette (if you actually act on such a belief society will treat you as a psychopath, and rightly so), or that committing or not committing genocide cannot be distinguished from preferring vanilla or chocolate (chocolate is the objectively obvious answer, by the way). Yes, there is a significant amount of spatial and temporal cultural variation in what people value and what they consider moral or not. But the extent of such variation has been greatly exaggerated (see also here), and the relativist reading flies in the face of both a large number of human universals and of studies showing that even other social primates seem to share our sense of right and wrong about certain actions (intuitively, since presumably they don’t do philosophy).
In order to steer away from both the Scylla of absolutism and the Charybdis of relativism, therefore, I am convinced that the best way to think of ethics is as a set of tools to think rationally and instrumentally about how to achieve a society that is as just as possible, where people can flourish (in their varied ways) as much as possible. (Yes, I know, people keep asking what counts as well-being: you’ll find a thorough discussion here.) Of course someone will immediately object that no such moral system can be “compelling,” and I honestly have no idea what they mean by that. Obviously, morality isn’t as “compelling” as, say, gravity. But neither is mathematics. You are perfectly free to disagree with the Pythagorean theorem, though that simply means you don’t understand geometry. Similarly, you can shrug off the entire idea of ethical reasoning and simply keep watching out for number one. Be my guest, but I’ll think of you as a psychopath or a pathological egoist, and I won’t invite you for dinner.
Okay, now what about the six central themes of this series? We have looked at the three fundamental theories of ethics: consequentialism, deontology, and virtue ethics. We have also looked at the concept of justice from the point of view of various social contract theories as well as of several — remarkably diverse! — ideas of what counts as equality.
To begin with, an important distinction is to be made, as we have seen, between consequentialism and deontology on the one hand and virtue ethics on the other. The first two are answers to the question: what is the right thing to do? The latter is an answer to the question: what sort of life should I live? The two questions are different enough that it really isn’t entirely clear to me why virtue ethics is considered an alternative to the other two.
Nevertheless, among these three, several readers have correctly picked up on my (qualified) sympathies for virtue ethics, properly updated and without the obvious stench of elitism that accompanied Aristotle’s version (oh, and no slavery; oh, and equal consideration for women). There are several reasons for this. First off, I simply can’t get past the fact that there are serious objections to consequentialism, and particularly to its chief variety, utilitarianism. Yes, I’ve read utilitarians’ responses to classic problems like the one posed by the doctor who is considering cutting up a healthy person in order to save five dying people. But I just don’t find them convincing enough. Utilitarians are forced to twist themselves into logical pretzels to avoid the obvious implications of an ethical system that cares only and exclusively about consequences. Consequences are important, but they are not the only or final arbiter of a moral life.
Deontology does directly incorporate ideas about rights, which are notoriously difficult for utilitarians to digest, but it does so at a high price. Without having to go to the extremes of Kant (who, as I mentioned, once famously said that it is “better the whole people should perish” than that injustice be done — one wonders, injustice to whom?) it just seems that a set of inflexible rules, and even more so a single all-encompassing rule like the categorical imperative, is far too blunt a tool to deal effectively with the variety of human experience. No, I think that if we followed either utilitarianism or deontology we would far too often arrive at monstrous ethical decisions with which we simply wouldn’t be able to live.
Which of course leaves virtue ethics as the last man standing. This is not an unproblematic option, because of the variety and complexity of human ways to flourish, and because it is about character, not about which particular actions are right or wrong. But it does capture the idea that there is something common to all human beings (and possibly other relevantly similar social creatures), that life is better when people are fair to each other, refrain from violence unless absolutely necessary, act with integrity, respect other people’s civil liberties, have access to education and health care, and can generally pursue their interests with the utmost degree of freedom compatible with everyone else doing the same.
But virtue ethics is not a theory of society, it is a theory of individual behavior within society. Which brings us to social contracts and the various forms of egalitarianism. I tend to be sympathetic to a higher degree of egalitarianism than is realized in the current state of affairs in the United States, but unlike Rawls I am not convinced that income and wealth ought to be equal except under very strict circumstances. I do, however, find the current level of income/wealth inequality in the United States appalling and indefensible, except by a relatively small but exceedingly vocal horde of libertarians, Randians and Teapartiers.
I do find Rawls’ concept of a veil of ignorance to be by far the best way to think about a social contract, especially in multicultural societies. I especially like Rawls’ idea (embodied in his two principles of justice) that civil liberties ought to take precedence over economic advantages (precisely the opposite of what currently happens in the US). But it is certainly the case that Rawls’ ideas apply only if a society is guided by certain types of liberal values that have predominated in Western societies and in some non-Western ones (e.g., Japan). If you are taken by the lure of theocracies or totalitarian regimes you will be largely unmoved by his thought experiment. I wager that you and your society will be so much the worse for it.
Getting back to egalitarianism, however, even if we stay away from income and wealth it is pretty clear that much of the world (US included) is far from being anywhere near a just society. We still do not have complete formal equality of civil rights (think gay marriage), and arguably we are far from actual equality in that department (think about the conditions of a number of minorities, as well as persistent degrees of discrimination against women). We may say that all citizens have equal rights before the law, but the practice is such that we keep imprisoning a good number of innocent poor and uneducated people, while robber barons keep crashing the world economy and getting away with golden parachutes. We think that we live in a democracy where every citizen has one vote, but in fact the US Supreme Court has legally allowed corporations to freely buy elections, and we have a Congress occupied by a large number of millionaires (a majority of currently serving US Senators are in that category) who make laws favoring their ilk. Not to mention the arcane two-Senators-per-State system, which effectively means that the voters of Wyoming (the least populous state) are almost 69 times better represented than the voters of California (the most populous state).
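(A quick back-of-the-envelope check on that last figure, using the 2010 census populations, which are my own assumption here: each state gets two senators regardless of size, so the relative weight of a Wyoming vote compared to a California one is simply the ratio of the two populations, 37,253,956 / 563,626 ≈ 66. Slightly different population estimates would yield the “almost 69” quoted above.)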
So, I guess in the end I find myself to be a virtue ethicist when it comes to personal morality, with strong Rawlsian leanings in the social sphere, who would allow a limited amount of income and wealth disparity but is uncompromising about civil liberties, equality of representation and equality within the justice system. This is far from being a logically tight, perfectly coherent approach to ethics, of course. But, as Walt Whitman famously put it: “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.”
Thursday, September 08, 2011
The United States has increasingly relied on drones, or unmanned aerial vehicles, to carry out warfare in recent years. Drone attacks have been particularly popular under President Barack Obama’s administration. According to the New America Foundation, there were 43 drone attacks between January and October 2009 (the months right after Obama took office), compared to just 34 in all of 2008 (when George W. Bush was still in office). The Obama administration has shown no indication that it will halt their use.
The government’s increased reliance on drones has sparked public debate: Are drone strikes legal? Are they ethical? In my reading of various news and opinion articles on the issue, those who object to drones have most often made three arguments:
1. Drones violate domestic law. Many, or even most, drone strikes take place in Pakistan and other countries where the US has not declared war against a foreign state, but is instead working with local officials to root out terrorists under some “handshake agreement.” As such, many people feel drone strikes are an unjustified use of presidential and military power. US officials defend drone strikes on the grounds that they do not target a formal state, but a small group of people who have carried out attacks on American soil and plan to do so again. Thus, formal warfare laws do not apply (in other words: hey, it’s just the never-ending War on Terror).
2. Drones violate international law, which restricts when and how states can engage in armed conflict. Here again, defenders note that there is no conflict between two formal states. Moreover, most drone strikes are carried out by the CIA, which, as a civilian agency and thus a noncombatant under international law, is not governed by the same laws of war that cover US military agencies.
3. Drones kill civilians. The Wall Street Journal, citing intelligence officials, reported that since Obama took office the CIA has used drones to kill 400 to 500 suspected militants, while only about 20 civilians have been killed. However, in 2009, Pakistani officials said the strikes had killed roughly 700 civilians and only 14 terrorist leaders. Meanwhile, a New America Foundation analysis of strikes in northwest Pakistan between 2004 and 2010 reports that they killed between 830 and 1,210 individuals, of whom 550 to 850 were militants (about two-thirds of the total; see the quick check below).
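(As a quick sanity check on that parenthetical, pairing the low estimates together and the high estimates together: 550/830 ≈ 66% and 850/1,210 ≈ 70%, so “about two-thirds” holds at both ends of the range.)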
These arguments are nuanced and complex. You can read more about US arguments and other counter-arguments in this excellent article in the Wall Street Journal. But let us put these — and any discussion of just war theory — aside for a moment, for I think there is a more basic ethical point here.
Notice that the objections above do not inherently reject the use of unmanned drones. Instead, they focus on international law, domestic law, and the accuracy of drones. This raises an important question: are drone strikes inherently any more or less ethical than, say, manned aircraft strikes? Is there, or should there be, an ethical distinction between launching missiles from half a world away and sending fighter jets to carry out such an attack?
I have pondered these questions for several days now and have come to the tentative conclusion that there is no ethical distinction. In my view, the method in which war is carried out — by drone, jet, or a missile launched from a nuclear sub — is less important than the pretenses under which war is being carried out in the first place. If an act of war violates domestic or international law, it does so regardless of whether the attack was carried out by a manned or unmanned aircraft. If an act of war kills civilians, one must parse whether civilians were intentionally or knowingly put at risk, or whether it was an issue of collateral damage. But I have seen no indication that drones kill more civilians on average than manned strikes (your research is welcome). So why is there such an objection to, specifically, drone strikes?
In reading objections to drone use, I can’t help but feel an unspoken and lurking moral sentiment that drone use is wrong because it removes the human element of war. That is, people reject the use of drones because drones remove a pilot (or submarine crew) from harm’s way.
Consider these three passages. The first is from a story in the news outlet Christian Century:
With drones, operators sitting in front of computer monitors in Virginia and Nevada can target enemies halfway around the world. When their shift is done, drone operators retire to their suburban homes.
The second is from an essay in the Catholic magazine America:
Killing with drones is made easy for operators, who often work at great distances from the scene of attack. An Air Force ‘pilot’ may be in Nevada, while C.I.A. operatives are in Langley, Va., and others, including private contractors, are in Florida, Pakistan or Afghanistan. An operator may launch an attack from a trailer in Nevada viewing a computer monitor and using a joystick. The operators never see the persons they have killed. The pilot of a fighter jet flies over the place where the attack will occur and risks being shot down; a drone pilot never experiences the place where the attack occurs and knows he or she is in no personal danger. The operator can go home at the end of the shift.
The third is from an article on PBS.org:
Missile strikes launched from the comfort of Langley, Virginia, a half a world away from Waziristan, ... to critics, remain morally problematic.
On one hand, this seems backward to me. Drones actually remove a pilot or crew from harm’s way, and so they would seem a better manner of carrying out war. Imagine being able to carry out attacks on highly dangerous terrorists and counter-insurgents without having to put your own people at risk of death. This would seem desirable.
On the other hand, perhaps there is something to the idea that warfare made easier means warfare more often; that the more we remove the human element from one side of warfare, the more that side becomes willing to commit to warfare. This does not seem necessarily true, as warfare has not increased — and might be decreasing — with increasing technology. But I am also not entirely sure it is a compelling argument against drone use. Rather, it seems an argument against any advance in military technology — from guns that allow troops to shoot their weapons from further away, to planes that allow forces to drop bombs from higher elevations, to even bulletproof vests that provide more safety to soldiers engaged in war.
But, as always, I offer my thoughts to the peer review of Rationally Speaking. What do you think?
Tuesday, September 06, 2011
Saturday, September 03, 2011
Friday, September 02, 2011
comprehensive new public opinion survey on the attitudes of Muslim-Americans. The findings might surprise many Americans.
* Last week I agreed with Pope Benedict XVI, who argued that ethics should play a major role in economic policy making. The Pope’s sentiment has now been echoed by Cardinal Angelo Bagnasco.
* A new study suggests El Niño may be to blame for nearly a quarter of recent global conflicts.
* “When it comes to the religious beliefs of our would-be presidents, we are a little squeamish about probing too aggressively,” writes Bill Keller in the New York Times.
* Scientific findings suggest that exercise could be a helpful prescription for depression, though there are caveats.
* Charles Blow highlights what he considers a growing crisis for American children, and criticizes politically right approaches that he says “ignore that reality at best and exacerbate it at worst.”
* Peter Nardi notes that psychics have a perfect record: of being wrong, that is.
* And lastly, two follow-ups on my recent essay on Florida’s law that requires drug tests for welfare applicants. First, 98 percent of welfare applicants have passed the drug test. Second, Adam Cohen has a compelling article on why this is bad policy.