About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog may be reprinted under the standard Creative Commons license.
Friday, March 30, 2012
* Should doctors who oppose mandatory ultrasound laws engage in “civil disobedience”? This anonymous doctor says “yes.” Amanda Marcotte says “no.”
* You’ve probably heard conservative and religious leaders lament the recent “moral decline” of the United States. Yet are we truly in moral decline, or is the country just shifting away from traditional, religious morality? That’s the question taken up in a new article in the Economist, which finds little evidence to support the former position.
* The National Organization for Marriage (NOM), which advocates for restricting the legal definition of marriage to one man and one woman, has announced an international protest of Starbucks over the company’s support of marriage equality. Apparently it’s not going well.
* The very cool story of a giant, weird-looking, six-legged bug that was thought to be extinct, but had in fact been surviving in a hard-to-find home.
* I told you a couple of picks back that Harvard University political philosopher Michael Sandel has a new book coming out April 24, titled What Money Can’t Buy: The Moral Limits of Markets. Now I’ve also got a preview article by Sandel, and a clip of the audiobook.
* “Why we need sex ed now.” Check out the graphic sent to me by the people at Public Health Degree, who are apparently fans of my blog.
* Here’s a fun web site to spend a couple of minutes on: “What the heck has Obama done so far?”
Thursday, March 29, 2012
* FT’s Julian Baggini tries to make sense of the free will debate by reviewing some recently published books on the subject.
* Bank of America should have gone out of business back in 2008. Too Big to Fail is one thing, Too Corrupt to Fail is another.
* Did Jonah Lehrer stretch the facts in his newest book Imagine? Tim Requarth and Meehan Crist offer up a very solid critique of the book.
* Is there a reason why men tend to be overrepresented in some scientific fields? In his book, Is There Anything Good About Men?, psychologist Roy Baumeister provocatively explores the sensitive topic of gender.
* Does one need to be a great thinker in order to be a great speaker? Paul Graham pens an essay about the difference between writing and speaking.
* What happens when the robots get really good? The legendary Ned Ludd may be vindicated after all. The Luddite Fallacy looks fallacious until it isn’t.
Wednesday, March 28, 2012
In this episode, Massimo and Julia open up the black box of peer review, explaining how the process originated, how it works, and what's wrong with it.
They also try brainstorming ways it could be fixed, and ask: how is the Internet changing the way we do research?
Monday, March 26, 2012
I currently have two other posts in various stages of completion, but I’m mad about this now. Discussions about headphones and human nature or the transformative beauty of classification can wait (1). A new gauntlet has been thrown down. More accurately: a musty, worn, thoroughly decrepit gauntlet has been thrown. “Diseased” might also be an accurate descriptor, if you go for Dawkins’ whole “viruses of the mind” thing. Nearly nine decades after Tennessee hosted the “monkey trial” of John Scopes (2), the state’s senate has passed “teach the controversy” legislation that undermines science education by overemphasizing disputes between scientists (3). As is usually the case with such legislation, “the theory of evolution” has been singled out. A group of friends recently asked my thoughts, in my capacity as a philosopher of biology, on why this issue keeps cropping up.
I could swim (and, in a sense, have swum) in all the ink that’s been spilled on this subject. The probability that I can offer anything new or uniquely insightful is marginal at best. Still: this is war, or at least as close to it as academics get. One makes what contributions one can to the effort.
Alfred North Whitehead famously suggested that the safest generalization about Western philosophy is that it consists of a series of footnotes to Plato (4). It’s true that almost every live debate in philosophy is over some issue that can be traced back to Socrates’ disciple (5). Consequently, I’m probably not going out on much of a limb when I say that we should blame Plato for our current troubles.
To be sure, biologists and philosophers of biology have been doing that for a while. Ernst Mayr — perhaps the greatest popularizer of Darwinian theory in the twentieth century — claimed that “typological essentialism,” attributable to Plato by way of Aristotle (6), has been our greatest obstacle to general acceptance of evolutionary theory. David Hull, who wrote a vitally important paper entitled “The Effect of Essentialism on Taxonomy: Two Thousand Years of Stasis,” apparently agreed.
But this is a red herring in the current debate. As far as I can tell, no one involved in the Tennessee legislation is concerned with the metaphysical status of abstract forms, or the utility of type specimens in biological classification. No: this is a debate about belief and knowledge, and if there is any debate in philosophy that really isn’t anything more than a series of footnotes to Plato, it’s this one.
Ask philosophers what is meant by the term “knowledge,” and by far the most common definition you’ll get is “justified true belief.” Some will hem, others will haw, and a few will point at something behind you and run away when you turn to look, but “JTB theory” has been philosophers’ starting point in epistemology (i.e., the study of knowledge) for some 2,400 years (7). Its origin: the Platonic dialogue “Meno,” wherein Socrates seeks the distinction between knowledge and other sorts of belief. If one comes to hold a belief in some true proposition in a justifiable way (e.g., through logical proof), then that belief qualifies as knowledge; everything else — all those potentially unjustified or untrue beliefs — is “mere opinion.”
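The JTB definition has a simple logical shape, which a toy sketch can make vivid (this is my illustration, not the author's; it is obviously not a claim that knowledge is mechanically decidable):

```python
# Illustrative only: the classical "justified true belief" definition
# rendered as a predicate over three conditions.
def is_knowledge(believed: bool, true: bool, justified: bool) -> bool:
    """Knowledge, on the JTB account, requires all three conditions."""
    return believed and true and justified

# A lucky guess: believed and (as it happens) true, but unjustified —
# "mere opinion" on the Platonic picture.
lucky_guess = is_knowledge(believed=True, true=True, justified=False)

# A proposition the agent accepts on the strength of a proof.
proved_theorem = is_knowledge(believed=True, true=True, justified=True)
```

Drop any one of the three conjuncts and, on this account, what remains is no longer knowledge, which is exactly the dichotomy the next paragraphs complain about.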
This is our inheritance in epistemology: a choice between knowledge and the sort of thing your thirteen-year-old cousin posts in a Facebook status update. If one isn’t talking out of her brain, so to speak, then she must be talking out of her butt. Our culture generally hasn’t taken to heart the precise details of Plato’s version of JTB theory: most importantly, Plato argued that sensory input cannot qualify as knowledge, whereas most of us are perfectly happy to count sense experience as knowledge (8). Nevertheless, the brain/butt dichotomy is deeply ingrained in the way we regard knowledge claims. I hear it from my students all the time: anything that isn’t demonstrably true is liable to be dismissed as “just an opinion.” You can test this at home by answering the following question: if I hold a belief that hasn’t been proved true, then what is that belief? Quod erat demonstrandum.
Back to the debate over evolution: partisans such as those supporting the Tennessee bill have gotten a lot of mileage out of the old “evolution is just a theory” canard, and I don’t think it’s coincidental that the claim is only a small mental hop away from “it’s just an opinion.” The layperson is left with the intuitive sense that belief in evolution is either unjustified or untrue, and this intuition is strengthened by overemphasis of the “controversy” over evolutionary theory. The (distinctly American) reluctance to accept evolutionary theory need not have anything to do with religion, as is often assumed; rather, it’s that theoretical knowledge falls outside the realm of what Plato would have called knowledge, and so qualifies as what the layperson considers opinion.
Of course, there’s a mistake in this way of thinking. Plato’s dilemma is a false one. I’ve actually gone so far as to ban use of the word “opinion” in my classes in order to guard against this error (which always makes for some admittedly sadistic fun when I have students read from a translation of “Meno”). There is a third way, and we need to recognize that some beliefs, while falling short of knowledge, are more justifiable than mere opinion.
The third way, between knowledge and opinion, is science.
I won’t dwell too much on what science is or how science gets practiced. All we need to bear in mind is that scientific beliefs are justifiably held, but only provisionally believed to be true (if at all), and so mark a halfway point between knowledge and opinion. The scientist gathers data, then develops theories that explain those data. It’s sort of like a particularly twisted connect-the-dots picture (9). Data are like dots: they represent what’s given at the start. Theories are like the lines we draw between the dots: they fill in what we hope is the best picture that includes everything that’s given. The point is that the final picture is, at best, inferred from the only stuff we have to work with, i.e., the dots. So too are all scientific theories inferences from data.
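The connect-the-dots analogy can be made concrete with a toy example of my own (the "theories" here are arbitrary functions, chosen only to illustrate underdetermination): two incompatible pictures can pass through exactly the same dots, and only diverge where we have no data.

```python
# The data: what's given at the start.
dots = [(0, 0), (1, 1), (2, 4)]

def theory_a(x):
    # One way to connect the dots.
    return x * x

def theory_b(x):
    # Another way: the added term vanishes at x = 0, 1, and 2,
    # so this theory agrees with theory_a on every observed dot.
    return x * x + x * (x - 1) * (x - 2)

# Both theories fit all the data perfectly...
assert all(theory_a(x) == y and theory_b(x) == y for x, y in dots)

# ...yet they disagree as soon as we look beyond the data.
print(theory_a(3), theory_b(3))  # 9 vs. 15
```

The dots alone cannot tell us which picture is the true one; that is precisely the sense in which theories are inferences from data rather than deductions from it.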
This is what’s particularly insidious about legislation like that recently passed by the Tennessee senate: no good scientist or science teacher should ever deny that scientific theories ought to be questioned. No one ever sees, say, gravity; what we see are things falling down (on Earth, at least), and the “picture” we infer when we connect those “dots” is what we call gravity. This wouldn’t qualify as knowledge in the Platonic sense, since there may be other ways to justifiably connect the dots and so it’s possible that the theory may be untrue. If we remain beholden to Plato’s false dichotomy, all scientific theories are just opinions. Luckily, scientists don’t see the world in the binary brain/butt way. Between acceptance of knowledge and rejection of opinion, the scientist tests scientific beliefs, which is what “teach the controversy” legislation ostensibly endorses. If this were all the Tennessee senate intended, the bill would be completely uncontroversial — kind of like passing a law instructing math teachers to let their students use long division.
But that isn’t what they intend. While I’m optimistic enough to believe that the intention of legislation of this sort is not to undermine science generally, it’s clear that the goal of “teach the controversy” is to call into question a small number of politically inconvenient scientific theories (10). Thus a war for Darwin, but not for Newton.
Ultimately, this is what I take to be the weakness in the enemy’s offensive strategy. By accepting the way science works, even those who wish to teach the controversy implicitly endorse the inferential derivation of theory from data; that is, they would accept that the dots (data) are given, and that all controversy is over the inferred pictures (theories). If teaching the controversy is effective at all, it can only be effective at undermining acceptance of evolutionary theory — in particular, evolution by means of natural selection, as opposed to (say) evolution by means of inheritance of acquired characteristics. But that’s not our opposition’s target. They aim at evolution itself, and evolution is not theory, it’s data. Evolution is a sequence of fossils leading from Albertosaurus sarcophagus to Tyrannosaurus rex; it is the necessity of a new flu vaccine each year; it is the increase in average human height over the past two centuries; it is the phalanges in a whale’s flipper. These are given. Natural selection — the controversial part — is the theory. To deny that evolution happens because natural selection might be false would be like denying that things fall down because Newton was wrong.
So go ahead. Try to erase the picture. The dots will remain. As long as that’s the case, obscurantist forces have already lost the war, no matter how many small victories they may claim.
(1) These have been your coming attractions. Please enjoy our feature presentation!
(2) If you should need to bring yourself up to speed on State of Tennessee vs. John Thomas Scopes (1925), you can go here. Or you can read this. Or you can rent this. Or you can wait until this comes to town. My point is that you have options.
(3) Proponents argue that the purpose of the bill is simply to encourage critical reasoning skills in science. I teach critical reasoning. I know critical reasoning. This is not critical reasoning; if it were, there would be no need for a loaded appeal singling out “biological evolution, the chemical origins of life, global warming and human cloning” (who learns about human cloning in grade school, anyway?).
(4) Some of us have taken this quote to heart more than others. Personally, as a philosopher of biology, I prefer to think of myself as a footnote to Aristotle. It’s all moot: neither philosopher used footnotes, let alone any that mention me.
(5) A brief, tangentially related tidbit: according to the biographer Diogenes Laertius, Plato’s birth name was actually Aristocles; “Plato” was a wrestling name, literally translating as “broad.” I find this endlessly amusing, as it calls to mind an analogy with some philosophy student in the year 4500 having to slog through a textbook unit entitled “Hulkamania.”
(6) For those of you who find that last part baffling, given that the whole purpose of Aristotle’s “Metaphysics” seems to be dismantling Plato’s philosophy: don’t worry. Ernst Mayr was, in fact, bafflingly wrong on this point.
(8) Go ahead: raise your hand if you don’t believe that your hand exists. Good luck with that one.
(9) The dots aren’t numbered and there’s absolutely no indication of what the picture should look like from the outset. Knock yourselves out, kids.
(10) A quick Google search doesn’t turn up very much in the way of legislation challenging the Many Worlds interpretation of quantum mechanics. Perhaps the country is being run by modal realists.
Sunday, March 25, 2012
* Chapter 64 of the delightful Harry Potter & the Methods of Rationality, by E-who-must-not-be-named, is a funny collection of sketches for rationalist re-imaginings of literature & TV. My personal favorite, by Eneasz Brosky:
“Revenge?” said the peg-legged man. “On a whale? No, I decided I'd just get on with my life.”
* GiveWell’s Holden Karnofsky weighs in on Kony 2012. I think he puts it beautifully. In case you’re wondering, donations to Against Malaria Foundation are tax deductible in the UK, USA & Canada.
* An extremely useful LessWrong comment thread, on money. The strongest objection to rationalism, in my mind, is something along the lines of: “If you’re so smart, why aren’t you rich?” Excuses for poor financial skills from people who value instrumental rationality start to wear thin after a while (in case it wasn’t obvious, I’m talking about myself).
* Thinking Physics, though rather old at this point, has got to be one of the most delightful books on physics out there. In addition, it is a miracle in pedagogy — that rare sort of book that is both elementary and advanced.
* It looks like in vitro meat might be on its way into our diets, though no doubt it will be something of a tough sell.
Thursday, March 22, 2012
A number of weeks ago cosmologist Sean Carroll posted a link on his Google+ stream to a recent paper published by Addy Pross in the Journal of Systems Chemistry. Since Sean’s comment about the paper was positive, I went and checked it out. Essentially, Pross argues that he has come up with a general theory of evolution that bridges biology and chemistry by reducing the former to the latter. The key conceptual element in the new theory is something called Dynamic Kinetic Stability (DKS), to which I will return in a minute. Sean briefly noted that he is generally sympathetic to attempts at extending the Darwinian framework to non-biological domains, as for instance Lee Smolin has done in physics with his idea of cosmological natural selection.
As a biologist I’m much less intrigued by, and indeed tend to be somewhat guarded against, this sort of thing. Moreover, as a philosopher I simply don’t buy Dan Dennett’s idea that “Darwinism” (which of course is not a scientific theory, but an ideological-philosophical position) is a “universal acid,” as expressed most famously in his eminently readable Darwin’s Dangerous Idea.
Perhaps the trouble started with Theodosius Dobzhansky, one of the fathers of modern evolutionary theory, who famously said that nothing in biology makes sense except in the light of evolution (the phrase is, in fact, approvingly quoted by Pross). Problem is, Dobzhansky was writing for an audience of high school science teachers, and his statement is patently wrong, as even a cursory examination of the history of biology makes clear. For instance, developmental biologists did a lot of highly fruitful research throughout the 19th and 20th centuries even as they ignored Darwin. And molecular biologists made spectacular progress from the 1950s through the onset of the 21st century, again pretty much completely ignoring evolution. This is not to say that evolutionary theory doesn’t help in understanding developmental and molecular systems, but it is a stretch of the record to make claims such as Dobzhansky’s. (It would be like saying, for instance, that nothing makes sense in physics except in the light of quantum mechanics. Plenty of things in physics make perfect sense even if one brackets quantum mechanics and treats it as a background theory.)
Or perhaps the culprit is Richard Dawkins, who famously proposed the idea of memes in his 1976 popular work, The Selfish Gene. Indeed, Dawkins was looking for a way to universalize Darwinian principles, perennially dissatisfied with the emphasis put on contingency by some of his colleagues, most notably the late Stephen Jay Gould. As it turns out, memetics (warmly endorsed as a general theory of cultural evolution by Dennett) failed abysmally, as shown for instance by the premature closure of the Journal of Memetics, the only academic source of all things memetic.
Of course, Darwinian evolution is indeed applicable to some non-biological systems, particularly to so-called genetic algorithms, a type of evolving computer program whose properties have been studied by computational scientists over the past few decades. Indeed, genetic algorithms mimic biological evolution so closely that a number of population geneticists I know have been annoyed by repeated claims of computer scientists to have discovered this or that principle describing such systems, apparently without realizing that many of those discoveries had already been made by theoretical population geneticists decades earlier.
But, back to Pross’s paper. His project is not exactly to extend Darwinian principles from biology to chemistry, thus accounting for the pre-biotic evolution of the chemical precursors of living organisms. Rather, he wishes to proceed the other way around and to subsume biological evolution as a particular instance of a chemically-based general theory. This makes sense within the always popular framework of theory reduction, though of course one would then immediately ask why not go all the way and attempt a quantum mechanical model of biological evolution (answer: because such an attempt would quickly begin to look ridiculous on epistemological grounds, if not on ontological ones).
As I mentioned above, Pross’s key idea is that of DKS. As the author puts it: “There is another kind of stability in nature that is actually achieved through change, rather than through lack of change. This stability kind is a dynamic stability,” and he proceeds to give the example of a river, which maintains its general features (stability) even though the actual water making up the river is always different (dynamic). The problem is, of course, that even if biological systems can be thought of as a type of DKS, this is surely not sufficient for an extended theory of evolution, as clearly some non-evolving systems — like the above-mentioned river — are also DKS.
Pross realizes that his proposal faces serious problems, not the least of which is that while DKS is supposed to be equivalent to fitness, it is exceedingly hard to actually measure DKS. Pross attempts to bypass the issue by asserting that biologists too have trouble measuring fitness, a statement that would surprise many biologists. (There are both conceptual and methodological issues with biological fitness, but nothing like what I gather is the case for DKS in chemistry.)
This is obviously not the place for an in-depth discussion of Pross’s paper (anyone out there interested in a PhD dissertation on various attempts to expand the domain of Darwinian evolution? Drop me a note...). The bottom line is that I am suspicious of theoretical approaches to biological evolution that don’t seem to take on board what the Darwinian theory actually says. As is well known, the best summary of what the latter consists of was given by Richard Lewontin, and it remains an obligatory stop for any serious discussion of “Darwinism.”
Lewontin was able to provide a highly formal and abstract rendition of the Darwinian theory, which boils down to the statement that a given system will evolve in Darwinian fashion if three conditions are met: 1) There is variation within populations of evolving entities; 2) The variants in question differ in their fitness (i.e., their ability to persist and spread); and 3) There is a system of inheritance that allows the next generation to increase the frequency of the successful (higher fitness) variants.
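Lewontin's three conditions can be sketched as a minimal simulation (an illustration of mine, not drawn from Pross or Lewontin; the trait, fitness scheme, and mutation rate are all arbitrary choices):

```python
import random

random.seed(42)

# Condition 1: variation — a population of entities with differing trait values.
pop = [random.random() for _ in range(200)]

def generation(pop):
    # Condition 2: fitness differences — here an entity's chance of
    # reproducing is proportional to its trait value.
    parents = random.choices(pop, weights=pop, k=len(pop))
    # Condition 3: inheritance — each offspring copies a parent's value,
    # with a small mutation (clipped so traits stay non-negative).
    return [max(0.0, p + random.gauss(0, 0.02)) for p in parents]

before = sum(pop) / len(pop)
for _ in range(50):
    pop = generation(pop)
after = sum(pop) / len(pop)

print(before, after)  # the mean trait value rises under selection
```

Remove any one condition — make the population uniform, give every variant equal weight, or break the parent-offspring link — and the mean stops climbing, which is exactly why all three are jointly required for evolution in the Darwinian sense.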
I suspect that Lewontin’s analysis simply doesn’t fit many of the proposed expansions of Darwinian theory. It works for computer algorithms, as I mentioned above, but not for memes (whatever they are), nor — likely — for DKS, and probably not for Smolin’s parallel universes. I guess I’m not bothered by this “failure” because I am happy with scientific theories having proper domains of application and because I’m somewhat suspicious of “theories of everything.” My pluralism about scientific theories is to be taken as epistemologically grounded, not as a deeper ontological statement. That is, I don’t know (nobody does, regardless of what they’ll tell you) whether the total reality of the universe can in principle be described by a single theory unifying all the special sciences. But I think it is pretty plain for everyone but the most dogged reductionist to see that in practice we will have to make do with special theories for special purposes, probably forever. Where this leaves the question underlying Pross’s effort — how to bridge the gap between chemical and biological processes, thus explaining what life is — remains to be seen.
Friday, March 16, 2012
As you have probably already heard, the controversial public figure Andrew Breitbart died on March 1 of a heart attack. He was 43 years old.
The main line of business for Breitbart was news, in particular, web news. Early in his career, he helped Arianna Huffington start The Huffington Post. He then proceeded to found the web sites Breitbart TV, Big Hollywood, Big Government, Big Journalism, and Big Peace. Yet he is perhaps best known for being an advocate and commentator who supported the Tea Party movement, railed against the Occupy movement, harshly criticized liberals, and was a major force behind several public stunts — most notably the ACORN video controversy and the resignation of U.S. Department of Agriculture employee Shirley Sherrod. You can read more about Breitbart’s exploits here.
You might recall that I wrote about the last of those exploits, Sherrod’s resignation, on this blog. To refresh your memory: Breitbart posted on his website an edited clip of a longer speech by Sherrod that, stripped of context found elsewhere in the talk, made her look racist. This led to her forced resignation by then Secretary of Agriculture Tom Vilsack. In my article, published August 2, 2010, I argued that while Breitbart was responsible for Sherrod’s downfall, a host of other actors — conservative pundits, officials such as Vilsack, and the American public — were also to blame for blindly accepting the claims or bowing to the commands of a man who, for all intents and purposes, had made a career out of smearing his opponents and enemies.
So, it surprised a couple of my friends to learn that I think America — if not the world — will be better off without Breitbart. Given the arguments in my previous article, which suggested that Breitbart wasn’t nearly as bad as many liberals made him out to be, how could I possibly believe that? Allow me to explain.
One of the first things I read after Breitbart’s death was an article in Rolling Stone by Matt Taibbi, titled “Death of a Douche.” As you can imagine, the article was not entirely kind to Breitbart. I say “not entirely” because Taibbi does give Breitbart some credit. Then again, Breitbart would have expected, and perhaps accepted, nothing less than Taibbi’s treatment. Remember that this was the man who called the late Sen. Ted Kennedy a “villain,” a “big ass motherf@#$er,” “a duplicitous bastard,” “a prick,” and “a special pile of human excrement” — all in just the three hours following Kennedy’s death.
In the days following Breitbart’s death, I posted Taibbi’s article on a couple of social media web sites, and passed it along to friends. As I said, I received pushback, which came in three kinds:
1. “You and others are giving Breitbart too much credit. He didn’t have that much power to do harm. He didn’t get Sherrod fired.”
2. “The guy had a wife and four kids. I heard he was a good father and husband, and a nice guy. Lay off.”
3. “How could you have rooted for anyone’s death? Or even be happy that he’s dead? Are you sick?”
Whether or not my friends had read my previous article on Breitbart’s exploits — some had, most hadn’t — they were throwing my own arguments back in my face. For reference, let’s go back and read what I wrote in that August 2010 essay:
“Let me be clear: this essay is neither pardoning the behavior of Breitbart … Rather, this essay argues that while people often find it convenient to blame the media, in this case Breitbart and FOX News, for social problems, they ought to realize that it is a social problem that feeds the media. That is, Breitbart and media outlets cannot be understood apart from the social and political context in which they exist. Why does Breitbart have the power he has? Why do people listen to Breitbart? Because they agree with him.”
“By blaming social problems on one man or one organization, we thus ignore the social reality that these men and organizations are backed by millions of Americans, and make the problem out to be much simpler than it really is. They would not exist in such powerful roles without the support of a sizable number of people.”
Here is where things get sticky:
“Contrary to what many would tell you, Breitbart and FOX News did not create the Tea Party and the extreme Right which wants to disable Obama and his administration in any and every way possible. Instead of blaming them for creating social problems, we ought to consider the complex and numerous factors that influence what we see represented and supported in the media, and ponder how much of an effort we’ve made in the battle against that with which we disagree. Anything less would wrongly simplify our problems and let everyone off the hook too easily.”
In hindsight, I admit that I used poor wording. In no way was I trying to excuse Breitbart and others for their actions. In many cases — certainly in the Sherrod case — Breitbart lied to or intentionally misled people to achieve his own goals. Without Breitbart’s actions, Shirley Sherrod would probably still have her job.
My central point was that Breitbart was not the only person responsible. He can’t create a controversy, or for that matter an entire movement, without some help. Many sections of the public were willing to be misled, perhaps because of their lack of skepticism or their prior beliefs about racism, big government, etc. This does not absolve Breitbart of his sins, it merely spreads the blame around. I hope this is now clear.
The other two points are not necessarily related to my previous article, but they are worth considering for a moment.
Regarding the second point: is there a rule written somewhere regarding how soon after a person’s death the public can discuss his or her merits (and demerits)? Is there a rule that only neutral things, or even good things are to be said, because that person’s family members or friends might be reading or watching? I certainly don’t recall these rules being in effect for the worst dictators of human history. I don’t recall them being in effect for even lesser evils, such as convicted mass murderers and terrorists. Do you recall hearing anything like: “Hey, why are you saying so many bad things about Timothy McVeigh?! He just died! And he had a family … and friends. Give it a couple of years. Stop being so mean.” I am going to guess not.
If McVeigh doesn’t work for you, try substituting Osama Bin Laden or Joseph Stalin (I’ve purposely not mentioned you know who). I hope you see my point. Sure, the people in question might have treated their significant others and friends well, and even been nice guys in private. But their actions had disastrous — or, rather, deadly — consequences for hundreds, thousands, even millions of people.
I am not comparing Breitbart to McVeigh, Stalin, or Bin Laden, who were explicitly murderous, but the facts are clear: Breitbart made a living by issuing intentionally misleading and/or inflammatory statements and behaving in a provocative manner, with the goal of destroying those who disagreed with him. He succeeded often. His statements and actions were public, and as such are perfectly fit for debate and criticism. No one knocked on his family’s door, or sent them personal letters (in fact, I extend sympathy to his family). Taibbi’s article was written in Rolling Stone, a magazine that often features public debate. It did not criticize his personal life. It criticized the (I would say radical) beliefs and actions he placed in the public square for all to see and feel. And it did this in retrospect, given his death. So, where’s the problem?
Regarding the final point, I didn’t see anyone rooting for Breitbart’s death, although I wouldn’t be surprised to learn some had. That seems disturbing. I considered Breitbart toxic, but not so much as a murderous dictator. He wasn’t killing people; he just knew how to unethically manipulate public opinion.
That said, I still think the world is slightly better off without Andrew Breitbart and his work. I wanted him to be exposed and less valued in our society, not dead. But I am not going to miss him. Are you?
Tuesday, March 13, 2012
[We are pleased to welcome Greg, who most recently wrote on Rationally Speaking about pet dietary views, to our roster of staff contributors. Welcome aboard, Greg!]
by Greg Linster
In microeconomics one of the assumptions (insert your favorite economist joke here) is that agents are rational, i.e., among other things, agents are utility maximizers. If real people actually behaved this way, though, they wouldn’t leave tips. Here’s why: when the check arrives the service has already been performed, and either the diner gets to keep their money or they can give it to the server. What would the rational agent do? Certainly she wouldn’t just give away money that she could keep for herself — after all, money provides a store of value and she’s a utility maximizer!
Some people object to this claim on legal grounds. But while one certainly has a legal obligation to pay the bill, there is no such legal requirement to leave a tip. Gauche as it might be, it’s not illegal to pay your bill at a restaurant and then leave without tipping. Economic theory therefore tells us that a rational agent shouldn’t leave a tip at a restaurant, yet I’ll venture a guess that most people actually leave quite generous tips. How can this be?
First of all, I suspect that some rational people still tip because they are compelled by the non-financial benefits that come with following social norms. A failure to tip could cause one to be deemed cheap or stingy by family, friends, or colleagues. Another reason might be that the diner frequents a restaurant often and wants good service the next time. I think many people simply value these types of non-financial benefits more than they do the financial cost of the tip.
So that may explain why some people tip, but it’s still not a sufficient explanation. Here’s a little thought experiment: let’s suppose you were on a business trip in a place you’ll likely never visit again, say, Fargo, ND. Furthermore, let’s suppose you have just finished a meal at a restaurant by yourself and have just paid the bill. Here, then, is the question: would you leave a tip with no social capital at stake? If you would still leave a tip in this situation then you might not be as rational as economic theory portrays you.
Essentially, I think the cultural practice of tipping allows restaurant owners to unfairly transfer risk to their servers. In other words, a stingy but legally compliant customer can harm the servers’ bottom line, but not the restaurant’s. Additionally, there is a lot of cultural ambiguity about the situations in which we are supposed to tip (including situations outside of restaurants). Foreign diners, for example, may be unaware that they are supposed to tip in the United States. What I want to argue is that both consumers and servers would benefit from abolishing the cultural practice of tipping: it would expose servers to less risk, and it would alleviate much of the confusion about which situations do and don’t warrant tips.
For the sake of making this a dialectical argument, let’s examine some of the reasons why the cultural practice of tipping might be a positive thing. First off, servers may claim (in fact I’ve heard some friends say this) that while some customers may stiff them, others tip very graciously. If that’s the case, I think we need to figure out the net effect. Also, it’s often believed that servers wouldn’t perform their services if there were no possibility of earning a tip, but that can’t possibly be true. Many service personnel (e.g., grocery baggers and auto mechanics) adequately fulfill their duties without the prospect of a tip, so I’m quite perplexed by this argument.
In line with that last point, I’ve become increasingly interested in the following question: why is there a cultural norm to tip servers at restaurants, bellhops, and taxi drivers, but not clerks at the grocery? How do these norms develop in the first place? Strangely, when I visit the grocery store my groceries usually get bagged quite well. And my car mechanic, who arguably has a much more important job in terms of protecting my safety than does a server at a restaurant, does not accept tips. He does his job astoundingly well too.
I think the explanation for this puzzle is rather simple: the true cost of the service, in this case bagging groceries or car maintenance, is actually reflected in the explicit price, which I conveniently know upfront. If the wages for these professions weren’t fair, people wouldn’t do the work. Shouldn’t the same be true of servers in a restaurant?
Let’s say that restaurant owners raised their prices by 20 percent and paid their servers appropriately according to the market signals. Servers would, then, be paid a wage that reflects the true cost of their service and skills, just like grocery baggers and auto mechanics. The restaurant owner would simply reflect this additional cost in the menu prices. The $10 meal would now cost $12.
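The arithmetic behind this proposal can be sketched in a few lines of code. This is only an illustrative toy, assuming (as the article does) a flat 20 percent markup standing in for the customary tip; the function name is mine, not anything from the text:

```python
# Toy sketch of folding a customary tip into the menu price.
# TIP_RATE is the 20% figure assumed in the article, not a universal norm.
TIP_RATE = 0.20

def menu_price_without_tipping(current_price: float) -> float:
    """Price the owner would charge if the tip were built into the menu,
    with the extra revenue passed on to servers as wages."""
    return round(current_price * (1 + TIP_RATE), 2)

# The article's $10 meal would now cost $12 up front.
print(menu_price_without_tipping(10.00))  # -> 12.0
```

The point of the sketch is simply that the diner's total outlay is unchanged; what changes is that the server's share arrives as a wage rather than as a discretionary transfer.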
While it’s unlikely that the irrational practice of tipping will be abandoned any time soon, I hope that I’ve demonstrated that it is, at least theoretically, problematic and that the arguments for keeping it are rather weak. Irrational as it may be, I will continue to tip quite generously until something does change.
Monday, March 12, 2012
Julia and Massimo question him on this point, and ask him for his thoughts on what can be done to improve scientific literacy. As the founder of the Center for News Literacy and the Center for Communicating Science, Schneider has plenty of thoughts to share -- including making scientists take improv classes. Should science communication involve more storytelling? And is there any way to take advantage of new, online media formats to remedy some of the weak points in the science communication process?
Sunday, March 11, 2012
Friday, March 09, 2012
Albert Camus famously wrote that “There is but one truly serious philosophical problem and that is suicide.” It’s more than a slight exaggeration (well, there’s existentialism for ya!), but the phrase came to mind during a recent evening of web surfing, when I found myself reading a brief essay on the ethics of suicide published by a New Zealand site providing “resources for life related issues” (of which self-imposed death clearly is one). There is, of course, a long philosophical tradition of discussions about the ethical permissibility of suicide (Plato and Aristotle: against it; the Roman Stoics: in favor; pretty much everyone in the Christian tradition: against it; pretty much every Enlightenment philosopher: in favor). But what struck me as particularly interesting is the very different takes on suicide of two of the most influential philosophers of all time: David Hume and Immanuel Kant. (For a more comprehensive philosophical look at the issue see the entry in the ever more excellent Stanford Encyclopedia of Philosophy.)
Interestingly, both philosophers approached it from a deontological (duty-based) perspective. For Kant, predictably, the verdict is negative: suicide is not morally permissible on the general grounds that it diminishes the intrinsic worth of a human being to the status of an animal. I’m not sure what to make of this, since I don’t think human beings have any “intrinsic” worth. Whatever worth we have we acquire through our social interactions, and I think I could name a fair number of human beings who are less “intrinsically worthy” than your dog.
Kant considers a common argument in favor of the morality of suicide, that it may be permissible on grounds of personal freedom, since there is no (or relatively little, of an emotional type) harm to third parties. This holds no water with Kant, because he thinks that self-preservation is the highest duty we have to ourselves. Now, it is easy to argue that self-preservation is a biological imperative, but of course this does not at all translate into a duty. The problem with Kant’s general approach is that it is not clear toward whom we have such duties, other than a vague — and metaphysically highly questionable — “universal moral law.” Indeed, a reasonable retort is that a human being ought to have the freedom to decide when pain (physical or psychological) is sufficiently unendurable that one is better off terminating one’s life (let us bracket for a moment the possibility of a mental illness, to which I will return below).
Kant also proposes what appears to be a pragmatic objection to suicide, impugning the moral character of the person who contemplates it: “He who does not respect his life even in principle cannot be restrained from the most dreadful vices.” Needless to say, there is no evidence whatsoever that this is the case, and a pragmatic argument that flies in the face of facts doesn’t represent much of a promising avenue.
Hume, on the contrary, thinks that suicide is morally permissible, also on the grounds of his analysis of duties. He talks about three types of duties: to god, to ourselves, and to others. I will skip the first category, since I don’t think there are any gods toward whom we have any duties.
In terms of duties to others, Hume claims that in committing suicide we do not harm others (again, with the partial exception of the distress we may cause to loved ones). However, we also — by necessity — cease to do any good for society, which may present a problem. Hume’s response here is that our duties to society are in proportion to the benefits we receive from society (a form of pragmatic reciprocal altruism, if you will), and since we do not receive any benefits from society after we die (obviously), it follows that we do not have any duties toward it either. More broadly, in Hume’s words, “I am not obliged to do a small good for society at the expense of a great harm to myself.”
What about duties to ourselves? Hume claims that “we have such a strong natural fear of death, which requires an equally strong motive to overcome that fear,” meaning that we do not contemplate the extreme measure of suicide lightly. And the latter — to decide to terminate our life only under extreme circumstances of duress — is the only duty we may possibly have toward ourselves.
It should be clear that I find Hume’s arguments much more persuasive than Kant’s, but the question remains of the permissibility of intervention to dissuade someone from committing suicide. This is often framed in terms of the rights of autonomous moral agents vs the positive role of a certain degree of “paternalism” (a term that should not carry an automatic negative connotation, as it too often does) on the part of society.
Clearly, if there are signs of mental illness, then intervention by friends, relatives and professionals is warranted. But one can also conceive of plenty of situations — such as chronic and unendurable pain, deep but not pathological psychological distress (for instance by an old person who lost a lifelong companion), and terminal physical illness (particularly of a progressively debilitating type), where one would have to start wondering about the motives of people who allegedly wish to help.
The basic question ought to be, it seems to me, what is the best long term interest of the agent? Most of the time this will be to live as long (and as healthily) as possible. But when this is not the case — as judged by the agent himself, provided that he is capable of sound judgment — then the duty of friends, relatives and professionals switches toward understanding, moral support, and even (at least when allowed by the law) actual material assistance.
Of course, a good number of movies have explored the dimension of assisted suicide, and I particularly remember The Barbarian Invasions, a 2003 French movie set in Montreal, chronicling the last days of a terminally ill man who decides to die on his own terms, surrounded by friends and family. It is hard to imagine a better way to go, and even harder to conceive of a reasonable and compassionate objection to this kind of affirmation of moral autonomy.
Monday, March 05, 2012
A little while back I tackled the perennial question of whether, and in what sense, philosophy makes progress. But that was by means of a fictional dialogue between two robots, part of my “5-minute Philosopher” series, and it’s time to revisit the topic. The occasion has been provided by a lively meetup discussion I facilitated a few weeks ago, based on an article by Toni Vogel Carey that appeared in Philosophy Now magazine.
Carey sets up the discussion by arguing that philosophy stands somewhere between science and the arts, where the first one is the common paragon of a cumulatively progressive enterprise, while within the realm of the latter the whole idea of progress appears to be ridiculous. Although there is much that I agree with in Carey’s article, this set-up strikes me as questionable, particularly because the author counts mathematics as a science. Math is certainly useful to science (and so is logic and, sometimes, even art!), but it ain’t the same thing as science. The latter is concerned with empirically based hypothesis testing, while math makes progress more like logic (a branch of philosophy!), i.e. by a deductive exploration of the consequences of sets of axioms (in logic and philosophy these are called assumptions). So math and logic represent fields clearly characterized by cumulative progress which are not science, thereby undermining the idea that science is the paragon for progressive intellectual enterprises.
Moreover, some of my fellow meetupers even questioned the idea that art doesn’t progress. Yes, as Nobel biologist Francois Jacob (cited by Carey) said, “Beethoven did not surpass Bach in the way that Einstein surpassed Newton,” but the key qualification here is in the (same) way. Beethoven explored ways of composing hitherto unknown to musicians, which has to count as progress in a meaningful (though obviously not scientific) sense of the term. I pointed out during that evening’s discussion that the invention of perspective in Renaissance painting also was an unquestionable case of progress in art, as it made possible painting in ways that were simply not available before. I’m sure other examples can be easily found, especially by historians of music and art.
The heart of Carey’s article, however, concerns three general types of progress in philosophy, each accompanied by an example. The first one is what the author refers to as “progress as destruction.” A lot of what goes on in philosophical research is showing that someone else got it wrong, thereby moving the debate onto higher ground in logical space, so to speak. Carey’s example is Edmund Gettier’s famous demonstration that Plato was wrong when he defined knowledge as “justified true belief.” Gettier did this in a very short paper, using counterexamples. The one Carey provides is actually clearer than the one originally presented by Gettier. Imagine you were watching the final of the US Open a few years back and saw John McEnroe win the match point against Jimmy Connors. Assume further that it is indeed true that McEnroe won the Open that year. Apparently, you have a belief that is both true (McEnroe did win) and justified (you saw the final play). But it turns out that — because of a technical glitch — you actually saw a replay of a similar match point that had allowed McEnroe to beat Connors the year before! Gettier would argue that you have formed a belief that is both true and justified, and yet does not amount to knowledge. Now, put away the discussion of how one could fix Plato’s definition (no one has succeeded so far), because we need to proceed to Carey’s second type of philosophical progress.
This is progress understood as clarification, the sort of thing that Wittgenstein (himself not exactly a shining example of clarity) was presumably thinking of when he said that “Philosophy is a battle against the bewitchment of our intelligence by means of language.” The idea is that philosophers understand certain issues better when they can analytically parse distinct meanings or applications of given concepts. Carey’s example is John Rawls’s analysis of rules within the context of rule- (as opposed to act-) utilitarianism. Rawls distinguished “summary” and “practice” concepts of rules, where the first one works as a heuristic that summarizes past decisions, while the latter examines particular cases of application of a given rule. Without getting into details, Rawls’ approach helped to make sense of the advantages of rule-utilitarianism over act-utilitarianism, at the same time that it also made clear that rule-utilitarianism is barely utilitarianism at all, and falls uncomfortably close to its chief rival, deontology (i.e., rule-based ethics).
The third and last situation considered by Carey is “progress as doubt,” in which philosophers provide a needed counter to over-enthusiastic practitioners of their own and of other disciplines (e.g., science), by pointing out just how much we really don’t know. Here David Hume’s famous problem of induction comes to mind. Hume argued very effectively that induction — on which much everyday reasoning and especially scientific inference are based — cannot be logically justified on independent grounds. (If you think you can get out of this by arguing something along the lines of “induction works,” think again: that would be invoking inductive reasoning to support inductive reasoning, and you’d be open to one of the worst charges in philosophical reasoning, that of circularity.) One cannot help but think of Socrates, and of the Delphic Oracle’s statement that he was the wisest man in all of Greece, apparently on the basis that he knew that he didn’t know much.
There are certainly other examples one could line up following Carey’s approach. Quine’s criticism of the previously universally accepted distinction between synthetic and analytic statements; Popper’s proposal that scientific hypotheses have to be falsifiable, followed by a Duhem-Quine inspired argument showing that falsification doesn’t work; the increasing sophistication of different versions of utilitarian ethics (from Bentham to Mill to Singer); the various moves and counter-moves in the debate in philosophy of science between realists and anti-realists; and so on.
What all of these modes and examples of progress in philosophy have in common is that they use analysis to parse and explore the logical space in which philosophical discourse exists. One begins with a given set of assumptions and works out their implications, until someone points out a problem with some of those implications, which requires either the addition of other postulates or the abandonment of the initial set and its replacement by another that may work out better. In this sense, philosophical analysis, again, is much more similar to mathematics than to science, and the discipline of logic represents a great example of it, both because it is a branch of philosophy that has clearly made progress, and because it can be said to actually include mathematics, at least in the sense that math is also about the application of deductive reasoning to uncover the properties of systems of axioms. That said, of course, I do not expect my colleagues in the math department to move in with us, though they would certainly be welcome...
Thursday, March 01, 2012
For some time now I have been conceding — on this blog, on my podcast, and in informal conversations — that vegetarians have the better moral (and health related) argument over most of the alternatives, with a couple of caveats. Why, then, have I kept behaving as an omnivore? Akrasia, Aristotle would say. It’s our innate weakness of the will that represents a major obstacle to human flourishing and a eudaimonic life.
Still, the inconsistency has been bothering me, despite the well-known quote by Walt Whitman: “Very well then I contradict myself, (I am large, I contain multitudes.)” Cute, but a lousy excuse for an inconsistent personal philosophy. Better to practice what I preach and engage in a bit of reflective equilibrium, the philosophical method by which we continually adjust our beliefs and practices in light of reflection on other people's arguments and on the available facts.
The final straw that caused me to embrace a different philosophy of eating came a few nights ago, when I was watching a 2005 advocacy piece called Earthlings. Directed by Shaun Monson, it presents a pretty brutal look at how we treat animals, not just in the sphere of food production, but also as pets, for the production of clothing, for entertainment, and for scientific research. Earthlings has a declared agenda, and not everything that is shown or said there should be taken as correct or fairly representative. Nonetheless, the piece simply translated into relentlessly disturbing images what I pretty much already knew to be the case, and had tried hard to ignore. Hence my resolution to do some reflecting and adjusting as soon as the movie was over.
Now, there are two major reasons to change your dietary habits: health and ethics. In terms of health, as Julia and I explained during the podcast episode, it turns out that vegetarians and people who eat fish and poultry have the best long term outcomes, followed by vegans and by red meat eaters, other things being equal (which they often aren’t, since vegetarians tend to take good care of themselves in general, thus making it a bit more complicated to disentangle the effects of diet per se from those of other relevant variables).
But my recent philosophical realignment has been motivated by ethics, not health practices. When it comes to the ethical domain, at the cost of simplifying things a bit, there are two issues pertinent to human use of animals: treatment and exploitation. To make the distinction clear, one could argue that keeping a pet dog or cat “exploits” them for the purposes of human companionship. Yet most people — including yours truly — would not object to the practice as long as the animals in question are treated well (i.e., not abused, well fed, taken care of in terms of health, and even of their psychological needs). And of course domesticated animals have been bred by humans for precisely that purpose, so that one can even argue that it is in the best interest of those animals to be human pets, since it aligns with their (modified) genetic instincts. To put it yet another way, the animals are getting something (a cozy, predator-free, and healthier life than they would be able to pursue in the wild) in return for the companionship they provide.
An obvious objection to this line of argument is that the animals didn’t ask for this arrangement, and that the relationship is intrinsically asymmetrical. True on both counts, but we live with asymmetrical relationships all the time, for instance between employers and employees, or between parents and children (and, needless to say, children didn’t ask to be born either). Moreover, animals are simply not on the same cognitive level as humans, which means that we are the ones who have to take into consideration both our own and the animals’ interests as far as it is possible. If that smacks of paternalism, just remember that that’s precisely what you do with your children. (Yes, I know that the goal with children is different, since they will grow up and eventually become autonomous agents, though even that’s not true in the case of severely mentally or emotionally deficient ones.)
This distinction between treatment and exploitation, I suspect, is also at the root of some differences among vegetarians themselves: vegans, for instance, make the argument that eating eggs and dairy products is unethical on the grounds that they are derived by exploiting animals. Presumably, ovo-lacto-vegetarians do not find this argument entirely convincing. Indeed, the latter seem to be drawing the line at treatment, not use: they will eat cheese, milk and eggs as long as the animals are not subjected to artificial hormonal treatments and are given a reasonably healthy diet and life style (e.g., free ranging chicken and cows).
The treatment-exploitation divide, then, also helps us make sense of why some vegetarians think it is okay to use, say, horses for races, or a range of animals for transportation of people or goods. They may see these activities as relatively benign as long as the animals are well treated, since each party (again, asymmetrically) gets something out of the symbiosis. For instance, horse racing may be acceptable on the condition that the horses are well taken care of, while a rodeo could well be unacceptable because the animals are usually abused before and during the performance. (I do admit that there are plenty of grey areas here, but I think the general picture holds.)
If I am okay with using animals, including possibly as food, as long as the good treatment criterion holds, what sort of diet should I then follow? At a minimum, a vegetarian diet (as opposed to vegan), if I take care to check that my eggs and dairy products come from free ranging animals. Indeed, one can consistently (from an ethical perspective) go further and include some meat, beginning with fish, as long as it is not the result of the type of large scale industrial practices that are so horrifically depicted in Earthlings (and as long as one also doesn’t run into environmental problems, such as the possibility of over exploitation of fisheries leading to the near extinction of some species).
If the above makes sense, or is at least more coherent than my previous fundamentally omnivorous attitude, then in practice I would have to make vegetables and fruits the bulk of my diet, followed by eggs and dairy, if I'm reasonably sure of the benign treatment of the animals involved (possibly easier for an upper middle class person living in New York around the corner from a large Whole Foods store, more difficult for others), occasionally by poultry (again, assuming free ranging etc.), and by fish (once I check the advisability of eating a particular species based on ecological criteria — for instance using the excellent iPhone / Android app put out by the Monterey Bay Aquarium). Pretty much all red meat will be out, and so too will be poultry, fish, eggs and dairy in the many cases in which I will not be able to ascertain that my minimal conditions for humane treatment have been met. To complicate things further, I have decided that there simply is no justification for eating animals that are capable of sophisticated cognitive processes, which includes humans — there goes my chance for cannibalism — whales, dolphins and, alas, squid and octopi. Oh well.
So, this is where the most recent round of reflective equilibrium has led me. I'm sure there is room for improvement, so by all means, take aim with your comments.