About Rationally Speaking
Rationally Speaking is a blog maintained by Prof. Massimo Pigliucci, a philosopher at the City University of New York. The blog reflects the Enlightenment figure Marquis de Condorcet's idea of what a public intellectual (yes, we know, that's such a bad word) ought to be: someone who devotes himself to "the tracking down of prejudices in the hiding places where priests, the schools, the government, and all long-established institutions had gathered and protected them." You're welcome. Please note that the contents of this blog can be reprinted under the standard Creative Commons license.
Thursday, May 31, 2012
How do you know what you think you know? What counts as knowledge and what doesn’t? These questions speak to a great semantic problem, namely, trying to define what ‘knowledge’ is. The study of the nature of knowledge falls within the domain of a branch of philosophy called epistemology, which is largely the subject matter of David Weinberger’s book Too Big to Know.
According to Weinberger, most of us tend to think that there are certain individuals — called experts — who are knowledgeable about a certain topic and actually possess knowledge of it. Their knowledge and expertise are thought to derive from their ability to correctly interpret facts, often through some theoretical lens. Today, like facts, experts too have become ubiquitous. It seems we are actually drowning in a world with too many experts and too many facts, or at least lacking the ability to pick out the true experts and the important facts.
Most of us are appalled, for instance, when we hear the facts about how many people are living in poverty in the United States. However, these facts can be misleading, and most people don’t have enough time to think critically about the facts that are hurled at them every day. There might in fact be “X” number of people living in poverty in the United States, but did you know that someone with a net worth north of one million dollars can technically be living in poverty? How the government defines poverty is very different from the connotation that many of us attach to that word. Income is the sole factor used to determine whether someone is “living in poverty,” but this bit of information seldom accompanies the facts about how many people are “living in poverty.”
I recently posed a question on Facebook asking my subscribers if a fact could be false. To my surprise, there was much disagreement over this seemingly simple question. Weinberger reminds us that facts were once thought to be the antidote to disagreement, but it seems that the more facts are available to us, the more disagreements we seem to have, even if they are meta-factual.
It’s unquestionable that today’s digitally literate class has more facts at its fingertips than it knows what to do with. Is this, however, leading us any closer to Truth? Well, not necessarily. This is because not all facts are created equal, and not all facts are necessarily true. Facts are statements about objective reality that we believe to be true. However, while a fact can be false, truth holds regardless of our interpretation of it — we can know facts, but we can’t necessarily know Truth.
In the book, Weinberger draws an important distinction between classic facts and networked facts. The late U.S. Senator Daniel Patrick Moynihan famously said: “Everyone is entitled to his own opinions, but not to his own facts.” What he meant by that was that facts (what Weinberger calls classic facts) were thought to give us a way of settling our disagreements. Networked facts, however, open up into a network of disagreement depending on the context in which they are interpreted. “We have more facts than ever before,” writes Weinberger, “so we can see more convincingly than ever before that facts are not doing the job we hired them for.” This seems to be true even amongst people who use a similar framework and methodology for arriving at their beliefs (e.g., scientists).
One of Weinberger’s central arguments is that the Digital Revolution has allowed us to create a new understanding of what knowledge is and where it resides. Essentially, he claims that the age of experts is over, the facts are no longer the facts (in the classical sense), and knowledge actually resides in our networks. While this is an interesting idea, I’m not sure it’s entirely true.
Knowledge is a strange thing, since it depends on the human mind in order to exist. I have a stack of books sitting on my desk, but I don’t point to them and say there is a stack of knowledge sitting on my desk. I simply don’t yet know whether there is any knowledge to be gleaned from those books. For this reason, I don’t think knowledge can exist on networks either. Knowledge requires human cognition, which means that it exists only in experience, giving it a strangely ephemeral character. I cannot unload my knowledge, store it somewhere, and then retrieve it at a later date. It simply ceases to exist outside of my ability to cognize it.
Knowledge, Weinberger argues, exists in the networks we create, free of cultural and theoretical interpretations. It seems that he is expanding on an idea from Marshall McLuhan, who famously said, “The medium is the message.” Is it possible, then, that knowledge is the medium? The way I interpret his argument, Weinberger seems to be claiming that the medium also shapes what counts as knowledge. Or, as he himself puts it, “transform the medium by which we develop, preserve, and communicate knowledge, and we transform knowledge.” This definition of knowledge is, however, problematic if one agrees that knowledge can only exist in the mind of a human (or comparable) being. To imply that a unified body of knowledge exists “out there” in some objective way and that human cognition isn’t necessary for it to exist undermines any value the term has historically had. Ultimately, I don’t agree with Weinberger’s McLuhanesque interpretation that knowledge has this protean characteristic.
In a recent essay in The Atlantic, Nicholas Carr posed the question: “Is Google Making Us Stupid?” His inquiry spawned a flurry of questions pertaining to our intelligence and the Net. Although Weinberger has high hopes for what the Net can do for us, he isn’t necessarily overly optimistic either. In fact, he claims that it’s “incontestable that this is a great time to be stupid” too. The debate over whether the Internet makes us smarter or dumber seems silly to me, though. I cannot help but conclude that it makes some people smarter and some people dumber — it all depends on how it is used. Most of us (myself included) naturally like to congregate in our digital echo chambers and rant about things we think we know (I suspect this is why my provocative “Who Wants to Maintain Clocks?” essay stirred up some controversy — most RS readers don’t usually hear these things in their echo chambers).
Weinberger also argues that having too much information isn’t a problem, but actually a good thing. Again, I disagree. In support of this claim, he piggybacks off of Clay Shirky, who tells us that the ills of information overload are simply filtering problems. I, however, don’t see filtering as a panacea because filtering still requires the valuable commodity of time. At some point, we have to spend more time filtering than we do learning. An aphorism by Nassim Taleb comes to mind: “To bankrupt a fool, give him information.”
Overall, Weinberger does a nice job of discussing the nature of knowledge in the Digital Age, even though I disagree with one of his main points that knowledge exists in a new networked milieu. The book is excellent in the sense that it encourages us to think deeply about the messy nature of epistemology — yes, that’s an opinion and not a fact!
Tuesday, May 29, 2012
I don’t get it. I don’t doubt that what SpaceX is doing, and what surely other commercial companies will soon follow suit in doing, is important, and yes, even historic. But I seriously doubt that it has much to do with space “exploration.” More likely, space exploitation. Don’t get me wrong: space is, to some extent, a resource for humankind, and it is perfectly reasonable for us to exploit it. And history has certainly shown that the best way to accomplish that sort of task is to hand it to the private sector (of course, that’s not at all without potentially extremely serious drawbacks in and of itself, but that’s another story). What history has also clearly shown is that basic science and exploration are best done by scientists who work without the constraints of financial interests, and these days this means government funding (in Galileo’s time it was the government too, albeit in the form of some rich noble family running the city).
When Cain optimistically says that SpaceX is an organization with “both the willingness and resources to push outward into space” I really wonder on what he bases his judgment. Why should a private enterprise push the human frontier into outer space? It’s uncharted and dangerous territory, and that’s not what private companies are in the business of doing: the risk is high, the likely short-term gain low. Take the example of medical research here on Earth. Yes, we can all point toward the occasional new breakthrough drug developed by a pharmaceutical company. But most of the basic research necessary for those applications is actually done with NSF- and NIH-funded grants, usually in government labs or universities. Which, again, actually represents an obvious and logical division of labor between private and public, or between the applied and academic worlds (yes, I know, no such distinction is really sharp and without nuance, but there still is a distinction).
I think the current enthusiasm over SpaceX and what will follow ought to be at the least tempered by a sober pondering of a sad fact: NASA is no longer in the business of doing much of relevance in space. Don’t take my word for it, consider instead what former astronaut Story Musgrave recently said: “COTS [the Commercial Orbital Transportation Services which made SpaceX’s Dragon mission possible] is a default program which spun out of failure ... What is the space vision today? Where is the visionary? We’re not going anywhere... There is no where, there is no what, and there is no when. ... There is no Mars program, none. There is also no Moon program. There is no asteroid program.” Not exactly an enthusiastic endorsement of where we are arguably un-boldly not going, is it?
Please understand, this isn’t a condemnation of capitalism or of private enterprise. Nor is it a naive endorsement of the marvels of government programs (I’m aware, for instance, that the “discovery” of the Americas was a government-financed exploitative enterprise, though that was a few centuries ago, and the government in question was an absolute monarchy). It is a simple worry that basic research done in the public interest is highly unlikely to be carried out by companies whose main (only?) concern is the bottom line. I do not doubt that the people working for SpaceX are genuinely interested in what they do, and maybe some of them even really think that it has to do with space exploration. If so, I wish them good luck. But when I saw the images of the various Space Shuttles being flown all over the country to be turned into permanent museum pieces I couldn’t help feeling sad about a pioneering age coming to a premature end. (Indeed, one could argue that the Shuttle itself was already not at all about exploration, but reflective of an inward-looking turn at NASA, more and more concentrated on what can be done in orbit around Earth than on going after the big prizes out there.)
I also find it interesting that I could find very little in the way of critical commentary even among my fellow skeptics (maybe I didn’t look hard enough: anyone out there have relevant links?). It seems we ought to have a discussion about these issues, perhaps even start a grassroots push nudging Congress to re-finance and re-conceive NASA to work in parallel with SpaceX and other private enterprises — not in the subordinate fashion that is unfolding under our very eyes, but on the model of the division of labor between academic and private research. My friend Neil deGrasse Tyson has recently argued that we need new bold space exploration initiatives so that the next generation can dream big. I can quibble with his optimism, and perhaps I will in a future post. But his vision of space exploration is far more enticing than the kind of mining and tourism that is likely to develop during the next few years.
Monday, May 28, 2012
* Philosophy Bites interviews Adina Roskies on the relevance (or lack thereof) of certain neuroscience results to free will.
* David MacKay’s fantastic book Sustainable Energy without the Hot Air is a really wonderful starting point for energy policy research; although it is a bit UK-specific, the techniques (he shows his work!) are all nicely adaptable to other countries, provided you’re capable of a Google search. Even better, for those who are allergic to paying for stuff, there is a free online version.
* Richard Yetter Chappell at Philosophy, et cetera takes on an intuitive objection to Peter Singer’s “drowning child” thought experiment.
* Trends in science fiction’s vision of the future, by Annalee Newitz, complete with interesting speculations on where the trends come from. It appears the future is currently receding from us.
* Optics Picture of the Day is a fantastic site, which not only presents eye candy from the world of optics, but explains it. Do not click if you intend to be productive for the next hour.
* Stephen Novella of NeuroLogica on cognitive decline with aging. Actionable: “risk factors include hypertension, obesity, smoking, a high saturated fat diet, and social isolation. Protective factors include physical activity, staying mentally active and socially engaged, moderate alcohol consumption, and vegetable and fish consumption.”
Saturday, May 26, 2012
Recently there has been a prominent public debate between Congressional Republicans and religious figures over the new federal budget authored by GOP Rep. Paul Ryan. In case you haven’t heard about this, or you’ve only given it slight attention, here’s a short rundown.
On March 20, Rep. Ryan proposed a budget that would drastically cut government spending by slashing social programs and lowering tax rates on corporations and the wealthy. Several faculty members at Georgetown University soon condemned Ryan’s budget as immoral, inconsistent with Catholic teachings on ethics:
“We would be remiss in our duty to you and our students if we did not challenge your continuing misuse of Catholic teaching to defend a budget plan that decimates food programs for struggling families, radically weakens protections for the elderly and sick, and gives more tax breaks to the wealthiest few. … In short, your budget appears to reflect the values of your favorite philosopher, Ayn Rand, rather than the Gospel of Jesus Christ.”
The U.S. Conference of Catholic Bishops was also critical of Ryan’s budget, arguing that it conflicted with the tenets of his supposed religion (Ryan is a Catholic).*
Ryan responded that, on the contrary, his plan would create the necessary economic growth to lift people out of poverty, as well as manage the government’s crippling debt:
“The Holy Father himself, Pope Benedict, has charged that governments, communities and individuals running up high debt levels are ‘living at the expense of future generations, and living in untruth.’ … Our budget offers a better path consistent with the timeless principles of our nation’s founding and, frankly, consistent with how I understand my Catholic faith. … We put faith in people, not in government.”
Now that we’re up to date, let’s take a step back.
This is a familiar debate for anyone who pays attention to American politics. Politicians — along with other public officials and social figures — often use their religious beliefs to justify legislative action. Yet, once again, few people are stating the obvious: that it is completely inappropriate for a public policy debate to center on the religious reasons for or against a proposed law.
Before moving any further, let me state that I’m rather sick of hearing Jesus’ name mentioned in policy debates, if only because it is impossible to know what a person who lived two thousand years ago, and about whom very little is known, would have thought about specific political issues in the year 2012.
I fully admit here that the Catholic Church has, for once, taken a decent moral stance. But that’s not the point. While religiously based political efforts sometimes turn out well, and secular liberals ought to at least consider working with such groups on these issues, the religious method is susceptible to awful consequences (think: marriage equality, reproductive rights, stem cell research; the list goes on). The method is as important as, if not more important than, the consequences.
Contrary to what many people think, secularism is not the atheistic position that religious belief has no place in society whatsoever. Secularism is the idea that you can believe what you would like, but your religious beliefs have no place in public policy debates. It asks that laws be based not on faith, which is private and accessible only to believers, but on reason and scientific evidence, which are public and accessible to all. This helps to ensure that our laws are as rational as possible and don’t harm people who practice a different faith, or no faith at all.
Some will counter here that religious views cannot be prevented from entering political discourse and lawmaking.** This is a point based on the simple observation that religious belief, as a matter of fact, is often used in policy debates. Yet that doesn’t mean we should encourage religious views in policy debates, or that we do not have any other option available to us.
I submit that it is simply unnecessary to call on one’s religious views, as there are plenty of secular moral arguments both for (e.g., Rand-style argumentation) and against Ryan’s budget proposal.
As you might recall, I have previously argued on this blog that economic debates should include a strong ethical component:
“Economic thinking cannot be divorced from morality because one’s values determine which economic structure he or she prefers. There are no such things as purely economic ends divorced from all other ends because economic decisions are made based on moral values. They also have a moral impact on other people.”
My views on how this works methodologically mirror those of Massimo here at Rationally Speaking. First we figure out our foundational assumptions. For instance, what is the nature of human behavior and desires? How do humans act and interact? What should we value? How should we influence our culture so that it fosters those values? What are — or should be — our shared moral goals?
Then we assess which economic ideas and systems to employ so that our assumptions can be taken into account and that our goals can be realized. Economics is not just about studying and applying knowledge of trends, numbers, math, and business practices. It is also about taking into account the reality of human behavior and our moral concerns before making economic decisions — and then considering the moral consequences of those decisions.
So, is there a good moral (non-religious) response to a specific situation such as Rep. Ryan’s budget?
As I’ve written before, I believe in a multi-faceted approach to morality. I believe we ought not harm other creatures capable of experience and agency. I believe people deserve certain rights and respect because of their existence, and that humans ought to help each other, where and when possible, to have a decent living situation. And I believe we ought to hold tight our duties, practice our obligations, and cultivate a virtuous moral character.
Unfortunately, Rep. Ryan's proposal severely slashes or essentially eliminates programs that help children, the poor, and the elderly. This is both unethical and ineffective. Ryan could have ended tax breaks for corporations and the ultra-rich — both of which are making record profits — or cut the bloated defense budget. Instead, Ryan is seeking to shrink governmental programs that have positive moral value and impact. If you want to solve our problems, do you really think it best to focus on privatizing and cutting health care and other social safety nets for the worst off in this country? Would it not be better to stop giving breaks to the wealthiest and most secure in order to improve programs that help many people lead a decent life?
That is why Ryan’s budget proposal is immoral. And my argument did not require reference to any religious figure or holy book.
One can reasonably argue that public policy ought not to be based on religious belief in any way, as it would necessarily favor religious views over non-religious views, or specific religious views over others. That clearly violates the Constitution and over sixty years of Supreme Court jurisprudence. But one can also reasonably argue that we need not consider religious beliefs because there are plenty of available secular arguments at hand to deploy for and against these policy debates.
Public policy should center on the secular, not the religious. And while that certainly won’t guarantee unfailingly rational government, it might bring us a little step closer to that lofty goal.
* On another note, this is an interesting intersection to ponder: when one’s religious or moral views conflict with one’s views on government, and vice versa. It’s an example of tension between conflicting values.
** Obviously many would argue that religious belief is a wonderful thing, and that Christianity is or should be the national religion, but I do not take up that argument here.
Thursday, May 24, 2012
The Stone is a philosophy blog published by the New York Times. The idea is to “feature the writing of contemporary philosophers on issues both timely and timeless.” It’s a good idea, and sometimes it even works nicely. Several colleagues have written excellent entries for The Stone, including Graham Priest on paradoxes; a thoughtful recent essay by Gary Gutting on the reliability (or lack thereof) of the social sciences; and one by Timothy Williamson about the neutrality (or not) of logic in philosophical discourse.
Then again, The Stone has had its share of really bad posts. Some of them are simply obscure and clearly not aimed where they should be, at the general public. Others are downright idiotic (sorry, but that’s the only appropriate word), like the recent musings by Michael Marder about the morality of eating peas, which Leonard has cut to shreds here at Rationally Speaking (we opened a can of peas to celebrate). This mixed record is highly unfortunate, because it is rare for the philosophical profession to get such high-profile exposure, something it needs now more than ever.
Anyway, when I heard that The Stone was beginning to publish a three-part series on the philosophical aspects of one of my favorite science fiction authors, Philip K. Dick, I was at once excited and worried. It turned out that the latter feeling was more justified than the former.
The basic idea of the series is a good one: exploring the links between high quality sci-fi and philosophical themes, essentially treating sci-fi literature as a version of philosophical thought experiments. Hell, I even just taught a very successful course at CUNY on sci-fi and philosophy (using this text).
The Dick series has been penned by Simon Critchley, a continental philosopher who teaches at the New School in New York, and whom some of my colleagues blame for the unevenness of the Stone blog. I do not know Critchley, though I have to admit to a standing position of concerned skepticism about continental philosophy in general (authors in that tradition often have good points and raise serious concerns, but more frequently than one would wish they waste them via obfuscatory writing, bad arguments, and hyped claims — just pick up any Derrida or Foucault to see what I mean). Still, I began to read Critchley’s analysis of Dick with much interest and high expectations. And the more I read, the more my skepticism about the continental philosophical approach found itself validated. Let me give you an (extended) taste.
From part 1. Not much philosophically going on here. The entry is a set up for what’s to come, introducing readers to Dick’s work, but especially to the recently published “Exegesis,” a 950-page edited collection of the author’s notes to himself that were meant to explore and explain his “2-3-74” episode. This refers to a series of what can only be described as drug-induced mystical experiences that Dick had in February and March of ’74, and that are credited with spurring him to write like a madman until he died at age 53 in 1982 (he managed to pen an astonishing 45 novels and 121 short stories!).
Here was the first sign that Critchley was about to take an unnecessary turn in his analysis of Dick’s experiences and writing: “Could we now explain and explain away Dick’s revelatory experience by some better neuroscientific story about the brain? Perhaps.” No, not perhaps. At the very least “very likely,” and better yet “almost certainly.” And there is a difference between explaining and explaining away, with neuroscience being in the former, not the latter business. Of course, just because Dick (or anyone else) was under the influence of drugs when he had his moment of illumination doesn’t diminish at all the literary value of his writings. As for their philosophical value, well, hang on, we’ll get there.
I thought Critchley was getting back onto a reasonable track (there are several good ideas scattered throughout his posts, but see my comment above about Derrida and Foucault) when he wrote: “The book is the most extraordinary and extended act of self-interpretation, a seemingly endless thinking on the event of 2-3-74 that always seems to begin anew. Often dull, repetitive and given to bouts of massive paranoia, Exegesis also possesses many passages of genuine brilliance and is marked by an utter and utterly disarming sincerity.” This is a well balanced assessment of something of value that still suffers from major deficiencies and that, remember, was not meant to be a work of fiction. As Dick himself put it, “My exegesis, then, is an attempt to understand my own understanding.”
Things become philosophically interesting when we learn from Critchley that Dick made heavy use of two large sources of information he had at his disposal while writing Exegesis: the Encyclopedia Britannica (15th edition) and Paul Edwards’s Encyclopedia of Philosophy. Dick paid particular attention to Plato, Spinoza, Hegel, Schopenhauer, and Heidegger, among others (the full list clearly leans toward continental sensibilities, uh oh!).
From part 2. This is where Critchley begins to veer seriously off the road, and where I felt increasingly uneasy, eventually deciding to write this commentary. He explores the possibility that Philip Dick can be read as a modern gnostic. The gnostics were an early Christian sect that held some pretty heretical (by Christian standards) ideas, including that knowledge was the way to salvation (ouch!) and that the world was created by a Demiurge — a sort of intermediary god responsible for evil.
Critchley tells us that Dick talks a lot, in his Exegesis, about the Logos (the Word), a concept that is not limited to Christianity, but goes back at least to Heraclitus. For Critchley, the core of Dick’s vision “is the mystical intellection, at its highest moment a fusion with a transmundane or alien God who is identified with logos and who can communicate with human beings in the form of a ray of light or, in Dick’s case, hallucinatory visions.” Okay. One could reasonably ask what this has to do with philosophy (as opposed to mysticism), but carry on.
Critchley again: “The novelty of Dick’s gnosticism is that the divine is alleged to communicate with us through information. This is a persistent theme in Dick, and he refers to the universe as information and even Christ as information.” I’m not even sure what this means. First off, of course X communicates with Y through information, how else would one communicate? Second, what constitutes information in this context? Third, and relatedly, what does it mean to say that Christ is information? And, most importantly, how the hell did Dick know any of this, as opposed to hallucinating it under the influence of drugs?
Critchley then gets into Dick’s “theory” of orthogonal time (please, shall we not be a bit more careful with the use of words? I mean, is this anything like, say, evolutionary theory? Quantum mechanics?). For Dick time contains “everything which was, just as grooves on an LP contain that part of the music which has already been played; they don’t disappear after the stylus tracks them.” Which leads Critchley to conclude that this is “like that seemingly endless final chord in the Beatles’ ‘A Day in the Life’ that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.” Oh crap. This is reasoning by poetic analogy, but not philosophy, as far as I can tell. And it is an analogy that is being deployed in order to explain a “theory” of time that is neither philosophical nor scientific.
I’m sorry but I have to quote Critchley extensively here, because I don’t want to be accused of paraphrasing him into a straw man: “[Dick] also claims that in orthogonal time the future falls back into and fulfills itself in the present. This is doubtless why Dick believed that his fiction was becoming truth, that the future was unfolding in his books. For example, if you think for a second about how the technologies of security in the contemporary world already seem to resemble the 2055 of ‘Minority Report’ more and more each day, maybe Dick has a point. Maybe he was writing the future.”
Or maybe he — like many other brilliant sci-fi writers — simply intuited one likely direction in which humanity may go, given the simple assumptions of a continued technological advancement and a pretty much constant (and base) human nature. And what on earth could it mean to say that Dick was writing the future? Was he creating it? Was he reading it in some sort of clairvoyant fashion? Or what??
Back to the (alleged) connection to gnosticism, Dick himself wrote: “So there is a secret within a secret. The Empire is a secret (its existence and its power; that it rules) and secondly the secret illegal Christians pitted against it. So the discovery of the secret illegal Christians instantly causes one to grasp that, if they exist illegally, something evil that is stronger is in power, right here!” Without rhyme or reason, Critchley interprets this mysterious “Empire” (of which, needless to say, there is no evidence) as the major corporations of the 21st century, ending his second essay with the paranoid-sounding: “[The secret is that] The world itself is a college of corporations linked together by money and serving only the interests of their business leaders and shareholders. The second secret — ‘a secret within a secret’ — belongs to those few who have swallowed the red pill, torn through the veil of Maya. In other words, they’ve seen the ‘matrix’ — a pop culture allusion that may lead us to some surprising, even alarming, contemporary implications of the gnostical worldview.” Wait, wait! Did the NYT readers get the reference to Maya? In what sense is the second secret inside the first one? How do we know any of the above? And perhaps most obviously, what was he (Critchley) smoking when he was writing this stuff? And is this really the way we want to present philosophy to the public? Ah, but there is more...
From part 3. In the final installment of the Dick series, Critchley is all over the place, and that place seldom has to do with anything written by Dick, as far as I can tell. Indeed, I suspect it has little to do with the actual theme of this Stone series, gnosticism (unless the term is taken to be so broad as to essentially lose meaning).
Critchley tells us that Dick’s take on the world was responsible for the so-called “dystopian” turn in science fiction that began in the 1960s (though, of course, dystopianism had been a very well established sub-genre all along; just think of H.G. Wells’s The Time Machine). This “turn” consisted in embracing the idea, as Critchley puts it, “that reality is a pernicious illusion, a repressive and authoritarian matrix generated in a dream factory we need to tear down in order to see things aright and have access to the truth.”
Fine, but then — again with very little apparent connection to either Dick or actual gnosticism — the author simply cherry-picks a number of examples from recent sci-fi movies, including, of course, The Matrix, the awful Melancholia by Lars von Trier, and Avatar. Avatar? Seriously? I would think that any Philip Dick fan would throw up at the mere thought of the comparison, but whatever.
Critchley is now on a roll, connecting more or less imaginary dots that go from Dick to Avatar to Rousseau and the Romantics, to traditional Christianity, to Goldman Sachs, to Nixon and Watergate, to Mitt Romney, and to Julianne Moore, to mention but a few. It’s a great example of continental philosophical writing, and I mean “great” in the most ironic sense possible.
Finally, Critchley asks us (rhetorically) what he seems to take as the crucial question to which his meanderings have inevitably led us: “what does one do in the face of a monistic all-consuming naturalism?” To which his answer is that there are two paths: either one embraces it and buys into whatever the latest prominent scientist says, or one rejects it in favor of “some version” of dualism.
What? Where? How? First off, it is entirely unclear what “monistic all-consuming naturalism” has to do with Dick and gnosticism. Second, the “irrepressible rise of a deterministic scientific worldview [that] threatens to invade and overtake all those areas of human activity that we associate with literature, culture, history, religion and the rest” is — depending on what exactly one means by that — either a very very recent push by ultra-atheists a la Dawkins-Harris-Rosenberg-(E.O.) Wilson-etc. or something that goes back at the very least to the Enlightenment. Oh, what the heck, perhaps it can be traced to the pre-Socratic atomists of ancient Greece, who knows. Lastly, what do these two choices (contrived, as even Critchley himself seems to admit at the very end of his third essay) have to do with philosopher Hans Jonas’ contention that the possessors of gnosis can either turn ascetic or libertine? And why bring in Jonas, other than because Critchley is Hans Jonas professor at the New School? Are you confused? Good, you should be.
I guess after this column I can kiss my chances goodbye of ever writing for The Stone (and they probably won’t pick any more Rationally Speaking entries as their suggested links, as they have done multiple times recently). But I had to point out that it is a shame that such a huge opportunity to bring philosophy to the general public through the leading newspaper on the planet is being wasted with silly posts on the morality of eating peas and obscure ramblings about gnosticism and the insights one gets about the world from heavy doses of sodium pentothal. Please bring back logic and arguments, and keep mysticism where it belongs, in the dustbin of intellectual discourse.
Wednesday, May 23, 2012
Massimo introduces some of the early philosophical approaches to this puzzle, and then he and Julia go over more recent scientific research on the issue (for example: does resisting temptation deplete your reserves of willpower, or does it strengthen your willpower "muscle"?).
They also examine possible solutions to the problem, including betting and precommitment, and online programs and mobile apps that can help.
Monday, May 21, 2012
Americans claim that education is one of their top concerns every time they are polled before an election. Yet most of them don’t take education seriously at all. This is obvious from a series of elementary facts: a large portion of them thinks that creationism should be taught side by side with evolution; the public education system — one of the fundamental backbones of any democracy — has been under relentless attack, and systematically demolished, for decades; and many American parents don’t really seem to think that “educator” is a real profession, since they are convinced that parents ought to dictate what is and is not taught to their kids, and how.
Enter New York Times columnist Thomas Friedman, who never misses a chance to write enthusiastically about new technologies and the flattening of the world economy — even when the technology in question is of doubtful use, or when it becomes clear that there are at least some down sides to globalization.
In his “Come the Revolution” piece, Friedman waxes poetical about Coursera, a new Silicon Valley-backed company that will take “the next step” in the daring world of online so-called education. Friedman approvingly quotes Andrew Ng, associate professor of (surprise!) computer science at Stanford: “‘I normally teach 400 students,’ Ng explained, but last semester he taught 100,000 in an online course on machine learning. ‘To reach that many students before,’ he said, ‘I would have had to teach my normal Stanford class for 250 years.’” Except, of course, that lecturing to 100,000 students has precious little to do with “teaching.” Teaching — at its best — is an interactive experience between a necessarily small number of students and someone who has both knowledge to impart and an effective way of imparting it.
Lecturing — which is all one can do with 100,000 students — is just about the worst way to teach ever devised. Its problems are well known in the pedagogical literature, which I guess would be too much to expect Friedman to have read before lecturing us on how to get a better, faster, cheaper education. To give one-way talks to an audience (which is what lecturing is about) is an effective way to communicate a large amount of information to a large number of people. But communication represents a small fraction of what teaching is about. Real teaching must include guided discussions, interactions among peers, and a great deal of exercises. The ideal model is that of the Renaissance workshop, where one learned from the Master and his best assistants, day by day. In modern education, this is what is done in the best graduate schools and when using the Montessori method. A far cry from what Friedman and Silicon Valley are proposing.
Showcasing his typically bombastic prose, Friedman boldly claims that “Big breakthroughs happen when what is suddenly possible meets what is desperately necessary. ... the need to provide low-cost, quality higher education is more acute than ever. ... getting a higher-education degree is more vital than ever.”
Right, except that low-cost and high-quality usually don’t go hand in hand, unfortunately, and particularly so when they involve unavoidable human labor (as opposed to mechanized labor). Yes, there is no doubt that the cost of higher education has exploded at an indecent rate over the past several decades, rising even faster than health care (the other thing Americans claim to be on their list of top priorities, but about which they keep doing next to nothing). This has generated an absurd situation in which total student debt in the US is actually higher than total credit card debt. But the reasons for this sorry state of affairs are multiple and complex — and therefore don’t fit into Friedman’s hyper-optimistic view of the future (I think that’s a general malady affecting techno-optimists, but that’s another discussion).
Contra popular perception (especially among state legislators), the problem hasn’t been caused by over-inflated salaries of university professors who go home at 2pm to mow their lawns. Rather, education has been commodified and made into a for-profit enterprise (even public education), just like a lot of other things that shouldn’t necessarily be so treated. University administrations have become bloated and more powerful and self-serving, offering stratospheric salaries to people with Wall Street-like management skills even when they have no idea of what they are managing or, apparently, why. Not to mention that many state schools nowadays are state schools in name only, since states throughout the country have been slashing their higher education budgets while keeping their political power over universities’ boards of trustees.
And let’s look for a minute at that “getting a higher-education degree is more vital than ever” bit. This is increasingly true only because pre-college education has largely become a joke (again, for a variety of reasons, among them the wrong type of parental involvement in what and how teachers teach, and — again — the dearth of proper resources, including well-trained teachers; yes, misguided union policies are also part of the mix, but only part). Let’s be frank, Friedman: a good number of college students should never have gotten to college, and they get out of it with a minimum ability in reading, comprehending and writing. But the answer to the problem isn’t to push for higher and higher degrees (a nation of PhDs, anyone?). It’s to go back to square one and restructure elementary, middle and high school education so that our kids can enter not just the workforce, but society at large, with the necessary critical thinking tools. Ah, but that ain’t gonna be done by simply putting a bunch of lectures on line, is it?
Friedman is so enraptured by his own Pindaric flights that he doesn’t seem to notice pretty glaring problems intrinsic to his dream scenario: “[Coursera will be] awarding certificates of completion of a course for under $100. (Sounds like a good deal. Tuition at the real-life Stanford is over $40,000 a year.)” Yeah, sounds like too much of a good deal, no? Does anyone really think that a Coursera certificate will represent anything like the sort of education that one gets at Stanford? (On the other side of the equation, does anyone really think that Stanford-type education is worth $40,000 a year?)
And here is another issue: “[Coursera] operates on the honor system but is building tools to reduce cheating.” The so-called honor system doesn’t work in an environment in which students are constantly told that they are in competition with the rest of the world, where the goal is not to further one’s education, but to beat the crap out of all your competitors — just like on Wall Street, or in a video game. At any rate, I have experience with teaching online courses and assigning online tests, and I can tell you that if you can devise a system to detect cheating, the cheating industry (yup, there is an industry for that, and it’s legal!) will design a way around your defenses, and so on in an escalating war that has increasingly less to do with anything resembling education. At any rate, I’d like to ask Prof. Ng what sort of assignments he gives to his 100,000 students. Certainly not long papers that he would have to carefully read and then thoughtfully comment on — that would really take him centuries. Unfortunately, the evidence does show that the only things that significantly improve student learning are courses that are reading and writing intensive. Oops.
Does Friedman see local, brick & mortar colleges heading for extinction? Not at all: “these top-quality learning platforms could enable budget-strained community colleges in America to ‘flip’ their classrooms. That is, download the world’s best lecturers on any subject and let their own professors concentrate on working face-to-face with students.” Setting aside the bizarre qualification of what Coursera and similar companies are doing as “top-quality,” Friedman here can be read in one of two ways: on the one hand, he may be suggesting that community colleges (but why not Stanford itself?) turn into glorified tutoring companies. I doubt that’s what he meant. On the other hand, he may be saying that the real business of colleges is to teach via personal, continuous, and varied interactions between students and teachers. That’s correct. We call that education, and it would help if people took it seriously.
Saturday, May 19, 2012
Philosophers often incur derisive sniggers at the idea that they can figure out the world “without leaving their armchair!” In some respects, I agree with that criticism. The idea of being able to reason a priori about the finitude of the universe or the existence of a god is pretty absurd.
But there is one tool in the philosophical toolkit that I absolutely believe allows you to learn about the world from the comfort of your favorite armchair: the thought experiment. In a regular experiment, you intervene in the world in some way, observe the results, and make an inference from those results about how the world works. Thought experiments use that same template, but the “data” you’re observing is the output of your own brain, and you use it to make inferences about the workings of a very specific part of the world: you.
This is one of the most useful tools I know for introspecting about my own motivations, values, emotional reactions, and preferences. I’ll start with a simple example, to illustrate the idea. Let’s say I’m considering applying to graduate school, but I’m reluctant, and I want to have more insight into the reasons for my reluctance. I might tell myself that it’s because I believe it wouldn’t be worth the time and difficulty, and that I didn’t expect to find a good enough job upon graduation to make that investment worth it. But I also suspect that I might be loath to apply because I’m afraid of getting rejected. How can I test which factor is really motivating me?
Well, what I would do in that case is home in on one variable that I hypothesize might be important — e.g., my fear of rejection — try varying it, and observe how that changes my outcome variable (motivation to apply to grad school). There are plenty of ways I could vary “fear of rejection.” For example, I could imagine myself applying and knowing, somehow, that I would get in to all the schools to which I applied. That’s not very realistic, though. It’s hard to convincingly imagine something that would never happen in real life, and in real life, we can’t just “know” what’s going to happen ahead of time with certainty. It’s much better, when possible, to make your thought experiments believable so that your reactions to them constitute reliable data.
So instead, the thought experiment I would do in that case is to ask myself: “Imagine that your top choice grad school department invited you to come visit, and after interviewing with the professors there, they were so impressed with you that the head of the department told you flat-out: We want you here and we’re just going to skip the admission process with you. Let us know by tomorrow if you want to join our program.” Would I, in that scenario, still feel hesitant about going to grad school? If so, that result gives a lot of credence to the hypothesis that my hesitation stems from doubts about whether grad school is a worthwhile investment for me. If not, that result gives more credence to the hypothesis that my hesitation stems from a fear of rejection.
Here’s a related example of using thought experiments to figure out my own motivations. Once I was in graduate school, I had to figure out whether I really wanted to be a professor. I had a few hypotheses about my motivations, but to keep things simple, let’s focus on two. Hypothesis 1: I want to be a professor because I predict that I will really enjoy research and teaching. Hypothesis 2: I want to be a professor because it’s prestigious and I like the idea of having a prestigious job.
What kind of thought experiment could I do to test Hypothesis 2? As in the last experiment, there are many ways I could vary the “prestige” variable. I could say, “Imagine being a professor wasn’t prestigious...” But that’s the kind of change that’s hard to imagine. (In general, thought experiments of the sort “Imagine that our world was different in the following way(s)...” produce less reliable data than those that begin, “Imagine that tomorrow, X happens...” In other words, counterfactuals are harder to imagine than likely future changes.)
So the thought experiment I did instead was to ask myself: “Imagine that you’re offered a job that allows you to do all the research and/or teaching you want, but it’s not officially a professorship. It’s... I don’t know, let’s say, a ‘Lecturer’ position or something else that doesn’t sound very prestigious. In other words, you’re not allowed to call yourself a professor. Now how much do you want the job?” And if my answer is “Eh, not much,” then that suggests Hypothesis 2 was on the mark.
In addition to investigating your motivations, you can also use thought experiments to investigate your emotional reactions to people or situations. For example, I was once planning a potential vacation but feeling kind of blah about it when I envisioned myself on the trip. But was that “blah” reaction an accurate simulation of how much I would enjoy the trip once I was on it? Or might it be that I was just tired or in a funk for some other reason and therefore unable to imagine myself having fun? The thought experiment I did in this case is somewhat different: I imagined myself doing something that I knew, from repeated past experience, I always enjoyed (a friend’s annual Halloween bash). The result: I couldn’t imagine myself enjoying that either. Which indicated to me that the original hypothesis (“I can’t simulate myself enjoying this trip because this trip isn’t the sort of thing I would enjoy”) was probably false, and that the true hypothesis was more likely to be: “I can’t simulate myself enjoying this trip because my simulation module is temporarily broken.”
I’ll close with one more example of investigating your emotional reactions with thought experiments. A while back my roommate offered me a cookie from the box she had just bought. I felt a (mild, fleeting) pulse of irritation at her offer, because I was on a diet and she knew it — wasn’t it insensitive of her to offer me something she knew I wasn’t allowed to eat? At least, that was the explanation my brain gave me for why I felt irritated. Then I did a thought experiment: what if she had not offered me the cookie? How would I have reacted then? To my surprise and amusement, when I simulated that scenario, I still felt mildly irritated: “Is she not offering me a cookie because she knows I’m on a diet? Huh, that seems kind of paternalistic of her. Can’t she let me make my own choices about what I eat??” The fact that I would have felt irritation no matter how my roommate had handled that situation indicates that the irritation had more to do with the situation itself (want cookies, can’t have ‘em) than with anything my roommate did.
Next time: Using thought experiments to explore your values.
Friday, May 18, 2012
* The United States loses over $71 billion per year in tax exemptions for religious groups, according to a new article in Free Inquiry magazine.
* The U.S. Conference of Catholic Bishops is investigating the Girl Scouts because it’s not sure the group is Catholic enough. No, really.
* Taxpayer-funded crisis pregnancy centers are using religion to oppose abortion, according to the American Independent.
* Marriage has always been a relationship between one man and one woman, you say? Then apparently you haven’t read the Bible.
* T.M. Luhrmann discusses how liberals and Democrats might be able to reach out to the evangelical community this election.
* Surprise! A House Republican has introduced a bill that would protect Planned Parenthood funding.
* Who – if anyone – is to blame when robots make mistakes? RedOrbit reports.
* For the third year in a row, polling shows that a narrow majority of Americans consider gay and lesbian relations morally acceptable.
Thursday, May 17, 2012
* John Horgan hopes to change Sam Harris’ mind about free will.
* Laurie Santos does fascinating work in the realm of monkey economics.
* New studies confirm that red meat is bad for you and chocolate is good for you. Not so fast, says Gary Taubes.
* Back in April 2000, Wired published a thought provoking piece by Bill Joy called “Why the Future Doesn’t Need Us.”
* In this entertaining TED talk, Joshua Foer talks about Feats of Memory Anyone Can Do.
Tuesday, May 15, 2012
My friend Benny (who produces the Rationally Speaking podcast) really hates the word “skepticism.” He understands and appreciates its meaning and long intellectual pedigree (heck, we even did a show on that!), but he also thinks — based on anecdotal evidence — that too many people apply a negative connotation to the term, often confusing it with cynicism. (And notice, to make things even more confusing, that neither modern term has the philosophical connotations that characterized the ancient skeptics and the ancient cynics!). On the contrary, I really like the word, and persist in using it in the positive sense adopted by David Hume (and, later, Carl Sagan): skepticism is a critical stance, especially toward notions that are either poorly supported by evidence or based on poor reasoning. As Hume famously put it, “A wise man ... proportions his belief to the evidence” (from which Carl Sagan’s famous “Extraordinary claims require extraordinary evidence”).
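One way to make Hume’s dictum precise, in modern Bayesian terms, is the odds form of Bayes’ theorem (a gloss of my own, not anything Hume or Sagan actually wrote down):

```latex
% Posterior odds = prior odds x likelihood ratio
\frac{P(H \mid E)}{P(\neg H \mid E)}
  \;=\;
\frac{P(H)}{P(\neg H)}
  \times
\frac{P(E \mid H)}{P(E \mid \neg H)}
```

On this reading, an “extraordinary claim” is one whose prior odds are tiny; for the posterior odds to become appreciable, the likelihood ratio — how much more probable the evidence is if the claim is true than if it is false — must be correspondingly extraordinary. Hume’s proportioning of belief to evidence, and Sagan’s slogan, both fall out of the same arithmetic.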
Now, why on earth would skeptics be associated with (the modern sense of) cynicism, an entirely negative attitude typical of people who take delight in criticism for the sake of criticism, negativity for the sake of negativity? I blame — at least in part — Francis Bacon. Let me explain.
Bacon was one of the earliest philosophers of science, and his main contribution was a book called The New Organon, in purposeful and daring contrast with Aristotle’s Organon. The latter is a collection of the ancient Greek’s works on logic, and essentially set down the parameters for science — such as it was — all the way to the onset of the scientific revolution in the 16th century. Bacon, however, would have none of Aristotle’s insistence on the superiority of deductive logic (which is, among other things, the basis of all mathematics). For Bacon, new knowledge is the result of reduction (explaining a complex phenomenon in terms of a simpler one) and induction (generalization from known cases). Bacon thought of his inductive method as having two components, which he called the pars destruens (the negative part) and the pars construens (the positive one). The first was concerned with eliminating — as far as possible — error, the second with the business of actually acquiring new knowledge.
It’s a nice idea, as long as one understands that the two partes are logically distinct and need not always come as a package (they did in Bacon’s treatise). Think of it in terms of the concept of division of cognitive labor in science. This is an idea famously discussed by Philip Kitcher, who explored the relevance of the social structure of science to its progress, arguing that such structure — once properly understood — can be improved upon to further the scientific enterprise. The basic idea, however, is familiar enough, even in everyday life: some people are good at X, others at Y, and we don’t ask everyone to be good at both, especially if X and Y are very different kinds of activities.
The same goes, I think, for Bacon’s partes destruens and construens: he may have pulled both off in the New Organon, but the more human knowledge progresses, the more it requires specialization. We have physicists and biologists, geologists and astronomers. Not only that: we have theoretical physicists and experimental ones, and even those are far too broad categories in the modern academy (e.g., theoretical atmospheric physics requires approaches that are very different from those deployed in, say, theoretical quantum mechanics). Why not, then, happily acknowledge that some people are better at constructing new knowledge (theoretical or empirical) and others at finding problems with what we think we know, or with how we currently proceed in attempting to know (Bacon’s correction of “errors”)? Indeed, this division of cognitive labor may even reflect different people’s temperaments, just like personal preference and style may lead one to pick a particular musical instrument rather than another one when playing in an orchestra (or to become a theoretical or experimental physicist, as the case may be).
What does any of the above have to do with the perception problem from which skepticism (allegedly) suffers? Well, skeptics (and, hum, philosophers!) are in the criticism business, and nobody likes to be criticized (including skeptics and philosophers). But, the common thinking goes, we should cut critics some slack only if they also propose ways forward, constructive solutions to the problems they identify. This attitude, I think, is mistaken. Criticism is valuable per se, as a way to engage our notions, show where they may go wrong, and help (other) people see ways forward. Criticism — pace Bacon — is inherently constructive, even when negative, because it allows us to make progress by identifying our errors and their causes. And it can be highly entertaining: just read a good (negative) movie, book or art review, or perhaps watch an episode of the (now ended) Bullshit! series.
This under-appreciated role of criticism, incidentally, may also be responsible (in part, i.e. egos and turf wars aside) for the continuing diatribes between philosophers and physicists, where too often the latter do not appreciate that the role of philosophy is a critical one, with the discipline making progress by eliminating mistaken notions rather than by discovering new facts (we’ve got science for the latter task, and it’s very good at it!).
So, my dear Benny and other fellow skeptics, let’s reclaim the term skepticism as one that encapsulates a fundamental attitude that all human beings interested in knowledge and truth should embrace: the idea that mistakes can be found and eliminated. It’s not at all a dirty job, and we are able and ready to do it.
Friday, May 11, 2012
Dateline: 1968. Cleve Backster, inventor of the polygraph, attaches lie detectors to some house plants and proceeds to yell at them. When his polygraphs register responses from the plants, Backster publishes a paper in the International Journal of Parapsychology (second only to “Weekly World News” in its academic rigor, I’d imagine) declaring that plants have perceptions and feelings. Thus do I have to waste at least fifteen minutes each semester explaining to students why plants don’t factor into utilitarian calculations. Thanks, Cleve.
Forty-four years later, the New York Times publishes an essay by philosopher Michael Marder in its “Stone” opinion column wherein the author insinuates that peas can ‘talk.’ Wonderful: there goes another fifteen minutes out of my virtue ethics lecture (1).
In the decade following its publication, Backster’s research into plant “primary perception” was very thoroughly (if not shockingly) debunked. Nevertheless, Marder helps himself to some suspiciously similar ideas towards the end of arguing against a clear moral distinction between eating plants and eating animals (2). The impetus for his argument is a recent finding (by Falik et al.) indicating that pea plants share stress-induced chemical signals through their root systems, thus triggering defensive responses in unstressed plants. Also, didn’t you see that adorable little peas-in-a-pod doll in Toy Story 3? Put down that can of pea soup, you monster.
I will admit that last part was a rhetorical flourish (Marder’s essay never mentions Toy Story 3, which of the two has the more plausible narrative). Rhetoric is a dangerous tool when used to ill effect, either intentionally or otherwise. It’s only fair, then, that we should look at some of the rhetoric at work in this latest round of pea-hugging.
Summarizing the original research, Marder describes plants as capable of “processing, remembering, and sharing information,” able to “draw on their ‘memories’” and engage in “basic learning.” Going back to the journal article to which he refers, we find that peas “eavesdrop” on their neighbors “in ways that have traditionally been attributed to higher organisms” (3). Can anyone be blamed for concluding, as Marder does, that “plants are more complex organisms than previously thought” in ways reminiscent of Backster’s “primary perception” research?
Honestly: no. That plants employ complex signaling systems and information storage mechanisms goes against many of our intuitions regarding the distinction between kingdoms Plantae and Animalia, from which derives Aristotle’s claim that animal telos includes sense perception where plant telos does not. This research is surprising in that respect, so it stands to reason that our intuitions should be adjusted in light of this surprising data.
Of course, the same idea was “unexpected” in 2008, when scientists at the National Center for Atmospheric Research found that walnut trees emit chemical signals that induce stress responses in their neighbors. It was also unexpected in 1990, when E. E. Farmer and C. A. Ryan found that tomato vines engage in “interplant communication.” It was probably also unexpected in 1935, when it was found that the concurrent ripening of all the fruit in a basket may be induced by a single fruit’s secretion of the gaseous aging hormone ethylene. This of course raises the question of why our intuitions need to be adjusted now, but didn’t eight decades ago.
As we say in this ‘biz, one man’s modus ponens is another man’s modus tollens (4). If chemical signaling in plants warrants re-evaluation of our moral attitudes towards plants, then such a re-evaluation would have been appropriate in 1935. But it wasn’t appropriate in 1935, so chemical signaling shouldn’t warrant any change in ethical attitudes now. Adam Kolber summarizes what I take to be the appropriate response to Marder’s moral argument here.
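To make the quip in (4) explicit (a schematic reconstruction of my own, not Kolber’s formulation): let P be “chemical signaling in plants warrants re-evaluating our moral attitudes toward them” and Q be “such a re-evaluation was appropriate in 1935.” Both sides accept the conditional; they differ on which remaining premise they endorse:

```latex
% Modus ponens (Marder's direction):  P -> Q,  P   entails  Q
% Modus tollens (the reply):          P -> Q, ~Q   entails ~P
\begin{align*}
\text{MP:} &\quad P \rightarrow Q,\; P \;\vdash\; Q \\
\text{MT:} &\quad P \rightarrow Q,\; \neg Q \;\vdash\; \neg P
\end{align*}
```

Marder, in effect, runs the argument in the ponens direction; denying Q, as I do, runs the very same conditional in the tollens direction instead.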
Still, the language used in Marder’s essay — to say nothing of the research paper that inspired it — is awfully suggestive, isn’t it? You wouldn’t want to eat something capable of basic learning and memory, would you? But that’s the real problem: plants aren’t really capable of any of those things. Only a crackpot (or an editor of the International Journal of Parapsychology, it seems) would suggest that peas actually talk or learn or remember in the sense that a human, or even a puppy, talks and learns and remembers (and, to be fair, Marder seems to admit as much). Chemical signaling in plants may resemble those activities in some important ways, and so we can use the terms “talk” and “remember” and “learn” to draw analogies with familiar concepts. The danger in drawing such analogies, and the fallacy in Marder’s moral argument, lies in overextending those analogies.
Every analogy has a breaking point. Life is like a box of chocolates in that it’s far more expensive for some people when Valentine’s Day rolls around; however, life is unlikely to have originated in Kansas City, Missouri. Similarly, animal communication and plant chemical signaling have in common the basic properties of signaling systems; however, those similarities only extend so far, and necessarily end where the animal nervous system comes into play.
Plants don’t have nerve cells, much less centralized clusters of those cells. Consequently, plants “talk” to each other only in the same strained way that we might claim that water communicates its deep-seated hatred to oil by keeping away from it. These are very basic chains of cause and effect determined entirely by fundamental physical properties of the signaler and the receiver. Once we throw a central nervous system in between the signaler and the receiver, things become a hell of a lot more complicated: we then have to consider questions of plasticity and cognition (5). And this is just for animals generally; we haven’t yet said anything about how fundamentally different animal cognition may be from human cognition. Any analogy between plant and animal communication must be very limited indeed.
I don’t mean to suggest that Marder or the authors of the original research paper are being disingenuous. We’re dealing with some high-falutin’ concepts here, and we only have so many words to work with. Referring to chemical signaling between pea root systems as “talking” may be strictly incorrect, but the reference does convey an essential property of the signaling system in a quick, close-enough sense. There’s certainly philosophical precedent for using everyday terms to refer to less-familiar scientific concepts.
But we have to be careful: science isn’t in the business of confirming our intuitions, and so our everyday terms may not be adequate to capture the weird, wild, wondrous things that science finds. It’s fascinating that peas can send warning signals to one another, but that doesn’t mean that those peas are talking, even if “talk” is the term that most easily describes what the peas are doing (6). Once we lose track of our linguistic limits, we fall prey to the dangers of rhetoric.
So don’t worry about peas. They don’t have any good feelings for you, but they’re certainly not talking about you behind your back, either.
(1) In developing his virtue theory, Aristotle employs the method of logical division in order to determine the unique human function. Rationality separates humans from other animals, but even before that, cognitive activity must separate animals from other living things, i.e., plants. Of all the shocks that I’ve received in my years of teaching, one of the greatest is my students’ consistent opposition to that latter claim. Hell, the Mythbusters even covered it!
(2) The point of this essay isn’t to argue for or against any particular dietary choice, but I will disclose my own preferences: I’ve recently adopted vegetarianism, much to the dismay of my hamburger-loving tastebuds. Nevertheless, I don’t see anything wrong with eating meat per se; instead, the problems I see are with the production and distribution of most meat and the epistemic complications that arise from finding the meat that is ethically produced and distributed. Since I’m not willing to raise and slaughter my own livestock, vegetarianism seems the safest way to sleep soundly at night.
(3) I live according to a number of principles, including the following: always cock your eyebrow at a modern biologist who uses the phrase “higher organisms.” You’re in trouble as a biologist if both Stephen Jay Gould and Richard Dawkins would write dismissive essays about your ideas.
(4) Some people envision philosophers as socially awkward nerds. Can you believe it?
(5) You might argue that even a central nervous system is entirely determined by fundamental physical properties, but our illustrious host might have some words for you if you do.
(6) Obligatory dinosaur reference: it’s awesome, in a “grade-school-level-hilarious” kind of way, that sauropod flatulence may have contributed to an increase in global temperatures 150 million years ago, but that increase in global temperatures isn’t the same thing as the climate change phenomena we face today.
Wednesday, May 09, 2012
I recently read Brian Hayes’ wonderful collection of mathematically oriented essays called Group Theory In The Bedroom, and Other Mathematical Diversions. Not surprisingly, the book contained plenty of philosophical musings too. In one of the essays, called “Clock of Ages,” Hayes describes the intricacies of clock building and he provides some interesting historical fodder.
For instance, we learn that in the sixteenth century Conrad Dasypodius, a Swiss mathematician, could have chosen to restore the old Clock of the Three Kings in Strasbourg Cathedral. Dasypodius, however, preferred to build a new clock of his own rather than maintain an old one. Over two centuries later, Jean-Baptiste Schwilgué was asked to repair the clock built by Dasypodius, but he decided to build a new and better clock, one that would last for 10,000 years.
Did you know that a large-scale project is underway to build another clock that will be able to run with minimal maintenance and interruption for ten millennia? It’s called The 10,000 Year Clock and its construction is sponsored by The Long Now Foundation. The 10,000 Year Clock is, however, being built for more than just its precision and durability. If the creators’ intentions are realized, then the clock will serve as a symbol to encourage long-term thinking about the needs and claims of future generations. Of course, if all goes to plan, our future descendants will be left to maintain it too. The interesting question is: will they want to?
If history is any indicator, then I think you know the answer. As Hayes puts it: “The fact is, winding and dusting and fixing somebody else’s old clock is boring. Building a brand-new clock of your own is much more fun, especially if you can pretend that it’s going to inspire awe and wonder for the ages to come. So why not have the fun now and let the future generations do the boring bit.” I think Hayes is right: it seems humans are, by nature, builders, not maintainers.
Projects like The 10,000 Year Clock are often undertaken with the noblest of environmental intentions, but the old proverb is relevant here: the road to hell is paved with good intentions. What I find troubling, then, is that much of the environmental do-goodery in the world may actually be making things worse. It’s often nothing more than a form of conspicuous consumption, a term coined by the economist and sociologist Thorstein Veblen. When it pertains specifically to “green” purchases, I like to call it being conspicuously environmental. Let’s use cars as an example. Obviously it depends on how the calculations are done, but in many instances keeping and maintaining an old clunker is more environmentally friendly than buying a new hybrid. I can’t help but think that the same must be true of building new clocks.
In his book, The Conundrum, David Owen writes: “How appealing would ‘green’ seem if it meant less innovation and fewer cool gadgets — not more?” Not very, although I suppose that was meant to be a rhetorical question. I enjoy cool gadgets as much as the next person, but it’s delusional to believe that conspicuous consumption is somehow a gift to the environment.
Using insights from evolutionary psychology and signaling theory, I think there is also another issue at play here. Buying conspicuously environmental goods, like a Prius, sends a signal to others that one cares about the environment. But if it’s truly the environment (and not signaling) that one is worried about, then surely less consumption must be better than more. Ironically, the homeless person has a smaller environmental impact than your average yuppie, yet he is rarely recognized as an environmental hero. Following this logic, I can’t help but conclude that killing yourself might just be the most environmentally friendly act of all time (if it wasn’t blatantly obvious, this is a joke). The lesson here is that we shouldn’t confuse smug signaling with actually helping.
The concern about conspicuous consumption, while entertaining, misses the larger epistemological issue though. If our climate does change significantly (and even if it changes from anthropogenic causes), how do we know that this is a bad thing? To assert that the climate is changing because of humans, and that this is therefore bad, is simply begging the question.
According to many scientists, it’s a fact that the earth’s climate is changing from human influences. Let me bring your attention to another important fact though, i.e., the earth’s climate has also varied wildly historically, without human influence. So we shouldn’t worry about climate change per se, but about the sort of climate change that potentially poses an incredibly dangerous threat to our (and the planet's) wellbeing.
It’s also worth noting that human evolution has not magically stagnated. In their book, The 10,000 Year Explosion, University of Utah anthropologists Gregory Cochran and Henry Harpending argue that human evolution “is now happening about 100 times faster than its long-term average over the six million years of our existence.” I’m not suggesting that we purposefully destroy our environment, but isn’t it possible that future generations of humans (or even trans-humans) will evolve and adapt to an earth with a changed climate? If we claim to know what kind of environment future generations want, I think we are guilty of a particularly egregious form of epistemic hubris. Let’s let them build their own clocks.
Tuesday, May 08, 2012
In this installment the topics include: how much do works of fiction affect people's rationality, Bayesian vs. frequentist statistics, what is evidence, how much blame do people deserve when their actions increase the chance of them being targeted, time travel, and whether a philosophically examined life is a better life.
Also, all about rationality in the movies, from Dr. Who to Scooby-Doo.
Saturday, May 05, 2012
[Part I of this series appeared here]
Today, we discuss types of radiation and their health effects. If you’re learning something, it’s usually a good idea to take several passes at the knowledge. On the first pass (engineers would prefer ‘noughth’), you get the 5 second summary: “fire is hot;” “life forms have a common ancestor;” “the brake pedal is on the left.” Then you can go back and fill in the gaps in your knowledge on second, third, etc. passes, with a popular, basic summary, followed perhaps by a textbook or a course.
If we’re talking about the health effects of ionizing radiation, the first-pass lesson is “radiation is dangerous — minimize exposure.” Well, that’s better than nothing. But if we take one more pass, we might be able to get to a more useful understanding. What kinds are most dangerous? How much is too much? How can we minimize exposure?
By learning a bit more, we can avoid both underreaction (due to unrecognized dangers) and overreaction (due to inappropriate fear). By analogy, imagine if nobody had any clue that gasoline was flammable, but we all ran screaming away from votive candles!
It’s hard to present this material in a nice logical order, because there are many interdependencies. So I’ll just start somewhere.
Types of ionizing radiation
To call radiation “ionizing” is to say that it has the capacity to create ions in materials it comes in contact with, typically by (directly or indirectly) ripping electrons off of atoms. There are three commonly encountered types of ionizing radiation, referred to as α (alpha), β (beta) and γ (gamma).
Alpha radiation occurs when an unstable nucleus decays and ejects a high-energy alpha particle. It turns out that alpha particles are simply Helium nuclei: 2 protons and 2 neutrons. Accordingly, when an element undergoes an alpha decay, the result is an element with two fewer protons and two fewer neutrons: for example, Plutonium-239 alpha-decays to Uranium-235.
Because it has a charge of +2, the alpha particle is highly ionizing and therefore quite dangerous. However, alphas have a very short range in air (a few centimeters) and are very easily blocked by, for example, a single piece of paper, or the dead outer layer of a person’s skin. Accordingly, external exposure to alpha particles is relatively safe — you could pick up a large sample of an alpha source such as plutonium with your bare hands and expect no ill effects. However, internal exposure via ingestion or inhalation of alpha-emitters is very dangerous, since inside our bodies there is no alpha-stopping barrier to protect our cells. You may recall the murder by poisoning of former FSB officer Alexander Litvinenko in London; he was killed by acute radiation sickness from ingestion of Polonium-210, a strong alpha emitter (see below for more on radiation sickness).
Beta decay comes in two varieties: β- and β+. These represent the ejection from the nucleus of an electron or positron (charge -1 or +1), respectively. β- decay turns a neutron into a proton; the electron must be ejected to maintain conservation of charge. Therefore, a β- decay leaves the atomic mass number unchanged, but moves the atom forward in the periodic table: for example, Thorium-231 undergoes a β- decay to Protactinium-231. For β+ decay, just the reverse is true.
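The (Z, A) bookkeeping behind these decay rules is simple enough to sketch in a few lines of code. The arithmetic below is standard nuclear physics; the printed examples are the ones from the text.

```python
# Track a nuclide as (Z, A) = (proton number, mass number).
# Alpha decay: lose 2 protons and 2 neutrons (A drops by 4).
# Beta-minus:  a neutron becomes a proton (Z up by 1, A unchanged).
# Beta-plus:   a proton becomes a neutron (Z down by 1, A unchanged).

def alpha_decay(Z, A):
    return Z - 2, A - 4

def beta_minus_decay(Z, A):
    return Z + 1, A

def beta_plus_decay(Z, A):
    return Z - 1, A

# Plutonium-239 (Z=94) alpha-decays to Uranium-235 (Z=92):
print(alpha_decay(94, 239))       # -> (92, 235)
# Thorium-231 (Z=90) beta-minus-decays to Protactinium-231 (Z=91):
print(beta_minus_decay(90, 231))  # -> (91, 231)
```

Note that the mass number is untouched by either beta decay, which is why Thorium-231 and Protactinium-231 share the same 231.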
Beta particles also ionize, though not as strongly as alphas. However, they do penetrate farther, and therefore represent a more serious health risk for external exposure. Betas have a typical range in air of 5-10 meters (depending on their energy) and can generally be blocked by, for example, a sheet of foil. Accordingly, they are not too difficult to shield oneself against externally, provided one is aware that they are present. A good example of a beta-emitter is Potassium-40, the largest source of natural radiation in human and animal bodies. Because Potassium-40 gives us an internal exposure, it is pretty much impossible to shield ourselves against it.
Gamma radiation is the most penetrating type of ionizing radiation, and represents high-energy photons — in other words, high-energy light. Gamma decays can be thought of as secondary decays, for when an alpha or beta decay occurs, the daughter nucleus is often left in an excited state of excess energy. This energy can be released by emission of a high-energy photon — a gamma.
These photons easily penetrate the entire human body, and can only be effectively stopped by a significant barrier, such as a thick block of lead (exactly how thick depends on their energy). Gammas occur naturally in cosmic rays, but they also show up in another guise — as X-rays. The differing terminology reflects their sources more than any qualitative difference: X-rays are from accelerating electrons, while gamma rays are from nuclear decay. Given the lack of meaningful distinction, I will refer to photons from both sources as gammas.
A good example of a gamma source is Caesium-137, an isotope widely used in medicine. Upon beta-decaying to an excited state of Barium-137, the nucleus releases a gamma ray, which ionizes the surrounding tissue. Because rapidly dividing cancer cells are more vulnerable to ionization damage than most healthy cells, gamma radiation from decaying Caesium-137 is used for radiation therapy.
Alpha, beta and gamma radiation are the three most important and common types of ionizing radiation, and they all have one characteristic in common that is worth mentioning: they do not cause further radioactivity in the materials that they affect. This fairly elementary point is worth bearing in mind when, for example, you see controversy surrounding the use of irradiated produce. The radiation is used to kill bacteria, but it definitely will not cause your head of lettuce to become radioactive, any more than shooting somebody will cause them to emit bullets. However, it may induce subtle chemical changes that some groups claim (without much evidence) could be harmful.
The exception to this rule is neutron radiation, which can indeed transmute one element into another (usually radioactive) one in a process known as neutron activation. However, neutron radiation is very rare outside nuclear reactor cores. Typically, the only people who need be concerned with the health effects of neutron radiation are workers in nuclear plants — and then only when they are exposed to a chain reaction in progress. Finally, proton emission occurs only in a handful of exotic, short-lived nuclides, and is essentially irrelevant to discussions of nuclear safety.
Fallout refers to the release into the environment, not of radiation per se, but of radioactive sources. Unlike alpha, beta and gamma radiation, fallout does indeed contaminate affected materials with radioactive elements. Fallout may be from two main sources: nuclear weapons detonations (not the topic of these posts), and certain types of nuclear accidents, such as those at Chernobyl and Fukushima Dai-ichi.
A future post will look at what happens in a nuclear reactor more closely, but for the sake of discussing fallout we will simply note that when an atom of fissile material such as Uranium-235 breaks up in a fission reaction, the main products are (a) lots of heat you can use to make steam to turn a turbine, and (b) two highly radioactive fission fragments (for example, Krypton-89 and Barium-144 — the exact elements vary). You may recall from the first post the important fact that any randomly chosen combination of protons and neutrons is almost certainly radioactive. This applies to fission fragments, which are, in effect, selected quasi-randomly from the space of nuclides. These fragments, almost always highly radioactive, begin themselves to decay, and so do their daughters, until (eventually) the decays reach a relatively stable nuclide. After a reactor has been running for some time, these fission fragments and their offspring are highly concentrated in the fuel rods and the cladding that shields them.
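As a sanity check on the example fission reaction, conservation of mass number and charge can be verified directly. This is pure arithmetic, not a physics simulation; the proton numbers are the standard ones for these elements.

```python
# U-235 absorbs a neutron, then splits: n + U-235 -> Kr-89 + Ba-144 + 3n
Z = {"U": 92, "Kr": 36, "Ba": 56}  # proton numbers of the elements involved

mass_in = 1 + 235                  # incoming neutron + Uranium-235
mass_out = 89 + 144 + 3 * 1        # Kr-89 + Ba-144 + 3 free neutrons
assert mass_in == mass_out == 236  # mass number is conserved

charge_in = Z["U"]                 # neutrons carry no charge
charge_out = Z["Kr"] + Z["Ba"]
assert charge_in == charge_out == 92  # charge is conserved
```

Any pair of fission fragments must balance the books this way, which is why the fragments are drawn quasi-randomly from the space of nuclides rather than freely.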
In the event of a reactor fire, as for example at Chernobyl, fission fragments may be released into the atmosphere in smoke, resulting in fallout contamination over a large area downwind of the fire. Two of the most worrisome fallout particles are Iodine-131 and Caesium-137, due mostly to their ease of absorption in the body. Upon entering the body, these nuclides will emit radiation (almost always beta and gamma), causing internal radiation exposure.
Why is ionizing radiation bad?
Ionization disrupts cell chemistry, with three important potential health outcomes:
- Radiation sickness (essentially large-scale cell death);
- Cancer (uncontrolled cell growth, due in part to mutation);
- Genetic abnormalities (in descendants, due in part to mutation).
Radiation sickness is associated with a single, large exposure to radiation in a short period, and its symptoms develop on a timescale of hours to months. In mild cases (doses of about 0.5-1.5 sieverts), it leads to symptoms such as nausea and a depressed white blood cell count (leukopenia). More severe exposures (2-4 sieverts) cause serious illness and some fatalities (usually due to destruction of bone marrow), and high doses (of 8 sieverts or greater) are almost always fatal. In cases of external exposure to radiation, severe skin damage and hair loss can result.
Broadly speaking, we can say that radiation sickness is caused by large-scale cell death, and that the severity of symptoms is a function of the body’s natural ability to cope with this cell death in a timely way. Doses below about 0.4 sieverts typically lead to no symptoms because the body is able to repair that quantity of damage without too much trouble, provided a person is otherwise healthy. However, higher doses increasingly overwhelm the body’s ability to repair all the damage — hence, the onset of more severe symptoms and death.
Radiation sickness is often referred to as a deterministic effect, as opposed to a stochastic effect (as in the case of cancer and genetic abnormalities). This is because the dose a person receives determines the severity of their symptoms, not merely the probability of their occurrence. If you gave 100 people a one-time radiation dose of 2 sieverts (more on what a sievert is below), you’d pretty reliably get 100 cases of severe radiation sickness.
By contrast, cancer and genetic abnormality risk is referred to as stochastic, because an increased dose of radiation (say, an extra millisievert per year) increases the probability that a cancer will be contracted or a mutation passed on — there is no guarantee of any effect at all.
Because the vast majority of cancers are caused by effects other than manmade ionizing radiation, and because there is no specific signature of radiation-induced cancers as opposed to all other cancers, it is usually extremely difficult if not impossible to say that a given dose caused a given cancer. The increased cancer risk due to radiation is an extremely weak signal in the data (exceptions sometimes occur; for example, thyroid cancer is characteristic of exposure to Iodine-131, and in those cases the statistical signal is much stronger). The exact relationship between doses and cancer risk is highly controversial, and is discussed below.
There are a lot of units that are used to talk about radiation, or have been used historically, but since our concern is primarily with radiation’s effect on human tissue, the most important of these is the unit of equivalent dose, the sievert (Sv). One typically talks in terms of microsieverts (μSv), or millionths of a sievert, and millisieverts (mSv), or thousandths of a sievert (older sources use the unit rem, where 1 Sv = 100 rem).
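Since the rest of this discussion jumps freely between sieverts, millisieverts, microsieverts and (in older sources) rem, here is the conversion arithmetic spelled out. The factors are the standard definitions; the example values are ones that appear later in the text.

```python
# Dose unit conversions: 1 Sv = 1,000 mSv = 1,000,000 uSv = 100 rem.
MSV_PER_SV = 1_000
USV_PER_SV = 1_000_000
REM_PER_SV = 100

def sv_to_msv(sv):
    return sv * MSV_PER_SV

def sv_to_usv(sv):
    return sv * USV_PER_SV

def rem_to_sv(rem):
    return rem / REM_PER_SV

print(sv_to_msv(0.0024))  # ~2.4 mSv: typical natural background per year
print(sv_to_usv(0.0004))  # ~400 uSv: average yearly dose from medical diagnostics
print(rem_to_sv(1))       # 1 rem = 0.01 Sv
```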
It is extremely helpful to get a feel for the range of doses resulting from various activities, foods, diagnostic procedures, living locations and lifestyles — and for their relation to (a) radiation sickness, and (b) cancer risk. For that purpose, I recommend perusing xkcd’s wonderful infographic, which visually communicates things much better than I can do verbally.*
Note a few important reference points (mostly taken from the UNSCEAR, via Bodansky, pg. 74):
- The vast majority of the cumulative dose (hence, cancer-generating dose) a person receives over a given year is from natural sources, about 2.4 mSv/year. This is mostly from Radon-222 gas, a decay product of natural Uranium and Thorium that is in the air we breathe.**
- The runner-up for cumulative dose is medical diagnostics, which averages out to about 400 μSv/year per person.
- The extra dose for the average citizen from the nuclear fuel cycle and nuclear accidents is — by comparison — minuscule: 2-4 μSv/year, or something like one part in 1000 of the total yearly dose.
- Radiation sickness does not generally occur below a threshold of around 500 mSv, but this is already a very large dose as compared with levels that might be received by the general public, even in the event of a fallout situation like at Chernobyl or Fukushima. Accordingly, radiation sickness is a concern for nuclear workers, emergency responders etc., but not typically for the general public, even in the vicinity of a nuclear accident.
- The lowest dose linked to detectably increased cancer risk is 100-200 mSv/year — also a relatively high dose.
Because the effects of large, sudden doses are well-understood, there is little controversy surrounding radiation sickness. However, the link between radiation doses and cancer is more obscure and controversial.
What we know with some certainty is that doses above 100 mSv are clearly linked to increased cancer risk, and that the higher the dose climbs above 100 mSv, the greater the risk. Epidemiological studies (for example, of the survivors of Hiroshima and Nagasaki) have led to differing conclusions, even when performed based on the same data. However, typical risk coefficients obtained from such studies are in the range of 0.05 to 0.10 per Sv — meaning, very roughly, an extra 5-10% chance of fatal cancer from each sievert of radiation absorbed.
The difficulty is that these risk coefficients are based on data from the very high doses typical of a nuclear bomb or severe nuclear accident. This is understandable, because in such cases, cancer risk becomes a stronger statistical signal. However, there is no clear data on what the risk coefficient might be at much lower doses, despite many inconclusive low-dose studies. Sure, 100 mSv increases your risk of fatal cancer by about 1%, but does 1 mSv increase it by 0.01%, as would be the case if the effect were linear (risk = coefficient x dose), and had no threshold (any dose, no matter how small, increases your risk)?
In practice, what has happened is that public health officials have assumed the linear no-threshold (LNT) model as true, largely because it is a conservative assumption on which to base policy. However, it is highly contested, with some suggesting a threshold relation (no bad effects below, say, 50 mSv), and others proposing beneficial effects at low doses (hormesis). I personally think the LNT model is pretty plausible (it fits with what I know about mutations — the Russian-roulette-with-lots-of-chambers model), but it is important to understand that it is just a working assumption. This is unfortunate, because it means our estimates of the number of people killed by cancers from, say, Chernobyl fallout have to carry truly massive error bars — somewhere between a few tens (thyroid cancers made these statistically detectable) and several thousand. However, because I think the LNT model is probably approximately right, and because, for us as well, it serves as a conservative assumption, let us provisionally adopt it.
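Under the LNT assumption, excess risk is just a straight line through the origin. The sketch below uses the upper end (0.10 per sievert) of the risk-coefficient range quoted earlier; it reproduces the figures used in the text, nothing more.

```python
# Linear no-threshold (LNT) sketch: excess fatal-cancer risk is assumed
# proportional to dose, with no safe threshold. The coefficient is the
# upper end of the 0.05-0.10 per sievert range quoted in the text.
RISK_PER_SV = 0.10

def excess_cancer_risk(dose_sv):
    """Excess probability of fatal cancer for a given dose, under LNT."""
    return RISK_PER_SV * dose_sv

# 100 mSv -> about a 1% excess risk, as stated in the text:
print(f"{excess_cancer_risk(0.100):.2%}")  # 1.00%
# 80 uSv (the evacuation example below) -> about 0.0008%:
print(f"{excess_cancer_risk(80e-6):.4%}")  # 0.0008%
```

The linearity is exactly what makes the model easy to apply to populations: risks simply add across people, which is where the person-sievert bookkeeping comes from.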
The linear-no-threshold assumption — that even very low doses increase cancer risk — generates some ‘paradoxes’ of utilitarian public policy. These are centered on the extreme disconnect between a population effect (like 1000 excess cancer deaths) that looks significant, versus a personal risk (like an extra 0.001% chance of getting a fatal cancer) that looks negligible. The population effect is usually calculated as a collective dose, in units of person-Sv. For example, if 20 people receive 0.1 Sv each, that’s a collective dose of 2 person-Sv.
Suppose some disaster is about to befall a medium-sized city, which will cause a uniform, one-time dose of 100 person-Sv spread over all 1.2 million citizens, or 80 μSv each (of course, it’s never quite this simple, but work with me). Should public officials temporarily evacuate the city?
Well, according to the linear no-threshold model, (100 person-Sv) x (0.10 excess fatal cancer risk/person-Sv) = ~10 excess cancer deaths. Those are real people who didn’t deserve to die, and assuming the evacuation doesn’t result in any deaths itself, they could be prevented by evacuating the city. A naive public official might do so.
On the other hand, if I analyze it from my perspective as a single citizen, 80 μSv corresponds roughly to the extra dose from a couple of high-altitude airline flights, and comes out to about a 0.0008% excess risk of fatal cancer, assuming LNT. Considering those odds, I would definitely choose to stay (assuming I knew the estimated dose to be accurate).
Now, consider further that certain areas have higher background radiation — Colorado being the textbook example. The extra dose from living in Colorado is about 400 μSv/year, due mostly to natural Uranium deposits. With a population of about 5 million, that’s around 2000 person-Sv/year, or (by LNT) an expected ~200 excess cancer deaths per year! (In fact, the cancer rate in Colorado is lower than the US national average — I’m not sure why. Lifestyle?)
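The collective-dose arithmetic in the two examples above is worth making explicit, since the whole ‘paradox’ lives in it. This reproduces the city-evacuation and Colorado numbers from the text under the same LNT coefficient of 0.10 per person-Sv.

```python
# Collective-dose arithmetic under LNT:
# expected excess deaths = risk coefficient x collective dose (person-Sv).
RISK_PER_PERSON_SV = 0.10

def expected_excess_deaths(collective_dose_person_sv):
    return RISK_PER_PERSON_SV * collective_dose_person_sv

# City example: 100 person-Sv spread over 1.2 million citizens.
city_collective = 100.0
print(expected_excess_deaths(city_collective))  # ~10 excess deaths
print(city_collective / 1.2e6)                  # ~8.3e-05 Sv (~80 uSv) per person

# Colorado example: an extra 400 uSv/year for ~5 million people.
colorado_collective = 400e-6 * 5e6              # ~2000 person-Sv per year
print(expected_excess_deaths(colorado_collective))  # ~200 excess deaths per year
```

The disconnect is plain: the same numbers read as “hundreds of deaths” at the population level and as a per-person risk far too small for any individual to act on.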
Obviously, nobody is proposing evacuating Colorado. Barely anybody has even heard that Colorado has a higher background level. And yet Three Mile Island, the most infamous of North American nuclear accidents, is estimated by the relevant authorities to have released a collective dose of around 20 person-Sv, total.*** We will talk more about TMI in a later post on nuclear accidents.
It is not my purpose to trivialize these issues. All of these numbers represent real (although mostly unidentifiable) people who (probably) died instead of living.****
However, when considering the health effects of radiation from the nuclear industry, you need to be damned sure you’re at least being internally consistent (perhaps by doing some rough math), and remember the ugly but important fact that everything kills people. Driving kills people. Molasses kills people. Owning kittens kills people. Nuclear power kills... not many people, to put it mildly.
So if you’re willing to commute 10000 km/year instead of taking the bus, blithely accepting the additional ~0.008% probability of death that that implies, plus the other drawbacks such as pollution, then all other things being equal it just doesn’t make sense to grandstand about the dangers of a garden-variety nuclear plant — unless driving is literally infinitely more fun than having power for your iPad and your grandma’s respirator.
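The commuting figure can be back-checked with one assumption: a road fatality rate on the order of 8 deaths per billion kilometers driven, which is roughly the right magnitude for developed countries but is my illustrative number, not one from the text.

```python
# Rough back-check of the ~0.008% commuting-death figure.
# The fatality rate is an assumed order-of-magnitude value,
# not a number given in the original text.
DEATHS_PER_KM = 8e-9   # assumption: ~8 road deaths per billion km driven
km_per_year = 10_000

annual_death_risk = DEATHS_PER_KM * km_per_year
print(f"{annual_death_risk:.3%}")  # 0.008%
```

That risk is about ten times the LNT-estimated cancer risk of the 80 μSv dose in the evacuation example, which is the internal-consistency point being made.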
It is especially insane if you do not simultaneously grandstand much more loudly about a lot of other things, like the coal plant that will, with near inevitability, replace the base load your nuclear plant would have generated. Even ignoring the other health effects of coal plants, they release at least as much radioactive material as nuclear plants, and usually more (fly ash contains radioisotopes like Uranium and Thorium).
...Boy, am I ever getting ahead of myself. A future post will discuss death rates per unit energy for the various power generation methods, as one useful figure of merit. But next time, we discuss how nuclear reactors work.
* However, the well-known ‘banana equivalent dose’ mentioned here is disputed, and probably a fair bit less than 0.1 μSv. This is a bit of an old chestnut and people promoting nuclear power really ought to stop quoting it.
** Some sources, especially anti-nuclear ones, cite a lower (wrong) figure of 1 mSv/year. This is based on ignoring the effects of Radon-222, apparently in order to make doses due to nuclear power seem larger in comparison.
*** However, this is disputed by anti-nuclear folks; see e.g., Caldicott, p. 65. Several claim that doses were high enough to cause radiation sickness in multiple victims. See Wing, who lists the sources.
**** One major problem that I have with popular works critical of the nuclear industry, particularly Caldicott’s book, is the absence of relevant qualifiers about probability. For example, on pg. 61, Caldicott says of Plutonium that it “is so toxic and carcinogenic that less than one-millionth of a gram if inhaled will cause lung cancer.” Of course, a statistically sophisticated reader will balk at the “will” in that sentence, such an unqualified intimation of certain death, even if they know nothing about Plutonium. But many of Caldicott’s readers likely swallow this whole. In fact, according to a paper from Lawrence Livermore National Laboratory, the actual excess fatal cancer risk resulting from inhaling 1 μg (quite a bit of Plutonium) is about 1.2%.
* David Bodansky, “Nuclear Energy: Principles, Practices and Prospects.” 2004, Springer.
* John R. Lamarsh & Anthony J. Baratta, “Introduction to Nuclear Engineering.” 2001, Prentice Hall.
* Helen Caldicott, “Nuclear Power is Not the Answer.” 2006, Westchester Book Group.
* A useful source on the relation of dose to cancer risk (note: 1 Sv = 100 rem).
* On coal vs nukes.